Oct 8 20:12:14.924866 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:24:27 -00 2024 Oct 8 20:12:14.924888 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5 Oct 8 20:12:14.924896 kernel: BIOS-provided physical RAM map: Oct 8 20:12:14.924901 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 8 20:12:14.924906 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 8 20:12:14.924911 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 8 20:12:14.924917 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable Oct 8 20:12:14.924923 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved Oct 8 20:12:14.924930 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 8 20:12:14.924935 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Oct 8 20:12:14.924940 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 8 20:12:14.924945 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 8 20:12:14.924950 kernel: NX (Execute Disable) protection: active Oct 8 20:12:14.924956 kernel: APIC: Static calls initialized Oct 8 20:12:14.924964 kernel: SMBIOS 2.8 present. Oct 8 20:12:14.924970 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017 Oct 8 20:12:14.924976 kernel: Hypervisor detected: KVM Oct 8 20:12:14.924981 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 8 20:12:14.924986 kernel: kvm-clock: using sched offset of 2733843273 cycles Oct 8 20:12:14.924992 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 8 20:12:14.924998 kernel: tsc: Detected 2445.404 MHz processor Oct 8 20:12:14.925004 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 8 20:12:14.925010 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 8 20:12:14.925018 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000 Oct 8 20:12:14.925023 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Oct 8 20:12:14.925029 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 8 20:12:14.925035 kernel: Using GB pages for direct mapping Oct 8 20:12:14.925040 kernel: ACPI: Early table checksum verification disabled Oct 8 20:12:14.925046 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS ) Oct 8 20:12:14.925051 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:12:14.925057 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:12:14.925063 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:12:14.925070 kernel: ACPI: FACS 0x000000007CFE0000 000040 Oct 8 20:12:14.925076 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:12:14.925081 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:12:14.925087 kernel: ACPI: 
MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:12:14.925093 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:12:14.925098 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540] Oct 8 20:12:14.925104 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c] Oct 8 20:12:14.925110 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f] Oct 8 20:12:14.925121 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0] Oct 8 20:12:14.925127 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8] Oct 8 20:12:14.925133 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634] Oct 8 20:12:14.925165 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c] Oct 8 20:12:14.925172 kernel: No NUMA configuration found Oct 8 20:12:14.925179 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff] Oct 8 20:12:14.925188 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff] Oct 8 20:12:14.925194 kernel: Zone ranges: Oct 8 20:12:14.925200 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 8 20:12:14.925206 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff] Oct 8 20:12:14.925211 kernel: Normal empty Oct 8 20:12:14.925217 kernel: Movable zone start for each node Oct 8 20:12:14.925223 kernel: Early memory node ranges Oct 8 20:12:14.925229 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 8 20:12:14.925235 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff] Oct 8 20:12:14.925241 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff] Oct 8 20:12:14.925249 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 8 20:12:14.925255 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 8 20:12:14.925261 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Oct 8 20:12:14.925267 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 8 20:12:14.925273 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 8 20:12:14.925278 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 8 20:12:14.925284 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 8 20:12:14.925291 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 8 20:12:14.925297 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 8 20:12:14.925305 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 8 20:12:14.925311 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 8 20:12:14.925316 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 8 20:12:14.925322 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 8 20:12:14.925328 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Oct 8 20:12:14.925334 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 8 20:12:14.925340 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Oct 8 20:12:14.925346 kernel: Booting paravirtualized kernel on KVM Oct 8 20:12:14.925352 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 8 20:12:14.925360 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Oct 8 20:12:14.925366 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Oct 8 20:12:14.925372 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Oct 8 20:12:14.925378 kernel: 
pcpu-alloc: [0] 0 1 Oct 8 20:12:14.925384 kernel: kvm-guest: PV spinlocks disabled, no host support Oct 8 20:12:14.925391 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5 Oct 8 20:12:14.925397 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 8 20:12:14.925403 kernel: random: crng init done Oct 8 20:12:14.925411 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 8 20:12:14.925417 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 8 20:12:14.925423 kernel: Fallback order for Node 0: 0 Oct 8 20:12:14.925429 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708 Oct 8 20:12:14.925435 kernel: Policy zone: DMA32 Oct 8 20:12:14.925441 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 8 20:12:14.925447 kernel: Memory: 1922056K/2047464K available (12288K kernel code, 2305K rwdata, 22716K rodata, 42828K init, 2360K bss, 125148K reserved, 0K cma-reserved) Oct 8 20:12:14.925453 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 8 20:12:14.925459 kernel: ftrace: allocating 37784 entries in 148 pages Oct 8 20:12:14.925467 kernel: ftrace: allocated 148 pages with 3 groups Oct 8 20:12:14.925473 kernel: Dynamic Preempt: voluntary Oct 8 20:12:14.925479 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 8 20:12:14.925488 kernel: rcu: RCU event tracing is enabled. Oct 8 20:12:14.925495 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 8 20:12:14.925501 kernel: Trampoline variant of Tasks RCU enabled. Oct 8 20:12:14.925507 kernel: Rude variant of Tasks RCU enabled. Oct 8 20:12:14.925513 kernel: Tracing variant of Tasks RCU enabled. Oct 8 20:12:14.925519 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 8 20:12:14.925525 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 8 20:12:14.925560 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Oct 8 20:12:14.925567 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 8 20:12:14.925573 kernel: Console: colour VGA+ 80x25 Oct 8 20:12:14.925579 kernel: printk: console [tty0] enabled Oct 8 20:12:14.925585 kernel: printk: console [ttyS0] enabled Oct 8 20:12:14.925591 kernel: ACPI: Core revision 20230628 Oct 8 20:12:14.925597 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 8 20:12:14.925603 kernel: APIC: Switch to symmetric I/O mode setup Oct 8 20:12:14.925609 kernel: x2apic enabled Oct 8 20:12:14.925618 kernel: APIC: Switched APIC routing to: physical x2apic Oct 8 20:12:14.925624 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 8 20:12:14.925630 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Oct 8 20:12:14.925636 kernel: Calibrating delay loop (skipped) preset value.. 
4890.80 BogoMIPS (lpj=2445404) Oct 8 20:12:14.925642 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 8 20:12:14.925648 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 8 20:12:14.925654 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 8 20:12:14.925660 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 8 20:12:14.925675 kernel: Spectre V2 : Mitigation: Retpolines Oct 8 20:12:14.925681 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 8 20:12:14.925687 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 8 20:12:14.925696 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 8 20:12:14.925702 kernel: RETBleed: Mitigation: untrained return thunk Oct 8 20:12:14.925708 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 8 20:12:14.925714 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 8 20:12:14.925721 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Oct 8 20:12:14.925727 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Oct 8 20:12:14.925734 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Oct 8 20:12:14.925740 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 8 20:12:14.925748 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 8 20:12:14.925755 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 8 20:12:14.925761 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 8 20:12:14.925767 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Oct 8 20:12:14.925773 kernel: Freeing SMP alternatives memory: 32K Oct 8 20:12:14.925781 kernel: pid_max: default: 32768 minimum: 301 Oct 8 20:12:14.925787 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Oct 8 20:12:14.925793 kernel: landlock: Up and running. Oct 8 20:12:14.925800 kernel: SELinux: Initializing. Oct 8 20:12:14.925806 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 8 20:12:14.925812 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 8 20:12:14.925818 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 8 20:12:14.925825 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 8 20:12:14.925831 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 8 20:12:14.925839 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 8 20:12:14.925845 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 8 20:12:14.925851 kernel: ... version: 0 Oct 8 20:12:14.925858 kernel: ... bit width: 48 Oct 8 20:12:14.925864 kernel: ... generic registers: 6 Oct 8 20:12:14.925870 kernel: ... value mask: 0000ffffffffffff Oct 8 20:12:14.925876 kernel: ... max period: 00007fffffffffff Oct 8 20:12:14.925882 kernel: ... fixed-purpose events: 0 Oct 8 20:12:14.925888 kernel: ... event mask: 000000000000003f Oct 8 20:12:14.925896 kernel: signal: max sigframe size: 1776 Oct 8 20:12:14.925903 kernel: rcu: Hierarchical SRCU implementation. Oct 8 20:12:14.925909 kernel: rcu: Max phase no-delay instances is 400. 
Oct 8 20:12:14.925915 kernel: smp: Bringing up secondary CPUs ... Oct 8 20:12:14.925921 kernel: smpboot: x86: Booting SMP configuration: Oct 8 20:12:14.925928 kernel: .... node #0, CPUs: #1 Oct 8 20:12:14.925934 kernel: smp: Brought up 1 node, 2 CPUs Oct 8 20:12:14.925940 kernel: smpboot: Max logical packages: 1 Oct 8 20:12:14.925946 kernel: smpboot: Total of 2 processors activated (9781.61 BogoMIPS) Oct 8 20:12:14.925952 kernel: devtmpfs: initialized Oct 8 20:12:14.925961 kernel: x86/mm: Memory block size: 128MB Oct 8 20:12:14.925967 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 8 20:12:14.925973 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 8 20:12:14.925979 kernel: pinctrl core: initialized pinctrl subsystem Oct 8 20:12:14.925985 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 8 20:12:14.925991 kernel: audit: initializing netlink subsys (disabled) Oct 8 20:12:14.925998 kernel: audit: type=2000 audit(1728418334.347:1): state=initialized audit_enabled=0 res=1 Oct 8 20:12:14.926004 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 8 20:12:14.926010 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 8 20:12:14.926018 kernel: cpuidle: using governor menu Oct 8 20:12:14.926024 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 8 20:12:14.926030 kernel: dca service started, version 1.12.1 Oct 8 20:12:14.926037 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Oct 8 20:12:14.926043 kernel: PCI: Using configuration type 1 for base access Oct 8 20:12:14.926049 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Oct 8 20:12:14.926056 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 8 20:12:14.926062 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 8 20:12:14.926068 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 8 20:12:14.926077 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 8 20:12:14.926083 kernel: ACPI: Added _OSI(Module Device) Oct 8 20:12:14.926089 kernel: ACPI: Added _OSI(Processor Device) Oct 8 20:12:14.926095 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 8 20:12:14.926101 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 8 20:12:14.926107 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 8 20:12:14.926114 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Oct 8 20:12:14.926120 kernel: ACPI: Interpreter enabled Oct 8 20:12:14.926126 kernel: ACPI: PM: (supports S0 S5) Oct 8 20:12:14.926134 kernel: ACPI: Using IOAPIC for interrupt routing Oct 8 20:12:14.926156 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 8 20:12:14.926163 kernel: PCI: Using E820 reservations for host bridge windows Oct 8 20:12:14.926180 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 8 20:12:14.926207 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 8 20:12:14.926379 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 8 20:12:14.926498 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 8 20:12:14.926663 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 8 20:12:14.926684 kernel: PCI host bridge to bus 0000:00 Oct 8 20:12:14.926828 kernel: pci_bus 0000:00: root bus resource [io 
0x0000-0x0cf7 window] Oct 8 20:12:14.926930 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 8 20:12:14.927034 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 8 20:12:14.927136 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window] Oct 8 20:12:14.927272 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 8 20:12:14.927376 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Oct 8 20:12:14.927475 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 8 20:12:14.927616 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Oct 8 20:12:14.927733 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 Oct 8 20:12:14.927838 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref] Oct 8 20:12:14.927977 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref] Oct 8 20:12:14.928167 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff] Oct 8 20:12:14.928288 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref] Oct 8 20:12:14.928403 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 8 20:12:14.928520 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Oct 8 20:12:14.928626 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff] Oct 8 20:12:14.928738 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Oct 8 20:12:14.928842 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff] Oct 8 20:12:14.928958 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Oct 8 20:12:14.929062 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff] Oct 8 20:12:14.929486 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Oct 8 20:12:14.930013 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff] Oct 8 20:12:14.930136 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Oct 8 20:12:14.930303 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff] Oct 8 20:12:14.930423 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Oct 8 20:12:14.930526 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff] Oct 8 20:12:14.930651 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Oct 8 20:12:14.930757 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff] Oct 8 20:12:14.930870 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Oct 8 20:12:14.930975 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff] Oct 8 20:12:14.931091 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Oct 8 20:12:14.931242 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff] Oct 8 20:12:14.932263 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Oct 8 20:12:14.932390 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 8 20:12:14.932542 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Oct 8 20:12:14.932651 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f] Oct 8 20:12:14.932756 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff] Oct 8 20:12:14.932875 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Oct 8 20:12:14.933002 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Oct 8 20:12:14.933204 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Oct 8 20:12:14.933381 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff] Oct 8 20:12:14.933502 kernel: pci 
0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Oct 8 20:12:14.933640 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref] Oct 8 20:12:14.933770 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Oct 8 20:12:14.933908 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Oct 8 20:12:14.934031 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Oct 8 20:12:14.937257 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Oct 8 20:12:14.937391 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit] Oct 8 20:12:14.937506 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Oct 8 20:12:14.937632 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Oct 8 20:12:14.937817 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Oct 8 20:12:14.937965 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Oct 8 20:12:14.938114 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff] Oct 8 20:12:14.938300 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref] Oct 8 20:12:14.938412 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Oct 8 20:12:14.938542 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Oct 8 20:12:14.938677 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Oct 8 20:12:14.938863 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Oct 8 20:12:14.939059 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Oct 8 20:12:14.939508 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Oct 8 20:12:14.939641 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Oct 8 20:12:14.939771 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Oct 8 20:12:14.939901 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Oct 8 20:12:14.940064 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref] Oct 8 20:12:14.940232 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Oct 8 20:12:14.940404 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Oct 8 20:12:14.940606 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Oct 8 20:12:14.940739 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Oct 8 20:12:14.940872 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff] Oct 8 20:12:14.942260 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref] Oct 8 20:12:14.942402 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Oct 8 20:12:14.942528 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Oct 8 20:12:14.942650 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Oct 8 20:12:14.942667 kernel: acpiphp: Slot [0] registered Oct 8 20:12:14.942795 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Oct 8 20:12:14.942906 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff] Oct 8 20:12:14.943039 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref] Oct 8 20:12:14.944278 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref] Oct 8 20:12:14.944549 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Oct 8 20:12:14.944757 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Oct 8 20:12:14.944925 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Oct 8 20:12:14.944944 kernel: acpiphp: Slot [0-2] registered Oct 8 20:12:14.945156 
kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Oct 8 20:12:14.946355 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Oct 8 20:12:14.946499 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Oct 8 20:12:14.946511 kernel: acpiphp: Slot [0-3] registered Oct 8 20:12:14.946661 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Oct 8 20:12:14.946787 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Oct 8 20:12:14.946893 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Oct 8 20:12:14.946904 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 8 20:12:14.946911 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 8 20:12:14.946917 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 8 20:12:14.946924 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 8 20:12:14.946930 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 8 20:12:14.946936 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 8 20:12:14.946942 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 8 20:12:14.946952 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 8 20:12:14.946958 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 8 20:12:14.946965 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 8 20:12:14.946971 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 8 20:12:14.946977 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 8 20:12:14.946984 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 8 20:12:14.946990 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 8 20:12:14.946996 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 8 20:12:14.947003 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 8 20:12:14.947011 kernel: iommu: Default domain type: Translated Oct 8 20:12:14.947018 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 8 20:12:14.947025 kernel: PCI: Using ACPI for IRQ routing Oct 8 20:12:14.947031 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 8 20:12:14.947037 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 8 20:12:14.947044 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff] Oct 8 20:12:14.948210 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 8 20:12:14.948334 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 8 20:12:14.948482 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 8 20:12:14.948498 kernel: vgaarb: loaded Oct 8 20:12:14.948505 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 8 20:12:14.948512 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 8 20:12:14.948518 kernel: clocksource: Switched to clocksource kvm-clock Oct 8 20:12:14.948525 kernel: VFS: Disk quotas dquot_6.6.0 Oct 8 20:12:14.948535 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 8 20:12:14.948548 kernel: pnp: PnP ACPI init Oct 8 20:12:14.948679 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Oct 8 20:12:14.948695 kernel: pnp: PnP ACPI: found 5 devices Oct 8 20:12:14.948701 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 8 20:12:14.948708 kernel: NET: Registered PF_INET protocol family Oct 8 20:12:14.948720 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 8 
20:12:14.948732 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 8 20:12:14.948743 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 8 20:12:14.948749 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 8 20:12:14.948756 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Oct 8 20:12:14.948762 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 8 20:12:14.948774 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 8 20:12:14.948788 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 8 20:12:14.948802 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 8 20:12:14.948809 kernel: NET: Registered PF_XDP protocol family Oct 8 20:12:14.948967 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Oct 8 20:12:14.949096 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Oct 8 20:12:14.949224 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Oct 8 20:12:14.949338 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff] Oct 8 20:12:14.949446 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff] Oct 8 20:12:14.949589 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff] Oct 8 20:12:14.949722 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Oct 8 20:12:14.949843 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Oct 8 20:12:14.949965 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Oct 8 20:12:14.950076 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Oct 8 20:12:14.952267 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Oct 8 20:12:14.952408 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Oct 8 20:12:14.952558 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Oct 8 20:12:14.952683 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Oct 8 20:12:14.952791 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Oct 8 20:12:14.952897 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Oct 8 20:12:14.953012 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Oct 8 20:12:14.954209 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Oct 8 20:12:14.954337 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Oct 8 20:12:14.954491 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Oct 8 20:12:14.954604 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Oct 8 20:12:14.954740 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Oct 8 20:12:14.954856 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Oct 8 20:12:14.954975 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Oct 8 20:12:14.955102 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Oct 8 20:12:14.956248 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] Oct 8 20:12:14.956385 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Oct 8 20:12:14.956528 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Oct 8 20:12:14.956641 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Oct 8 20:12:14.956744 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] Oct 8 20:12:14.956865 kernel: pci 0000:00:02.7: bridge window [mem 
0xfda00000-0xfdbfffff] Oct 8 20:12:14.956979 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Oct 8 20:12:14.957096 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Oct 8 20:12:14.959237 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] Oct 8 20:12:14.959348 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Oct 8 20:12:14.959458 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Oct 8 20:12:14.959559 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 8 20:12:14.959654 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 8 20:12:14.959748 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 8 20:12:14.959846 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window] Oct 8 20:12:14.959941 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Oct 8 20:12:14.960035 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Oct 8 20:12:14.960185 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff] Oct 8 20:12:14.960308 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref] Oct 8 20:12:14.960418 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff] Oct 8 20:12:14.960523 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Oct 8 20:12:14.960629 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff] Oct 8 20:12:14.960727 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Oct 8 20:12:14.960833 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff] Oct 8 20:12:14.960931 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Oct 8 20:12:14.961037 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff] Oct 8 20:12:14.961155 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Oct 8 20:12:14.961268 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff] Oct 8 20:12:14.961368 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Oct 8 20:12:14.961480 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Oct 8 20:12:14.961581 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff] Oct 8 20:12:14.961680 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Oct 8 20:12:14.961790 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Oct 8 20:12:14.961895 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff] Oct 8 20:12:14.961994 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Oct 8 20:12:14.962102 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] Oct 8 20:12:14.964239 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff] Oct 8 20:12:14.964344 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Oct 8 20:12:14.964355 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 8 20:12:14.964362 kernel: PCI: CLS 0 bytes, default 64 Oct 8 20:12:14.964373 kernel: Initialise system trusted keyrings Oct 8 20:12:14.964381 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 8 20:12:14.964388 kernel: Key type asymmetric registered Oct 8 20:12:14.964395 kernel: Asymmetric key parser 'x509' registered Oct 8 20:12:14.964402 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Oct 8 20:12:14.964408 kernel: io scheduler mq-deadline registered Oct 8 20:12:14.964415 kernel: io scheduler kyber registered Oct 8 20:12:14.964422 kernel: io 
scheduler bfq registered Oct 8 20:12:14.964528 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Oct 8 20:12:14.964637 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Oct 8 20:12:14.964741 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Oct 8 20:12:14.964844 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Oct 8 20:12:14.964947 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Oct 8 20:12:14.965049 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Oct 8 20:12:14.967177 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Oct 8 20:12:14.967299 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Oct 8 20:12:14.967406 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Oct 8 20:12:14.967509 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Oct 8 20:12:14.967617 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Oct 8 20:12:14.967719 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Oct 8 20:12:14.967822 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Oct 8 20:12:14.967924 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Oct 8 20:12:14.968027 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Oct 8 20:12:14.968130 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Oct 8 20:12:14.968155 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 8 20:12:14.968263 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Oct 8 20:12:14.968373 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Oct 8 20:12:14.968382 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 8 20:12:14.968390 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Oct 8 20:12:14.968397 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 8 20:12:14.968404 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 8 20:12:14.968411 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 8 20:12:14.968420 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 8 20:12:14.968427 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 8 20:12:14.968542 kernel: rtc_cmos 00:03: RTC can wake from S4 Oct 8 20:12:14.968556 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 8 20:12:14.968655 kernel: rtc_cmos 00:03: registered as rtc0 Oct 8 20:12:14.968753 kernel: rtc_cmos 00:03: setting system clock to 2024-10-08T20:12:14 UTC (1728418334) Oct 8 20:12:14.968853 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 8 20:12:14.968862 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 8 20:12:14.968869 kernel: NET: Registered PF_INET6 protocol family Oct 8 20:12:14.968881 kernel: Segment Routing with IPv6 Oct 8 20:12:14.968908 kernel: In-situ OAM (IOAM) with IPv6 Oct 8 20:12:14.968929 kernel: NET: Registered PF_PACKET protocol family Oct 8 20:12:14.968950 kernel: Key type dns_resolver registered Oct 8 20:12:14.968968 kernel: IPI shorthand broadcast: enabled Oct 8 20:12:14.968989 kernel: sched_clock: Marking stable (1070013668, 130144168)->(1208219368, -8061532) Oct 8 20:12:14.969010 kernel: registered taskstats version 1 Oct 8 20:12:14.969030 kernel: Loading compiled-in X.509 certificates Oct 8 20:12:14.969048 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 14ce23fc5070d0471461f1dd6e298a5588e7ba8f' Oct 8 20:12:14.969069 kernel: Key type .fscrypt registered Oct 8 20:12:14.969089 kernel: Key type fscrypt-provisioning registered Oct 8 
20:12:14.969115 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 8 20:12:14.969136 kernel: ima: Allocated hash algorithm: sha1 Oct 8 20:12:14.970174 kernel: ima: No architecture policies found Oct 8 20:12:14.970199 kernel: clk: Disabling unused clocks Oct 8 20:12:14.970207 kernel: Freeing unused kernel image (initmem) memory: 42828K Oct 8 20:12:14.970213 kernel: Write protecting the kernel read-only data: 36864k Oct 8 20:12:14.970220 kernel: Freeing unused kernel image (rodata/data gap) memory: 1860K Oct 8 20:12:14.970227 kernel: Run /init as init process Oct 8 20:12:14.970237 kernel: with arguments: Oct 8 20:12:14.970245 kernel: /init Oct 8 20:12:14.970251 kernel: with environment: Oct 8 20:12:14.970258 kernel: HOME=/ Oct 8 20:12:14.970264 kernel: TERM=linux Oct 8 20:12:14.970271 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 8 20:12:14.970280 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 8 20:12:14.970289 systemd[1]: Detected virtualization kvm. Oct 8 20:12:14.970299 systemd[1]: Detected architecture x86-64. Oct 8 20:12:14.970306 systemd[1]: Running in initrd. Oct 8 20:12:14.970312 systemd[1]: No hostname configured, using default hostname. Oct 8 20:12:14.970319 systemd[1]: Hostname set to . Oct 8 20:12:14.970326 systemd[1]: Initializing machine ID from VM UUID. Oct 8 20:12:14.970333 systemd[1]: Queued start job for default target initrd.target. Oct 8 20:12:14.970340 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 20:12:14.970348 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 20:12:14.970358 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 8 20:12:14.970365 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 8 20:12:14.970372 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 8 20:12:14.970379 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 8 20:12:14.970388 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 8 20:12:14.970395 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 8 20:12:14.970402 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 20:12:14.970412 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 8 20:12:14.970419 systemd[1]: Reached target paths.target - Path Units. Oct 8 20:12:14.970426 systemd[1]: Reached target slices.target - Slice Units. Oct 8 20:12:14.970433 systemd[1]: Reached target swap.target - Swaps. Oct 8 20:12:14.970440 systemd[1]: Reached target timers.target - Timer Units. Oct 8 20:12:14.970447 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 8 20:12:14.970454 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 8 20:12:14.970461 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Oct 8 20:12:14.970468 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 8 20:12:14.970478 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 8 20:12:14.970485 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 8 20:12:14.970492 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 20:12:14.970499 systemd[1]: Reached target sockets.target - Socket Units. Oct 8 20:12:14.970506 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 8 20:12:14.970513 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 8 20:12:14.970520 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 8 20:12:14.970527 systemd[1]: Starting systemd-fsck-usr.service... Oct 8 20:12:14.970536 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 8 20:12:14.970543 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 8 20:12:14.970573 systemd-journald[187]: Collecting audit messages is disabled. Oct 8 20:12:14.970592 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:12:14.970602 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 8 20:12:14.970609 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 20:12:14.970615 systemd[1]: Finished systemd-fsck-usr.service. Oct 8 20:12:14.970623 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 8 20:12:14.970633 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 8 20:12:14.970641 systemd-journald[187]: Journal started Oct 8 20:12:14.970657 systemd-journald[187]: Runtime Journal (/run/log/journal/ca587c774a8c4d70ba4c73089ed430c3) is 4.8M, max 38.4M, 33.6M free. Oct 8 20:12:14.944644 systemd-modules-load[188]: Inserted module 'overlay' Oct 8 20:12:15.001701 kernel: Bridge firewalling registered Oct 8 20:12:14.971734 systemd-modules-load[188]: Inserted module 'br_netfilter' Oct 8 20:12:15.009174 systemd[1]: Started systemd-journald.service - Journal Service. Oct 8 20:12:15.009635 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 8 20:12:15.011176 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:12:15.013752 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 8 20:12:15.020311 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 20:12:15.024279 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 8 20:12:15.033369 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 8 20:12:15.037504 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 8 20:12:15.040264 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:12:15.047285 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 8 20:12:15.049449 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 8 20:12:15.050742 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Oct 8 20:12:15.057358 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 8 20:12:15.063333 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 8 20:12:15.065202 dracut-cmdline[216]: dracut-dracut-053 Oct 8 20:12:15.067191 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5 Oct 8 20:12:15.093505 systemd-resolved[226]: Positive Trust Anchors: Oct 8 20:12:15.094279 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 8 20:12:15.094316 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 8 20:12:15.097469 systemd-resolved[226]: Defaulting to hostname 'linux'. Oct 8 20:12:15.098777 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 8 20:12:15.099404 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 8 20:12:15.143180 kernel: SCSI subsystem initialized Oct 8 20:12:15.152185 kernel: Loading iSCSI transport class v2.0-870. Oct 8 20:12:15.162196 kernel: iscsi: registered transport (tcp) Oct 8 20:12:15.180204 kernel: iscsi: registered transport (qla4xxx) Oct 8 20:12:15.180288 kernel: QLogic iSCSI HBA Driver Oct 8 20:12:15.226243 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 8 20:12:15.233301 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 8 20:12:15.259317 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 8 20:12:15.259387 kernel: device-mapper: uevent: version 1.0.3 Oct 8 20:12:15.259399 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 8 20:12:15.300184 kernel: raid6: avx2x4 gen() 34722 MB/s Oct 8 20:12:15.317178 kernel: raid6: avx2x2 gen() 32037 MB/s Oct 8 20:12:15.334411 kernel: raid6: avx2x1 gen() 24004 MB/s Oct 8 20:12:15.334502 kernel: raid6: using algorithm avx2x4 gen() 34722 MB/s Oct 8 20:12:15.354218 kernel: raid6: .... xor() 4639 MB/s, rmw enabled Oct 8 20:12:15.354301 kernel: raid6: using avx2x2 recovery algorithm Oct 8 20:12:15.373175 kernel: xor: automatically using best checksumming function avx Oct 8 20:12:15.494239 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 8 20:12:15.507949 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 8 20:12:15.519375 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 20:12:15.535843 systemd-udevd[406]: Using default interface naming scheme 'v255'. 
Oct 8 20:12:15.539902 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 20:12:15.549329 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 8 20:12:15.566215 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation Oct 8 20:12:15.604365 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 8 20:12:15.608309 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 8 20:12:15.689411 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 20:12:15.696341 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 8 20:12:15.709707 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 8 20:12:15.712380 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 20:12:15.713706 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 20:12:15.714247 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 8 20:12:15.723350 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 8 20:12:15.739992 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 8 20:12:15.771164 kernel: scsi host0: Virtio SCSI HBA Oct 8 20:12:15.778216 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Oct 8 20:12:15.849943 kernel: ACPI: bus type USB registered Oct 8 20:12:15.849995 kernel: usbcore: registered new interface driver usbfs Oct 8 20:12:15.862179 kernel: usbcore: registered new interface driver hub Oct 8 20:12:15.866175 kernel: cryptd: max_cpu_qlen set to 1000 Oct 8 20:12:15.868799 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 8 20:12:15.869341 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:12:15.872271 kernel: libata version 3.00 loaded. Oct 8 20:12:15.870967 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 20:12:15.875845 kernel: usbcore: registered new device driver usb Oct 8 20:12:15.871426 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 20:12:15.871536 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:12:15.871996 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:12:15.881346 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:12:15.885546 kernel: ahci 0000:00:1f.2: version 3.0 Oct 8 20:12:15.886063 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 8 20:12:15.890610 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Oct 8 20:12:15.890786 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 8 20:12:15.895161 kernel: AVX2 version of gcm_enc/dec engaged. 
Oct 8 20:12:15.898160 kernel: AES CTR mode by8 optimization enabled Oct 8 20:12:15.903174 kernel: scsi host1: ahci Oct 8 20:12:15.908160 kernel: scsi host2: ahci Oct 8 20:12:15.911420 kernel: scsi host3: ahci Oct 8 20:12:15.913160 kernel: scsi host4: ahci Oct 8 20:12:15.919162 kernel: scsi host5: ahci Oct 8 20:12:15.923803 kernel: scsi host6: ahci Oct 8 20:12:15.923984 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 46 Oct 8 20:12:15.924003 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 46 Oct 8 20:12:15.924012 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 46 Oct 8 20:12:15.924020 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 46 Oct 8 20:12:15.924031 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 46 Oct 8 20:12:15.924039 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 46 Oct 8 20:12:15.963600 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:12:15.968306 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 20:12:15.981354 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:12:16.240996 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 8 20:12:16.241075 kernel: ata3: SATA link down (SStatus 0 SControl 300) Oct 8 20:12:16.241088 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 8 20:12:16.241099 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 8 20:12:16.241109 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 8 20:12:16.241120 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 8 20:12:16.242175 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 8 20:12:16.244533 kernel: ata1.00: applying bridge limits Oct 8 20:12:16.245643 kernel: ata1.00: configured for UDMA/100 Oct 8 20:12:16.246393 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 8 20:12:16.270920 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Oct 8 20:12:16.271191 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Oct 8 20:12:16.271363 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Oct 8 20:12:16.278937 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Oct 8 20:12:16.279488 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Oct 8 20:12:16.279678 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Oct 8 20:12:16.284403 kernel: hub 1-0:1.0: USB hub found Oct 8 20:12:16.284642 kernel: hub 1-0:1.0: 4 ports detected Oct 8 20:12:16.290204 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Oct 8 20:12:16.292177 kernel: hub 2-0:1.0: USB hub found Oct 8 20:12:16.292420 kernel: hub 2-0:1.0: 4 ports detected Oct 8 20:12:16.301772 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 8 20:12:16.301999 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 8 20:12:16.303920 kernel: sd 0:0:0:0: Power-on or device reset occurred Oct 8 20:12:16.306633 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Oct 8 20:12:16.306804 kernel: sd 0:0:0:0: [sda] Write Protect is off Oct 8 20:12:16.306942 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Oct 8 20:12:16.307073 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Oct 8 20:12:16.315745 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 8 20:12:16.315778 kernel: GPT:17805311 != 80003071 Oct 8 20:12:16.317318 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 8 20:12:16.317339 kernel: GPT:17805311 != 80003071 Oct 8 20:12:16.318350 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 8 20:12:16.319501 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 20:12:16.321169 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Oct 8 20:12:16.321348 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Oct 8 20:12:16.363169 kernel: BTRFS: device fsid a8680da2-059a-4648-a8e8-f62925ab33ec devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (459) Oct 8 20:12:16.363226 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (453) Oct 8 20:12:16.360794 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Oct 8 20:12:16.368067 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Oct 8 20:12:16.377080 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Oct 8 20:12:16.377630 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Oct 8 20:12:16.388637 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Oct 8 20:12:16.394314 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 8 20:12:16.399631 disk-uuid[579]: Primary Header is updated. Oct 8 20:12:16.399631 disk-uuid[579]: Secondary Entries is updated. Oct 8 20:12:16.399631 disk-uuid[579]: Secondary Header is updated. Oct 8 20:12:16.406169 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 20:12:16.413197 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 20:12:16.529173 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Oct 8 20:12:16.665173 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 8 20:12:16.669374 kernel: usbcore: registered new interface driver usbhid Oct 8 20:12:16.669413 kernel: usbhid: USB HID core driver Oct 8 20:12:16.676516 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Oct 8 20:12:16.676552 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Oct 8 20:12:17.415193 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 20:12:17.415947 disk-uuid[581]: The operation has completed successfully. Oct 8 20:12:17.462062 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 8 20:12:17.462235 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Oct 8 20:12:17.477336 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 8 20:12:17.482747 sh[597]: Success Oct 8 20:12:17.496169 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Oct 8 20:12:17.537374 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 8 20:12:17.545219 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 8 20:12:17.545858 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 8 20:12:17.570161 kernel: BTRFS info (device dm-0): first mount of filesystem a8680da2-059a-4648-a8e8-f62925ab33ec Oct 8 20:12:17.570205 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 8 20:12:17.573332 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 8 20:12:17.573353 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 8 20:12:17.574584 kernel: BTRFS info (device dm-0): using free space tree Oct 8 20:12:17.582157 kernel: BTRFS info (device dm-0): enabling ssd optimizations Oct 8 20:12:17.583652 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 8 20:12:17.584726 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 8 20:12:17.590282 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 8 20:12:17.594253 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 8 20:12:17.605184 kernel: BTRFS info (device sda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:12:17.605216 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 8 20:12:17.605226 kernel: BTRFS info (device sda6): using free space tree Oct 8 20:12:17.609223 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 8 20:12:17.609252 kernel: BTRFS info (device sda6): auto enabling async discard Oct 8 20:12:17.622394 kernel: BTRFS info (device sda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:12:17.621959 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 8 20:12:17.628867 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 8 20:12:17.633313 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 8 20:12:17.679783 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 8 20:12:17.691615 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 8 20:12:17.713621 systemd-networkd[778]: lo: Link UP Oct 8 20:12:17.713633 systemd-networkd[778]: lo: Gained carrier Oct 8 20:12:17.716414 systemd-networkd[778]: Enumeration completed Oct 8 20:12:17.716503 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 8 20:12:17.717799 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:12:17.717803 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 20:12:17.718705 systemd[1]: Reached target network.target - Network. Oct 8 20:12:17.719682 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Oct 8 20:12:17.723221 ignition[711]: Ignition 2.19.0 Oct 8 20:12:17.719685 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 20:12:17.723228 ignition[711]: Stage: fetch-offline Oct 8 20:12:17.720287 systemd-networkd[778]: eth0: Link UP Oct 8 20:12:17.723259 ignition[711]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:12:17.720291 systemd-networkd[778]: eth0: Gained carrier Oct 8 20:12:17.723272 ignition[711]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 8 20:12:17.720297 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:12:17.723361 ignition[711]: parsed url from cmdline: "" Oct 8 20:12:17.725735 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 20:12:17.723365 ignition[711]: no config URL provided Oct 8 20:12:17.727124 systemd-networkd[778]: eth1: Link UP Oct 8 20:12:17.723370 ignition[711]: reading system config file "/usr/lib/ignition/user.ign" Oct 8 20:12:17.727128 systemd-networkd[778]: eth1: Gained carrier Oct 8 20:12:17.723378 ignition[711]: no config at "/usr/lib/ignition/user.ign" Oct 8 20:12:17.727136 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:12:17.723386 ignition[711]: failed to fetch config: resource requires networking Oct 8 20:12:17.733182 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Oct 8 20:12:17.723552 ignition[711]: Ignition finished successfully Oct 8 20:12:17.744328 ignition[785]: Ignition 2.19.0 Oct 8 20:12:17.744337 ignition[785]: Stage: fetch Oct 8 20:12:17.744483 ignition[785]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:12:17.744493 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 8 20:12:17.744562 ignition[785]: parsed url from cmdline: "" Oct 8 20:12:17.744566 ignition[785]: no config URL provided Oct 8 20:12:17.744571 ignition[785]: reading system config file "/usr/lib/ignition/user.ign" Oct 8 20:12:17.744579 ignition[785]: no config at "/usr/lib/ignition/user.ign" Oct 8 20:12:17.744594 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Oct 8 20:12:17.744733 ignition[785]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Oct 8 20:12:17.770192 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 8 20:12:17.864250 systemd-networkd[778]: eth0: DHCPv4 address 188.245.175.191/32, gateway 172.31.1.1 acquired from 172.31.1.1 Oct 8 20:12:17.945467 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Oct 8 20:12:17.948551 ignition[785]: GET result: OK Oct 8 20:12:17.948623 ignition[785]: parsing config with SHA512: e4029be9a68d4b74d3adecf7eda7b1a60fee1452878c62ef90fbd84ced803a31ddac2b79fc89e1bd5017e2d6a03a658484a98da5660b8cc56452f35ef5508fcd Oct 8 20:12:17.952372 unknown[785]: fetched base config from "system" Oct 8 20:12:17.952383 unknown[785]: fetched base config from "system" Oct 8 20:12:17.952786 ignition[785]: fetch: fetch complete Oct 8 20:12:17.952390 unknown[785]: fetched user config from "hetzner" Oct 8 20:12:17.952794 ignition[785]: fetch: fetch passed Oct 8 20:12:17.952847 ignition[785]: Ignition finished successfully Oct 8 20:12:17.956239 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). 
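In the fetch stage above, Ignition's first GET to the Hetzner metadata service fails with "network is unreachable", succeeds once systemd-networkd has acquired DHCPv4 leases on eth0/eth1, and Ignition then logs the SHA-512 of the config it downloaded. A small Python sketch of the same request-and-digest flow, with the endpoint URL copied from the log (illustrative only; the real work is done by Ignition itself, and the request only succeeds from inside a Hetzner instance):

# Fetch the instance userdata the way the entries above describe and print
# its SHA-512, mirroring Ignition's "parsing config with SHA512: ..." line.
import hashlib
import urllib.request

USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"  # taken from the log

def fetch_userdata(url: str = USERDATA_URL, timeout: float = 5.0) -> bytes:
    # Before DHCP has configured the interfaces this raises URLError
    # ("network is unreachable"), matching attempt #1 in the log.
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read()

if __name__ == "__main__":
    data = fetch_userdata()
    print("SHA512:", hashlib.sha512(data).hexdigest())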
Oct 8 20:12:17.961367 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 8 20:12:17.976160 ignition[792]: Ignition 2.19.0 Oct 8 20:12:17.976175 ignition[792]: Stage: kargs Oct 8 20:12:17.976351 ignition[792]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:12:17.976364 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 8 20:12:17.977183 ignition[792]: kargs: kargs passed Oct 8 20:12:17.978597 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 8 20:12:17.977230 ignition[792]: Ignition finished successfully Oct 8 20:12:17.989288 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 8 20:12:18.002735 ignition[798]: Ignition 2.19.0 Oct 8 20:12:18.002746 ignition[798]: Stage: disks Oct 8 20:12:18.002876 ignition[798]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:12:18.005328 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 8 20:12:18.002887 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 8 20:12:18.006474 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 8 20:12:18.003649 ignition[798]: disks: disks passed Oct 8 20:12:18.007883 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 8 20:12:18.003698 ignition[798]: Ignition finished successfully Oct 8 20:12:18.009109 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 8 20:12:18.010331 systemd[1]: Reached target sysinit.target - System Initialization. Oct 8 20:12:18.011231 systemd[1]: Reached target basic.target - Basic System. Oct 8 20:12:18.019290 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 8 20:12:18.033862 systemd-fsck[807]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Oct 8 20:12:18.037708 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 8 20:12:18.043240 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 8 20:12:18.121181 kernel: EXT4-fs (sda9): mounted filesystem 1df90f14-3ad0-4280-9b7d-a34f65d70e4d r/w with ordered data mode. Quota mode: none. Oct 8 20:12:18.121328 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 8 20:12:18.122314 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 8 20:12:18.128204 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 8 20:12:18.130234 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 8 20:12:18.135565 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Oct 8 20:12:18.138311 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 8 20:12:18.139342 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 20:12:18.141416 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 8 20:12:18.148374 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (815) Oct 8 20:12:18.148395 kernel: BTRFS info (device sda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:12:18.151672 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 8 20:12:18.151697 kernel: BTRFS info (device sda6): using free space tree Oct 8 20:12:18.151838 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
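The fsck summary above ("ROOT: clean, 14/1628000 files, 120691/1617920 blocks") gives a quick picture of how empty the ROOT ext4 filesystem is at this point in first boot. A tiny Python rendering of that arithmetic; the 4 KiB block size is an assumption (the common ext4 default), not something the log states:

# Usage figures from the systemd-fsck summary above.
files_used, files_total = 14, 1_628_000
blocks_used, blocks_total = 120_691, 1_617_920
BLOCK_SIZE = 4096  # assumed ext4 block size, not stated in the log

print(f"inodes used : {files_used}/{files_total} ({100 * files_used / files_total:.4f}%)")
print(f"blocks used : {blocks_used}/{blocks_total} ({100 * blocks_used / blocks_total:.1f}%)")   # ~7.5%
print(f"fs size     : {blocks_total * BLOCK_SIZE / 2**30:.1f} GiB (under the 4 KiB assumption)")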
Oct 8 20:12:18.160613 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 8 20:12:18.160673 kernel: BTRFS info (device sda6): auto enabling async discard Oct 8 20:12:18.163748 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 8 20:12:18.205040 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Oct 8 20:12:18.206048 coreos-metadata[817]: Oct 08 20:12:18.205 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Oct 8 20:12:18.207329 coreos-metadata[817]: Oct 08 20:12:18.207 INFO Fetch successful Oct 8 20:12:18.207329 coreos-metadata[817]: Oct 08 20:12:18.207 INFO wrote hostname ci-4081-1-0-f-c5c751ca26 to /sysroot/etc/hostname Oct 8 20:12:18.209516 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 8 20:12:18.211737 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory Oct 8 20:12:18.216059 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory Oct 8 20:12:18.219970 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory Oct 8 20:12:18.308518 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 8 20:12:18.314252 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 8 20:12:18.319312 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 8 20:12:18.323161 kernel: BTRFS info (device sda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:12:18.349022 ignition[931]: INFO : Ignition 2.19.0 Oct 8 20:12:18.350028 ignition[931]: INFO : Stage: mount Oct 8 20:12:18.350766 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:12:18.350766 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 8 20:12:18.353875 ignition[931]: INFO : mount: mount passed Oct 8 20:12:18.353875 ignition[931]: INFO : Ignition finished successfully Oct 8 20:12:18.353162 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 8 20:12:18.359362 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 8 20:12:18.361564 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 8 20:12:18.569038 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 8 20:12:18.574373 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 8 20:12:18.585469 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (945) Oct 8 20:12:18.585517 kernel: BTRFS info (device sda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:12:18.588106 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 8 20:12:18.588156 kernel: BTRFS info (device sda6): using free space tree Oct 8 20:12:18.593395 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 8 20:12:18.593425 kernel: BTRFS info (device sda6): auto enabling async discard Oct 8 20:12:18.595825 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 8 20:12:18.612967 ignition[962]: INFO : Ignition 2.19.0 Oct 8 20:12:18.614236 ignition[962]: INFO : Stage: files Oct 8 20:12:18.614236 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:12:18.614236 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 8 20:12:18.616178 ignition[962]: DEBUG : files: compiled without relabeling support, skipping Oct 8 20:12:18.616178 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 8 20:12:18.616178 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 8 20:12:18.619331 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 8 20:12:18.619982 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 8 20:12:18.619982 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 8 20:12:18.619750 unknown[962]: wrote ssh authorized keys file for user: core Oct 8 20:12:18.621904 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 8 20:12:18.621904 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Oct 8 20:12:18.715688 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 8 20:12:19.020327 systemd-networkd[778]: eth0: Gained IPv6LL Oct 8 20:12:19.398197 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 8 20:12:19.398197 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 8 20:12:19.400070 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 8 20:12:19.400070 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 8 20:12:19.400070 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 8 20:12:19.400070 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 20:12:19.400070 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 20:12:19.400070 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 20:12:19.400070 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 20:12:19.400070 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 20:12:19.400070 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 20:12:19.400070 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Oct 8 20:12:19.400070 ignition[962]: INFO : files: createFilesystemsFiles: 
createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Oct 8 20:12:19.400070 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Oct 8 20:12:19.400070 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Oct 8 20:12:19.699165 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 8 20:12:19.724282 systemd-networkd[778]: eth1: Gained IPv6LL Oct 8 20:12:19.969072 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Oct 8 20:12:19.969072 ignition[962]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 8 20:12:19.972354 ignition[962]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 8 20:12:19.972354 ignition[962]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 8 20:12:19.972354 ignition[962]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 8 20:12:19.972354 ignition[962]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 8 20:12:19.972354 ignition[962]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Oct 8 20:12:19.972354 ignition[962]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Oct 8 20:12:19.972354 ignition[962]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 8 20:12:19.972354 ignition[962]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Oct 8 20:12:19.972354 ignition[962]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Oct 8 20:12:19.972354 ignition[962]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 8 20:12:19.972354 ignition[962]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 8 20:12:19.972354 ignition[962]: INFO : files: files passed Oct 8 20:12:19.972354 ignition[962]: INFO : Ignition finished successfully Oct 8 20:12:19.973273 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 8 20:12:19.980294 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 8 20:12:19.984884 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 8 20:12:19.986361 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 8 20:12:19.986947 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
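The files stage above writes SSH keys for the "core" user, drops several files and a sysext symlink into /sysroot, fetches the helm tarball and the kubernetes-v1.30.1 sysext image, and installs and enables prepare-helm.service. The config itself is only visible in this journal as a SHA-512, so the following is a rough, hypothetical sketch of the general shape of an Ignition v3-style config that would drive operations like these, built as a plain Python dict: the paths and URLs are copied from the log, while the spec version, field names, key material, and unit contents are assumptions to be checked against the Ignition documentation.

# Rough, hypothetical sketch of an Ignition v3-style config producing
# operations similar to the files-stage entries above. Field names follow the
# Ignition spec as recalled here (assumption); placeholders are marked.
import json

config = {
    "ignition": {"version": "3.3.0"},  # assumed spec version
    "passwd": {
        "users": [{
            "name": "core",
            "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (placeholder)"],
        }]
    },
    "storage": {
        "files": [{
            "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
            "mode": 420,  # 0644
            "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
        }],
        "links": [{
            "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
        }],
    },
    "systemd": {
        "units": [{
            "name": "prepare-helm.service",
            "enabled": True,
            "contents": "[Unit]\nDescription=placeholder\n\n[Service]\nType=oneshot\n",
        }]
    },
}

print(json.dumps(config, indent=2))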
Oct 8 20:12:19.998661 initrd-setup-root-after-ignition[991]: grep: Oct 8 20:12:19.998661 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:12:20.000338 initrd-setup-root-after-ignition[991]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:12:20.000338 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:12:20.000764 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 20:12:20.002398 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 8 20:12:20.009330 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 8 20:12:20.031710 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 8 20:12:20.031829 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 8 20:12:20.033036 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 8 20:12:20.033978 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 8 20:12:20.035069 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 8 20:12:20.045406 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 8 20:12:20.058068 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 20:12:20.063306 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 8 20:12:20.073397 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 8 20:12:20.074624 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 20:12:20.075940 systemd[1]: Stopped target timers.target - Timer Units. Oct 8 20:12:20.076481 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 8 20:12:20.076629 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 20:12:20.077955 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 8 20:12:20.078638 systemd[1]: Stopped target basic.target - Basic System. Oct 8 20:12:20.079669 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 8 20:12:20.080571 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 20:12:20.081509 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 8 20:12:20.082552 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 8 20:12:20.083590 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 20:12:20.084631 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 8 20:12:20.085669 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 8 20:12:20.086727 systemd[1]: Stopped target swap.target - Swaps. Oct 8 20:12:20.087695 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 8 20:12:20.087820 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 8 20:12:20.089264 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 8 20:12:20.090333 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 20:12:20.091393 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Oct 8 20:12:20.091516 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 20:12:20.092481 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 8 20:12:20.092590 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 8 20:12:20.093930 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 8 20:12:20.094039 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 20:12:20.095322 systemd[1]: ignition-files.service: Deactivated successfully. Oct 8 20:12:20.095484 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 8 20:12:20.096236 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Oct 8 20:12:20.096369 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 8 20:12:20.113444 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 8 20:12:20.115542 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 8 20:12:20.116027 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 8 20:12:20.117403 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 20:12:20.118006 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 8 20:12:20.118107 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 8 20:12:20.128542 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 8 20:12:20.129217 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 8 20:12:20.130832 ignition[1015]: INFO : Ignition 2.19.0 Oct 8 20:12:20.130832 ignition[1015]: INFO : Stage: umount Oct 8 20:12:20.130832 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:12:20.130832 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 8 20:12:20.139877 ignition[1015]: INFO : umount: umount passed Oct 8 20:12:20.139877 ignition[1015]: INFO : Ignition finished successfully Oct 8 20:12:20.136420 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 8 20:12:20.136529 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 8 20:12:20.142376 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 8 20:12:20.142482 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 8 20:12:20.142952 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 8 20:12:20.143003 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 8 20:12:20.143491 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 8 20:12:20.143551 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Oct 8 20:12:20.144217 systemd[1]: Stopped target network.target - Network. Oct 8 20:12:20.144595 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 8 20:12:20.144646 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 20:12:20.145109 systemd[1]: Stopped target paths.target - Path Units. Oct 8 20:12:20.148219 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 8 20:12:20.153230 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 20:12:20.155683 systemd[1]: Stopped target slices.target - Slice Units. Oct 8 20:12:20.156662 systemd[1]: Stopped target sockets.target - Socket Units. 
Oct 8 20:12:20.157896 systemd[1]: iscsid.socket: Deactivated successfully. Oct 8 20:12:20.157945 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 8 20:12:20.158451 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 8 20:12:20.158494 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 8 20:12:20.158935 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 8 20:12:20.158985 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 8 20:12:20.163200 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 8 20:12:20.163251 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 8 20:12:20.167438 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 8 20:12:20.171883 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 8 20:12:20.173427 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 8 20:12:20.174207 systemd-networkd[778]: eth1: DHCPv6 lease lost Oct 8 20:12:20.178213 systemd-networkd[778]: eth0: DHCPv6 lease lost Oct 8 20:12:20.180154 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 8 20:12:20.180330 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 8 20:12:20.183303 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 8 20:12:20.183962 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 8 20:12:20.185658 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 8 20:12:20.185785 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 8 20:12:20.187819 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 8 20:12:20.187873 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 8 20:12:20.188859 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 8 20:12:20.188909 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 8 20:12:20.194237 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 8 20:12:20.194755 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 8 20:12:20.194817 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 8 20:12:20.197327 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 8 20:12:20.197385 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 8 20:12:20.198370 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 8 20:12:20.198421 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 8 20:12:20.199409 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 8 20:12:20.199454 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 8 20:12:20.200530 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 20:12:20.211926 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 8 20:12:20.212155 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 20:12:20.213685 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 8 20:12:20.213789 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 8 20:12:20.215127 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 8 20:12:20.215238 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Oct 8 20:12:20.216293 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 8 20:12:20.216347 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 20:12:20.217460 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 8 20:12:20.217515 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 8 20:12:20.219035 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 8 20:12:20.219083 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 8 20:12:20.220069 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 8 20:12:20.220116 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:12:20.226367 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 8 20:12:20.226894 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 8 20:12:20.226955 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 20:12:20.230389 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 8 20:12:20.230443 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 8 20:12:20.230946 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 8 20:12:20.230990 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 20:12:20.231495 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 20:12:20.231537 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:12:20.236877 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 8 20:12:20.237032 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 8 20:12:20.238865 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 8 20:12:20.254301 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 8 20:12:20.261740 systemd[1]: Switching root. Oct 8 20:12:20.291494 systemd-journald[187]: Journal stopped Oct 8 20:12:21.276169 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). Oct 8 20:12:21.278214 kernel: SELinux: policy capability network_peer_controls=1 Oct 8 20:12:21.278237 kernel: SELinux: policy capability open_perms=1 Oct 8 20:12:21.278247 kernel: SELinux: policy capability extended_socket_class=1 Oct 8 20:12:21.278262 kernel: SELinux: policy capability always_check_network=0 Oct 8 20:12:21.278273 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 8 20:12:21.278288 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 8 20:12:21.278297 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 8 20:12:21.278307 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 8 20:12:21.278321 kernel: audit: type=1403 audit(1728418340.414:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 8 20:12:21.278332 systemd[1]: Successfully loaded SELinux policy in 42.784ms. Oct 8 20:12:21.278345 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.234ms. 
Oct 8 20:12:21.278356 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 8 20:12:21.278369 systemd[1]: Detected virtualization kvm. Oct 8 20:12:21.278380 systemd[1]: Detected architecture x86-64. Oct 8 20:12:21.278390 systemd[1]: Detected first boot. Oct 8 20:12:21.278405 systemd[1]: Hostname set to . Oct 8 20:12:21.278416 systemd[1]: Initializing machine ID from VM UUID. Oct 8 20:12:21.278426 zram_generator::config[1057]: No configuration found. Oct 8 20:12:21.278438 systemd[1]: Populated /etc with preset unit settings. Oct 8 20:12:21.278448 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 8 20:12:21.278460 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 8 20:12:21.278471 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 8 20:12:21.278482 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 8 20:12:21.278492 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 8 20:12:21.278502 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 8 20:12:21.278513 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 8 20:12:21.278523 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 8 20:12:21.278534 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 8 20:12:21.278544 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 8 20:12:21.278557 systemd[1]: Created slice user.slice - User and Session Slice. Oct 8 20:12:21.278590 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 20:12:21.278602 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 20:12:21.278613 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 8 20:12:21.278623 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 8 20:12:21.278634 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 8 20:12:21.278644 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 8 20:12:21.278654 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 8 20:12:21.278667 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 20:12:21.278678 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 8 20:12:21.278688 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 8 20:12:21.278699 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 8 20:12:21.278709 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 8 20:12:21.278720 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 20:12:21.278731 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 8 20:12:21.278743 systemd[1]: Reached target slices.target - Slice Units. 
Oct 8 20:12:21.278754 systemd[1]: Reached target swap.target - Swaps. Oct 8 20:12:21.278764 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 8 20:12:21.278774 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 8 20:12:21.278784 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 8 20:12:21.278795 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 8 20:12:21.278805 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 20:12:21.278815 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 8 20:12:21.278826 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 8 20:12:21.278838 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 8 20:12:21.278853 systemd[1]: Mounting media.mount - External Media Directory... Oct 8 20:12:21.278866 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:12:21.278876 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 8 20:12:21.278887 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 8 20:12:21.278897 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 8 20:12:21.278909 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 8 20:12:21.278920 systemd[1]: Reached target machines.target - Containers. Oct 8 20:12:21.278931 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 8 20:12:21.278942 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 20:12:21.278952 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 8 20:12:21.278962 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 8 20:12:21.278972 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 20:12:21.278982 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 8 20:12:21.278995 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 20:12:21.279005 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 8 20:12:21.279015 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 20:12:21.279025 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 8 20:12:21.279036 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 8 20:12:21.279046 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 8 20:12:21.279062 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 8 20:12:21.279072 systemd[1]: Stopped systemd-fsck-usr.service. Oct 8 20:12:21.279084 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 8 20:12:21.279095 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 8 20:12:21.279105 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 8 20:12:21.279116 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Oct 8 20:12:21.279127 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 8 20:12:21.282513 systemd[1]: verity-setup.service: Deactivated successfully. Oct 8 20:12:21.282538 kernel: loop: module loaded Oct 8 20:12:21.282552 systemd[1]: Stopped verity-setup.service. Oct 8 20:12:21.282563 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:12:21.282578 kernel: fuse: init (API version 7.39) Oct 8 20:12:21.282588 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 8 20:12:21.282598 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 8 20:12:21.282608 kernel: ACPI: bus type drm_connector registered Oct 8 20:12:21.282618 systemd[1]: Mounted media.mount - External Media Directory. Oct 8 20:12:21.282628 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 8 20:12:21.282641 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 8 20:12:21.282651 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 8 20:12:21.282662 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 20:12:21.282672 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 8 20:12:21.282682 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 8 20:12:21.282692 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 20:12:21.282703 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 20:12:21.282735 systemd-journald[1133]: Collecting audit messages is disabled. Oct 8 20:12:21.282755 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 8 20:12:21.282766 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 8 20:12:21.282776 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 8 20:12:21.282787 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 20:12:21.282800 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 20:12:21.282810 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 8 20:12:21.282820 systemd-journald[1133]: Journal started Oct 8 20:12:21.282839 systemd-journald[1133]: Runtime Journal (/run/log/journal/ca587c774a8c4d70ba4c73089ed430c3) is 4.8M, max 38.4M, 33.6M free. Oct 8 20:12:20.956226 systemd[1]: Queued start job for default target multi-user.target. Oct 8 20:12:21.284596 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 8 20:12:20.980744 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Oct 8 20:12:20.981425 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 8 20:12:21.286852 systemd[1]: Started systemd-journald.service - Journal Service. Oct 8 20:12:21.288853 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 20:12:21.289058 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 20:12:21.290184 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 8 20:12:21.290994 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 8 20:12:21.291914 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 8 20:12:21.306827 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Oct 8 20:12:21.313695 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 8 20:12:21.318224 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 8 20:12:21.319516 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 8 20:12:21.320344 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 8 20:12:21.321675 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Oct 8 20:12:21.324847 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 8 20:12:21.333241 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 8 20:12:21.334430 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 20:12:21.341240 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 8 20:12:21.344282 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 8 20:12:21.345053 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 8 20:12:21.351633 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 8 20:12:21.352415 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 8 20:12:21.357864 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 8 20:12:21.368659 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 8 20:12:21.377737 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 8 20:12:21.381616 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 8 20:12:21.382948 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 8 20:12:21.384805 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 8 20:12:21.401256 systemd-journald[1133]: Time spent on flushing to /var/log/journal/ca587c774a8c4d70ba4c73089ed430c3 is 53.212ms for 1136 entries. Oct 8 20:12:21.401256 systemd-journald[1133]: System Journal (/var/log/journal/ca587c774a8c4d70ba4c73089ed430c3) is 8.0M, max 584.8M, 576.8M free. Oct 8 20:12:21.491747 systemd-journald[1133]: Received client request to flush runtime journal. Oct 8 20:12:21.491944 kernel: loop0: detected capacity change from 0 to 8 Oct 8 20:12:21.491976 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 8 20:12:21.491995 kernel: loop1: detected capacity change from 0 to 140768 Oct 8 20:12:21.420554 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 8 20:12:21.421211 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 8 20:12:21.431202 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Oct 8 20:12:21.437644 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Oct 8 20:12:21.437656 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Oct 8 20:12:21.441284 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
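The journald message above reports 53.212 ms spent flushing 1136 entries from the runtime journal to /var/log/journal. A back-of-envelope reading of just those two numbers (illustrative aside only):

# Average cost per entry for the journal flush reported above.
flush_ms, entries = 53.212, 1136
print(f"per entry : {flush_ms / entries * 1000:.1f} us")            # ~46.8 us each
print(f"throughput: {entries / (flush_ms / 1000):,.0f} entries/s")  # roughly 21,000 entries/s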
Oct 8 20:12:21.456531 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Oct 8 20:12:21.458783 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 8 20:12:21.470759 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 8 20:12:21.482609 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 8 20:12:21.488582 udevadm[1189]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 8 20:12:21.497512 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 8 20:12:21.510472 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 8 20:12:21.513222 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Oct 8 20:12:21.539750 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 8 20:12:21.544187 kernel: loop2: detected capacity change from 0 to 210664 Oct 8 20:12:21.554336 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 8 20:12:21.586673 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. Oct 8 20:12:21.587092 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. Oct 8 20:12:21.592785 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 20:12:21.603187 kernel: loop3: detected capacity change from 0 to 142488 Oct 8 20:12:21.646708 kernel: loop4: detected capacity change from 0 to 8 Oct 8 20:12:21.649189 kernel: loop5: detected capacity change from 0 to 140768 Oct 8 20:12:21.670202 kernel: loop6: detected capacity change from 0 to 210664 Oct 8 20:12:21.691059 kernel: loop7: detected capacity change from 0 to 142488 Oct 8 20:12:21.709755 (sd-merge)[1205]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Oct 8 20:12:21.712014 (sd-merge)[1205]: Merged extensions into '/usr'. Oct 8 20:12:21.720795 systemd[1]: Reloading requested from client PID 1177 ('systemd-sysext') (unit systemd-sysext.service)... Oct 8 20:12:21.720812 systemd[1]: Reloading... Oct 8 20:12:21.813166 zram_generator::config[1230]: No configuration found. Oct 8 20:12:21.905817 ldconfig[1172]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 8 20:12:21.977953 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:12:22.023101 systemd[1]: Reloading finished in 301 ms. Oct 8 20:12:22.061736 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 8 20:12:22.064359 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 8 20:12:22.075387 systemd[1]: Starting ensure-sysext.service... Oct 8 20:12:22.078404 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 8 20:12:22.079606 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 8 20:12:22.088431 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 20:12:22.095125 systemd[1]: Reloading requested from client PID 1274 ('systemctl') (unit ensure-sysext.service)... Oct 8 20:12:22.095174 systemd[1]: Reloading... 
Oct 8 20:12:22.123670 systemd-udevd[1277]: Using default interface naming scheme 'v255'. Oct 8 20:12:22.128542 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 8 20:12:22.129746 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 8 20:12:22.133668 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 8 20:12:22.135481 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. Oct 8 20:12:22.135585 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. Oct 8 20:12:22.141358 systemd-tmpfiles[1275]: Detected autofs mount point /boot during canonicalization of boot. Oct 8 20:12:22.141841 systemd-tmpfiles[1275]: Skipping /boot Oct 8 20:12:22.168724 systemd-tmpfiles[1275]: Detected autofs mount point /boot during canonicalization of boot. Oct 8 20:12:22.168743 systemd-tmpfiles[1275]: Skipping /boot Oct 8 20:12:22.173224 zram_generator::config[1301]: No configuration found. Oct 8 20:12:22.270492 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1336) Oct 8 20:12:22.275186 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1336) Oct 8 20:12:22.364446 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:12:22.406862 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1331) Oct 8 20:12:22.412302 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 8 20:12:22.413402 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 8 20:12:22.414028 systemd[1]: Reloading finished in 318 ms. Oct 8 20:12:22.430782 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 20:12:22.431836 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 8 20:12:22.450173 kernel: ACPI: button: Power Button [PWRF] Oct 8 20:12:22.480181 kernel: mousedev: PS/2 mouse device common for all mice Oct 8 20:12:22.480367 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:12:22.486382 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 20:12:22.490347 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 8 20:12:22.490940 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 20:12:22.499503 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 20:12:22.501596 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 8 20:12:22.506329 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 20:12:22.508402 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 20:12:22.509001 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 20:12:22.510694 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Oct 8 20:12:22.521395 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 8 20:12:22.529933 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 8 20:12:22.540409 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 8 20:12:22.541008 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:12:22.543131 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 20:12:22.544160 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 20:12:22.562224 systemd[1]: Finished ensure-sysext.service. Oct 8 20:12:22.564049 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Oct 8 20:12:22.572427 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 8 20:12:22.572692 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Oct 8 20:12:22.601772 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 8 20:12:22.571218 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:12:22.571400 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 20:12:22.575341 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 20:12:22.576306 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 20:12:22.585391 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 8 20:12:22.591312 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 8 20:12:22.592246 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:12:22.594182 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 20:12:22.594361 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 20:12:22.595032 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 8 20:12:22.607306 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 8 20:12:22.610950 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 20:12:22.611127 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 20:12:22.614559 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 8 20:12:22.614741 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 8 20:12:22.616487 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 20:12:22.617175 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 20:12:22.619271 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 8 20:12:22.623625 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
Oct 8 20:12:22.653180 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Oct 8 20:12:22.654316 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 8 20:12:22.654884 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 8 20:12:22.654920 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 8 20:12:22.658039 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 8 20:12:22.666229 augenrules[1418]: No rules Oct 8 20:12:22.670510 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 8 20:12:22.682873 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 20:12:22.696174 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 8 20:12:22.700073 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 8 20:12:22.703171 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Oct 8 20:12:22.712988 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 8 20:12:22.719237 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Oct 8 20:12:22.723179 kernel: Console: switching to colour dummy device 80x25 Oct 8 20:12:22.725346 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Oct 8 20:12:22.725391 kernel: [drm] features: -context_init Oct 8 20:12:22.725410 kernel: [drm] number of scanouts: 1 Oct 8 20:12:22.726308 kernel: [drm] number of cap sets: 0 Oct 8 20:12:22.728278 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Oct 8 20:12:22.736482 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Oct 8 20:12:22.736530 kernel: Console: switching to colour frame buffer device 160x50 Oct 8 20:12:22.757183 kernel: EDAC MC: Ver: 3.0.0 Oct 8 20:12:22.800706 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Oct 8 20:12:22.807409 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:12:22.848980 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 20:12:22.849268 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:12:22.853709 systemd-networkd[1392]: lo: Link UP Oct 8 20:12:22.853720 systemd-networkd[1392]: lo: Gained carrier Oct 8 20:12:22.856523 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:12:22.858574 systemd-networkd[1392]: Enumeration completed Oct 8 20:12:22.858917 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 8 20:12:22.861269 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:12:22.861273 systemd-networkd[1392]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 20:12:22.864821 systemd-networkd[1392]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:12:22.864835 systemd-networkd[1392]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 8 20:12:22.866423 systemd-networkd[1392]: eth0: Link UP Oct 8 20:12:22.866436 systemd-networkd[1392]: eth0: Gained carrier Oct 8 20:12:22.866451 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:12:22.867081 systemd-resolved[1393]: Positive Trust Anchors: Oct 8 20:12:22.867100 systemd-resolved[1393]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 8 20:12:22.867129 systemd-resolved[1393]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 8 20:12:22.869411 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 8 20:12:22.874997 systemd-networkd[1392]: eth1: Link UP Oct 8 20:12:22.875007 systemd-networkd[1392]: eth1: Gained carrier Oct 8 20:12:22.875028 systemd-networkd[1392]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:12:22.877711 systemd-resolved[1393]: Using system hostname 'ci-4081-1-0-f-c5c751ca26'. Oct 8 20:12:22.881918 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 8 20:12:22.882087 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 8 20:12:22.882353 systemd[1]: Reached target network.target - Network. Oct 8 20:12:22.884261 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 8 20:12:22.884329 systemd[1]: Reached target time-set.target - System Time Set. Oct 8 20:12:22.909262 systemd-networkd[1392]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 8 20:12:22.910342 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection. Oct 8 20:12:22.935329 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 8 20:12:22.937573 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:12:22.954723 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 8 20:12:22.967336 lvm[1452]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 8 20:12:22.998643 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 8 20:12:23.000625 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 8 20:12:23.000747 systemd[1]: Reached target sysinit.target - System Initialization. Oct 8 20:12:23.000977 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 8 20:12:23.001131 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 8 20:12:23.001485 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 8 20:12:23.001851 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
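The repeated "based on potentially unpredictable interface name" notes above come from the catch-all zz-default.network matching interfaces by their ethN names. A dedicated .network file that matches on the MAC address instead would keep the configuration stable across renames; the file name and address below are placeholders, not values taken from this host:

    # /etc/systemd/network/10-uplink.network  (hypothetical)
    [Match]
    MACAddress=aa:bb:cc:dd:ee:ff

    [Network]
    DHCP=yes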
Oct 8 20:12:23.002552 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 8 20:12:23.002717 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 8 20:12:23.002746 systemd[1]: Reached target paths.target - Path Units. Oct 8 20:12:23.003485 systemd[1]: Reached target timers.target - Timer Units. Oct 8 20:12:23.005234 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 8 20:12:23.008068 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 8 20:12:23.013223 systemd-networkd[1392]: eth0: DHCPv4 address 188.245.175.191/32, gateway 172.31.1.1 acquired from 172.31.1.1 Oct 8 20:12:23.013713 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection. Oct 8 20:12:23.014991 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection. Oct 8 20:12:23.015109 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 8 20:12:23.024374 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 8 20:12:23.025573 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 8 20:12:23.028609 systemd[1]: Reached target sockets.target - Socket Units. Oct 8 20:12:23.029365 lvm[1456]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 8 20:12:23.030228 systemd[1]: Reached target basic.target - Basic System. Oct 8 20:12:23.030763 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 8 20:12:23.030796 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 8 20:12:23.037329 systemd[1]: Starting containerd.service - containerd container runtime... Oct 8 20:12:23.041331 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 8 20:12:23.047306 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 8 20:12:23.054122 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 8 20:12:23.057294 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 8 20:12:23.060630 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 8 20:12:23.066368 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 8 20:12:23.073259 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 8 20:12:23.076130 jq[1462]: false Oct 8 20:12:23.082304 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Oct 8 20:12:23.089300 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 8 20:12:23.094441 coreos-metadata[1458]: Oct 08 20:12:23.094 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Oct 8 20:12:23.096612 coreos-metadata[1458]: Oct 08 20:12:23.096 INFO Fetch successful Oct 8 20:12:23.096758 coreos-metadata[1458]: Oct 08 20:12:23.096 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Oct 8 20:12:23.098261 coreos-metadata[1458]: Oct 08 20:12:23.098 INFO Fetch successful Oct 8 20:12:23.098335 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 8 20:12:23.109228 systemd[1]: Starting systemd-logind.service - User Login Management... 
Oct 8 20:12:23.111567 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 8 20:12:23.112071 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 8 20:12:23.116340 systemd[1]: Starting update-engine.service - Update Engine... Oct 8 20:12:23.125387 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 8 20:12:23.129868 dbus-daemon[1459]: [system] SELinux support is enabled Oct 8 20:12:23.128244 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 8 20:12:23.131499 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 8 20:12:23.144329 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 8 20:12:23.144899 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 8 20:12:23.145518 systemd[1]: motdgen.service: Deactivated successfully. Oct 8 20:12:23.146492 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 8 20:12:23.161176 extend-filesystems[1463]: Found loop4 Oct 8 20:12:23.161176 extend-filesystems[1463]: Found loop5 Oct 8 20:12:23.161176 extend-filesystems[1463]: Found loop6 Oct 8 20:12:23.161176 extend-filesystems[1463]: Found loop7 Oct 8 20:12:23.161176 extend-filesystems[1463]: Found sda Oct 8 20:12:23.161176 extend-filesystems[1463]: Found sda1 Oct 8 20:12:23.161176 extend-filesystems[1463]: Found sda2 Oct 8 20:12:23.161176 extend-filesystems[1463]: Found sda3 Oct 8 20:12:23.161176 extend-filesystems[1463]: Found usr Oct 8 20:12:23.161176 extend-filesystems[1463]: Found sda4 Oct 8 20:12:23.161176 extend-filesystems[1463]: Found sda6 Oct 8 20:12:23.161176 extend-filesystems[1463]: Found sda7 Oct 8 20:12:23.161176 extend-filesystems[1463]: Found sda9 Oct 8 20:12:23.161176 extend-filesystems[1463]: Checking size of /dev/sda9 Oct 8 20:12:23.157543 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 8 20:12:23.241523 jq[1479]: true Oct 8 20:12:23.241711 update_engine[1475]: I20241008 20:12:23.239406 1475 main.cc:92] Flatcar Update Engine starting Oct 8 20:12:23.158370 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 8 20:12:23.197281 (ntainerd)[1488]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 8 20:12:23.203195 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 8 20:12:23.203252 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 8 20:12:23.203957 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 8 20:12:23.203977 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 8 20:12:23.272077 extend-filesystems[1463]: Resized partition /dev/sda9 Oct 8 20:12:23.273291 update_engine[1475]: I20241008 20:12:23.270709 1475 update_check_scheduler.cc:74] Next update check in 5m22s Oct 8 20:12:23.267815 systemd[1]: Started update-engine.service - Update Engine. 
Oct 8 20:12:23.276307 tar[1484]: linux-amd64/helm Oct 8 20:12:23.276594 jq[1495]: true Oct 8 20:12:23.284263 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 8 20:12:23.301236 extend-filesystems[1506]: resize2fs 1.47.1 (20-May-2024) Oct 8 20:12:23.307950 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 8 20:12:23.308691 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 8 20:12:23.317736 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Oct 8 20:12:23.318124 systemd-logind[1471]: New seat seat0. Oct 8 20:12:23.324993 systemd-logind[1471]: Watching system buttons on /dev/input/event2 (Power Button) Oct 8 20:12:23.325012 systemd-logind[1471]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 8 20:12:23.329361 systemd[1]: Started systemd-logind.service - User Login Management. Oct 8 20:12:23.408408 locksmithd[1507]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 8 20:12:23.419944 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1330) Oct 8 20:12:23.431437 bash[1528]: Updated "/home/core/.ssh/authorized_keys" Oct 8 20:12:23.432485 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 8 20:12:23.448501 systemd[1]: Starting sshkeys.service... Oct 8 20:12:23.469223 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Oct 8 20:12:23.482747 extend-filesystems[1506]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Oct 8 20:12:23.482747 extend-filesystems[1506]: old_desc_blocks = 1, new_desc_blocks = 5 Oct 8 20:12:23.482747 extend-filesystems[1506]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Oct 8 20:12:23.489107 extend-filesystems[1463]: Resized filesystem in /dev/sda9 Oct 8 20:12:23.489107 extend-filesystems[1463]: Found sr0 Oct 8 20:12:23.486351 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 8 20:12:23.486545 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 8 20:12:23.499743 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Oct 8 20:12:23.508415 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Oct 8 20:12:23.552889 coreos-metadata[1542]: Oct 08 20:12:23.552 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Oct 8 20:12:23.554400 coreos-metadata[1542]: Oct 08 20:12:23.554 INFO Fetch successful Oct 8 20:12:23.560104 unknown[1542]: wrote ssh authorized keys file for user: core Oct 8 20:12:23.588436 update-ssh-keys[1545]: Updated "/home/core/.ssh/authorized_keys" Oct 8 20:12:23.590194 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Oct 8 20:12:23.593709 containerd[1488]: time="2024-10-08T20:12:23.591341373Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Oct 8 20:12:23.596707 systemd[1]: Finished sshkeys.service. Oct 8 20:12:23.611330 sshd_keygen[1486]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 8 20:12:23.625055 containerd[1488]: time="2024-10-08T20:12:23.624977558Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Oct 8 20:12:23.632009 containerd[1488]: time="2024-10-08T20:12:23.631964102Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:12:23.632009 containerd[1488]: time="2024-10-08T20:12:23.632000980Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 8 20:12:23.632093 containerd[1488]: time="2024-10-08T20:12:23.632018232Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 8 20:12:23.632248 containerd[1488]: time="2024-10-08T20:12:23.632220192Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 8 20:12:23.632248 containerd[1488]: time="2024-10-08T20:12:23.632246340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 8 20:12:23.632365 containerd[1488]: time="2024-10-08T20:12:23.632335898Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:12:23.632365 containerd[1488]: time="2024-10-08T20:12:23.632358892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:12:23.632576 containerd[1488]: time="2024-10-08T20:12:23.632549419Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:12:23.632576 containerd[1488]: time="2024-10-08T20:12:23.632572111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 8 20:12:23.632621 containerd[1488]: time="2024-10-08T20:12:23.632585015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:12:23.632621 containerd[1488]: time="2024-10-08T20:12:23.632594714Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 8 20:12:23.632703 containerd[1488]: time="2024-10-08T20:12:23.632681617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:12:23.632936 containerd[1488]: time="2024-10-08T20:12:23.632912540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:12:23.633055 containerd[1488]: time="2024-10-08T20:12:23.633030992Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:12:23.633055 containerd[1488]: time="2024-10-08T20:12:23.633050679Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 8 20:12:23.634801 containerd[1488]: time="2024-10-08T20:12:23.634534683Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Oct 8 20:12:23.634801 containerd[1488]: time="2024-10-08T20:12:23.634662181Z" level=info msg="metadata content store policy set" policy=shared Oct 8 20:12:23.640721 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 8 20:12:23.650259 containerd[1488]: time="2024-10-08T20:12:23.646414423Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 8 20:12:23.650259 containerd[1488]: time="2024-10-08T20:12:23.646470348Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 8 20:12:23.650259 containerd[1488]: time="2024-10-08T20:12:23.646499432Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 8 20:12:23.650259 containerd[1488]: time="2024-10-08T20:12:23.646514311Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 8 20:12:23.650259 containerd[1488]: time="2024-10-08T20:12:23.646528357Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 8 20:12:23.650259 containerd[1488]: time="2024-10-08T20:12:23.646674220Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 8 20:12:23.650259 containerd[1488]: time="2024-10-08T20:12:23.646864417Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 8 20:12:23.650259 containerd[1488]: time="2024-10-08T20:12:23.646976206Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 8 20:12:23.650259 containerd[1488]: time="2024-10-08T20:12:23.646990614Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 8 20:12:23.650259 containerd[1488]: time="2024-10-08T20:12:23.647002476Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 8 20:12:23.650259 containerd[1488]: time="2024-10-08T20:12:23.647015239Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 8 20:12:23.650259 containerd[1488]: time="2024-10-08T20:12:23.647027022Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 8 20:12:23.650259 containerd[1488]: time="2024-10-08T20:12:23.647037251Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 8 20:12:23.650259 containerd[1488]: time="2024-10-08T20:12:23.647049364Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 8 20:12:23.650569 containerd[1488]: time="2024-10-08T20:12:23.647062759Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 8 20:12:23.650569 containerd[1488]: time="2024-10-08T20:12:23.647074000Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 8 20:12:23.650569 containerd[1488]: time="2024-10-08T20:12:23.647089199Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Oct 8 20:12:23.650569 containerd[1488]: time="2024-10-08T20:12:23.647099608Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 8 20:12:23.650569 containerd[1488]: time="2024-10-08T20:12:23.647123012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 8 20:12:23.650569 containerd[1488]: time="2024-10-08T20:12:23.647134644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 8 20:12:23.650569 containerd[1488]: time="2024-10-08T20:12:23.647173527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 8 20:12:23.650569 containerd[1488]: time="2024-10-08T20:12:23.647185839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 8 20:12:23.650569 containerd[1488]: time="2024-10-08T20:12:23.647196360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 8 20:12:23.650569 containerd[1488]: time="2024-10-08T20:12:23.647218962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 8 20:12:23.650569 containerd[1488]: time="2024-10-08T20:12:23.647229471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 8 20:12:23.650569 containerd[1488]: time="2024-10-08T20:12:23.647240221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 8 20:12:23.650569 containerd[1488]: time="2024-10-08T20:12:23.647251152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 8 20:12:23.650569 containerd[1488]: time="2024-10-08T20:12:23.647265058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 8 20:12:23.650780 containerd[1488]: time="2024-10-08T20:12:23.647276119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 8 20:12:23.650780 containerd[1488]: time="2024-10-08T20:12:23.647288963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 8 20:12:23.650780 containerd[1488]: time="2024-10-08T20:12:23.647303691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 8 20:12:23.650780 containerd[1488]: time="2024-10-08T20:12:23.647316324Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 8 20:12:23.650780 containerd[1488]: time="2024-10-08T20:12:23.647342103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 8 20:12:23.650780 containerd[1488]: time="2024-10-08T20:12:23.647353053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 8 20:12:23.650780 containerd[1488]: time="2024-10-08T20:12:23.647362321Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 8 20:12:23.650780 containerd[1488]: time="2024-10-08T20:12:23.647419067Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Oct 8 20:12:23.650780 containerd[1488]: time="2024-10-08T20:12:23.647436029Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 8 20:12:23.650780 containerd[1488]: time="2024-10-08T20:12:23.647452730Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 8 20:12:23.650780 containerd[1488]: time="2024-10-08T20:12:23.647472046Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 8 20:12:23.650780 containerd[1488]: time="2024-10-08T20:12:23.647488207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 8 20:12:23.650780 containerd[1488]: time="2024-10-08T20:12:23.647500309Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 8 20:12:23.650780 containerd[1488]: time="2024-10-08T20:12:23.647510358Z" level=info msg="NRI interface is disabled by configuration." Oct 8 20:12:23.651009 containerd[1488]: time="2024-10-08T20:12:23.647520968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 8 20:12:23.651084 containerd[1488]: time="2024-10-08T20:12:23.647746731Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 8 20:12:23.651084 containerd[1488]: time="2024-10-08T20:12:23.647796615Z" level=info msg="Connect containerd service" Oct 8 20:12:23.651084 containerd[1488]: time="2024-10-08T20:12:23.647827774Z" level=info msg="using legacy CRI server" Oct 8 20:12:23.651084 containerd[1488]: time="2024-10-08T20:12:23.647835558Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 8 20:12:23.651084 containerd[1488]: time="2024-10-08T20:12:23.647917191Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 8 20:12:23.651513 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 8 20:12:23.657729 containerd[1488]: time="2024-10-08T20:12:23.657211162Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 20:12:23.657729 containerd[1488]: time="2024-10-08T20:12:23.657378827Z" level=info msg="Start subscribing containerd event" Oct 8 20:12:23.657729 containerd[1488]: time="2024-10-08T20:12:23.657446483Z" level=info msg="Start recovering state" Oct 8 20:12:23.657729 containerd[1488]: time="2024-10-08T20:12:23.657552923Z" level=info msg="Start event monitor" Oct 8 20:12:23.657729 containerd[1488]: time="2024-10-08T20:12:23.657578111Z" level=info msg="Start snapshots syncer" Oct 8 20:12:23.657729 containerd[1488]: time="2024-10-08T20:12:23.657614890Z" level=info msg="Start cni network conf syncer for default" Oct 8 20:12:23.657729 containerd[1488]: time="2024-10-08T20:12:23.657624227Z" level=info msg="Start streaming server" Oct 8 20:12:23.657729 containerd[1488]: time="2024-10-08T20:12:23.657676505Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 8 20:12:23.657937 containerd[1488]: time="2024-10-08T20:12:23.657921775Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 8 20:12:23.658118 systemd[1]: Started containerd.service - containerd container runtime. Oct 8 20:12:23.660002 containerd[1488]: time="2024-10-08T20:12:23.658334640Z" level=info msg="containerd successfully booted in 0.068937s" Oct 8 20:12:23.675743 systemd[1]: issuegen.service: Deactivated successfully. Oct 8 20:12:23.676082 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 8 20:12:23.689477 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 8 20:12:23.701004 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 8 20:12:23.710732 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 8 20:12:23.715257 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 8 20:12:23.720028 systemd[1]: Reached target getty.target - Login Prompts. Oct 8 20:12:23.899424 tar[1484]: linux-amd64/LICENSE Oct 8 20:12:23.899424 tar[1484]: linux-amd64/README.md Oct 8 20:12:23.911115 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
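The long CRI configuration dump above records runc running with the systemd cgroup driver (SystemdCgroup:true) and registry.k8s.io/pause:3.8 as the sandbox image. For reference, the same runtime setting expressed in containerd's own config file would look roughly like this; the table names follow containerd 1.7's documented v2 schema rather than a file captured from this host:

    # /etc/containerd/config.toml  (sketch)
    version = 2

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true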
Oct 8 20:12:23.948321 systemd-networkd[1392]: eth1: Gained IPv6LL Oct 8 20:12:23.948974 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection. Oct 8 20:12:23.951740 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 8 20:12:23.957170 systemd[1]: Reached target network-online.target - Network is Online. Oct 8 20:12:23.967682 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:12:23.972465 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 8 20:12:24.003350 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 8 20:12:24.460303 systemd-networkd[1392]: eth0: Gained IPv6LL Oct 8 20:12:24.460999 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection. Oct 8 20:12:24.651590 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:12:24.653118 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 8 20:12:24.657598 systemd[1]: Startup finished in 1.216s (kernel) + 5.695s (initrd) + 4.284s (userspace) = 11.196s. Oct 8 20:12:24.661493 (kubelet)[1589]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:12:25.147338 kubelet[1589]: E1008 20:12:25.147240 1589 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:12:25.150509 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:12:25.150814 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:12:35.401186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 8 20:12:35.408635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:12:35.538006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:12:35.544399 (kubelet)[1610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:12:35.588253 kubelet[1610]: E1008 20:12:35.588155 1610 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:12:35.594763 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:12:35.594949 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:12:45.675106 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 8 20:12:45.680302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:12:45.800761 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 20:12:45.805034 (kubelet)[1626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:12:45.841772 kubelet[1626]: E1008 20:12:45.841716 1626 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:12:45.845327 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:12:45.845513 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:12:55.592027 systemd-timesyncd[1400]: Contacted time server 194.50.19.204:123 (2.flatcar.pool.ntp.org). Oct 8 20:12:55.592082 systemd-timesyncd[1400]: Initial clock synchronization to Tue 2024-10-08 20:12:55.591851 UTC. Oct 8 20:12:55.592469 systemd-resolved[1393]: Clock change detected. Flushing caches. Oct 8 20:12:56.735903 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 8 20:12:56.741031 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:12:56.864120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:12:56.867777 (kubelet)[1642]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:12:56.899587 kubelet[1642]: E1008 20:12:56.899541 1642 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:12:56.903242 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:12:56.903423 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:13:06.985959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 8 20:13:06.991010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:13:07.122009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:13:07.123134 (kubelet)[1658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:13:07.159826 kubelet[1658]: E1008 20:13:07.159718 1658 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:13:07.163716 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:13:07.163970 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:13:09.302636 update_engine[1475]: I20241008 20:13:09.302527 1475 update_attempter.cc:509] Updating boot flags... 
Oct 8 20:13:09.338929 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1675) Oct 8 20:13:09.394917 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1677) Oct 8 20:13:09.447605 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1677) Oct 8 20:13:17.235959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Oct 8 20:13:17.242066 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:13:17.380667 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:13:17.385511 (kubelet)[1695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:13:17.423036 kubelet[1695]: E1008 20:13:17.422966 1695 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:13:17.426534 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:13:17.426712 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:13:27.485944 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Oct 8 20:13:27.491059 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:13:27.628166 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:13:27.632223 (kubelet)[1711]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:13:27.666844 kubelet[1711]: E1008 20:13:27.666749 1711 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:13:27.670015 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:13:27.670229 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:13:37.735907 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Oct 8 20:13:37.741112 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:13:37.879227 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:13:37.883801 (kubelet)[1727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:13:37.921939 kubelet[1727]: E1008 20:13:37.921855 1727 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:13:37.925689 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:13:37.925910 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
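Every kubelet failure in this stretch of the log is the same condition: /var/lib/kubelet/config.yaml does not exist, so the process exits and systemd restarts it roughly every ten seconds. That file is normally generated by kubeadm during init or join, so the loop is expected on a node that has not yet been bootstrapped. A minimal hand-written stand-in would start from the KubeletConfiguration schema; the fields below are a sketch, not a file recovered from this host:

    # /var/lib/kubelet/config.yaml  (sketch)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests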
Oct 8 20:13:47.985776 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Oct 8 20:13:47.991010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:13:48.117812 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:13:48.121980 (kubelet)[1743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:13:48.157426 kubelet[1743]: E1008 20:13:48.157355 1743 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:13:48.160778 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:13:48.161023 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:13:58.235901 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Oct 8 20:13:58.242051 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:13:58.387238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:13:58.391223 (kubelet)[1759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:13:58.429803 kubelet[1759]: E1008 20:13:58.429711 1759 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:13:58.433095 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:13:58.433278 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:14:08.485996 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Oct 8 20:14:08.491417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:14:08.611014 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:14:08.611796 (kubelet)[1775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:14:08.643300 kubelet[1775]: E1008 20:14:08.643242 1775 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:14:08.646517 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:14:08.646699 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:14:18.735780 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Oct 8 20:14:18.741033 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:14:18.878015 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 20:14:18.879127 (kubelet)[1791]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:14:18.912988 kubelet[1791]: E1008 20:14:18.912932 1791 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:14:18.916759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:14:18.916974 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:14:28.985906 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Oct 8 20:14:28.991047 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:14:29.145620 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:14:29.169382 (kubelet)[1807]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:14:29.208305 kubelet[1807]: E1008 20:14:29.208242 1807 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:14:29.211674 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:14:29.211890 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:14:39.236171 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Oct 8 20:14:39.244033 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:14:39.380646 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:14:39.384658 (kubelet)[1823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:14:39.419098 kubelet[1823]: E1008 20:14:39.419034 1823 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:14:39.422792 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:14:39.423001 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:14:49.485789 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Oct 8 20:14:49.491028 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:14:49.624366 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 20:14:49.641177 (kubelet)[1840]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:14:49.677798 kubelet[1840]: E1008 20:14:49.677747 1840 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:14:49.680917 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:14:49.681103 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:14:59.736081 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 15. Oct 8 20:14:59.749090 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:14:59.890603 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:14:59.902219 (kubelet)[1857]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:14:59.943314 kubelet[1857]: E1008 20:14:59.943243 1857 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:14:59.947463 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:14:59.947648 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:15:09.985842 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 16. Oct 8 20:15:09.992317 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:15:10.141040 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:15:10.141234 (kubelet)[1873]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:15:10.174586 kubelet[1873]: E1008 20:15:10.174530 1873 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:15:10.178189 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:15:10.178375 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:15:20.235970 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 17. Oct 8 20:15:20.241106 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:15:20.363815 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 20:15:20.368665 (kubelet)[1889]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:15:20.408310 kubelet[1889]: E1008 20:15:20.408221 1889 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:15:20.411718 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:15:20.411964 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:15:30.486211 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 18. Oct 8 20:15:30.499552 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:15:30.693025 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:15:30.693740 (kubelet)[1905]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:15:30.735138 kubelet[1905]: E1008 20:15:30.735079 1905 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:15:30.739355 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:15:30.739594 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:15:40.985815 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 19. Oct 8 20:15:40.991244 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:15:41.111605 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:15:41.115560 (kubelet)[1921]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:15:41.156769 kubelet[1921]: E1008 20:15:41.156711 1921 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:15:41.160508 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:15:41.160693 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:15:51.235905 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20. Oct 8 20:15:51.241043 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:15:51.368821 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 20:15:51.373223 (kubelet)[1936]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:15:51.410100 kubelet[1936]: E1008 20:15:51.410035 1936 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:15:51.414032 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:15:51.414220 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:16:01.485847 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 21. Oct 8 20:16:01.492118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:16:01.624253 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:16:01.635188 (kubelet)[1952]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:16:01.674063 kubelet[1952]: E1008 20:16:01.673992 1952 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:16:01.676925 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:16:01.677159 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:16:11.735850 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 22. Oct 8 20:16:11.741020 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:16:11.870690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:16:11.875046 (kubelet)[1968]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:16:11.918459 kubelet[1968]: E1008 20:16:11.918414 1968 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:16:11.922918 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:16:11.923152 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:16:21.985809 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 23. Oct 8 20:16:21.991236 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:16:22.114602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 20:16:22.118523 (kubelet)[1985]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:16:22.151931 kubelet[1985]: E1008 20:16:22.151881 1985 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:16:22.155258 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:16:22.155456 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:16:31.871143 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 8 20:16:31.879205 systemd[1]: Started sshd@0-188.245.175.191:22-147.75.109.163:58846.service - OpenSSH per-connection server daemon (147.75.109.163:58846). Oct 8 20:16:32.235720 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 24. Oct 8 20:16:32.240998 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:16:32.373375 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:16:32.386144 (kubelet)[2004]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:16:32.422165 kubelet[2004]: E1008 20:16:32.422106 2004 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:16:32.426032 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:16:32.426278 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:16:32.858194 sshd[1994]: Accepted publickey for core from 147.75.109.163 port 58846 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:16:32.860502 sshd[1994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:16:32.870078 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 8 20:16:32.875138 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 8 20:16:32.878171 systemd-logind[1471]: New session 1 of user core. Oct 8 20:16:32.889999 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 8 20:16:32.896109 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 8 20:16:32.901411 (systemd)[2014]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 8 20:16:32.998674 systemd[2014]: Queued start job for default target default.target. Oct 8 20:16:33.004124 systemd[2014]: Created slice app.slice - User Application Slice. Oct 8 20:16:33.004151 systemd[2014]: Reached target paths.target - Paths. Oct 8 20:16:33.004164 systemd[2014]: Reached target timers.target - Timers. Oct 8 20:16:33.005568 systemd[2014]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 8 20:16:33.018080 systemd[2014]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 8 20:16:33.018197 systemd[2014]: Reached target sockets.target - Sockets. Oct 8 20:16:33.018211 systemd[2014]: Reached target basic.target - Basic System. 
Oct 8 20:16:33.018250 systemd[2014]: Reached target default.target - Main User Target. Oct 8 20:16:33.018282 systemd[2014]: Startup finished in 109ms. Oct 8 20:16:33.018634 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 8 20:16:33.035988 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 8 20:16:33.723143 systemd[1]: Started sshd@1-188.245.175.191:22-147.75.109.163:58854.service - OpenSSH per-connection server daemon (147.75.109.163:58854). Oct 8 20:16:34.674554 sshd[2025]: Accepted publickey for core from 147.75.109.163 port 58854 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:16:34.676482 sshd[2025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:16:34.681469 systemd-logind[1471]: New session 2 of user core. Oct 8 20:16:34.687041 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 8 20:16:35.338087 sshd[2025]: pam_unix(sshd:session): session closed for user core Oct 8 20:16:35.342845 systemd[1]: sshd@1-188.245.175.191:22-147.75.109.163:58854.service: Deactivated successfully. Oct 8 20:16:35.345334 systemd[1]: session-2.scope: Deactivated successfully. Oct 8 20:16:35.346026 systemd-logind[1471]: Session 2 logged out. Waiting for processes to exit. Oct 8 20:16:35.347199 systemd-logind[1471]: Removed session 2. Oct 8 20:16:35.505066 systemd[1]: Started sshd@2-188.245.175.191:22-147.75.109.163:58856.service - OpenSSH per-connection server daemon (147.75.109.163:58856). Oct 8 20:16:36.475681 sshd[2032]: Accepted publickey for core from 147.75.109.163 port 58856 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:16:36.477328 sshd[2032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:16:36.482410 systemd-logind[1471]: New session 3 of user core. Oct 8 20:16:36.489025 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 8 20:16:37.147012 sshd[2032]: pam_unix(sshd:session): session closed for user core Oct 8 20:16:37.151634 systemd[1]: sshd@2-188.245.175.191:22-147.75.109.163:58856.service: Deactivated successfully. Oct 8 20:16:37.154155 systemd[1]: session-3.scope: Deactivated successfully. Oct 8 20:16:37.154803 systemd-logind[1471]: Session 3 logged out. Waiting for processes to exit. Oct 8 20:16:37.155906 systemd-logind[1471]: Removed session 3. Oct 8 20:16:37.311815 systemd[1]: Started sshd@3-188.245.175.191:22-147.75.109.163:60148.service - OpenSSH per-connection server daemon (147.75.109.163:60148). Oct 8 20:16:38.268655 sshd[2039]: Accepted publickey for core from 147.75.109.163 port 60148 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:16:38.270477 sshd[2039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:16:38.275502 systemd-logind[1471]: New session 4 of user core. Oct 8 20:16:38.282029 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 8 20:16:38.935128 sshd[2039]: pam_unix(sshd:session): session closed for user core Oct 8 20:16:38.939333 systemd-logind[1471]: Session 4 logged out. Waiting for processes to exit. Oct 8 20:16:38.939767 systemd[1]: sshd@3-188.245.175.191:22-147.75.109.163:60148.service: Deactivated successfully. Oct 8 20:16:38.941933 systemd[1]: session-4.scope: Deactivated successfully. Oct 8 20:16:38.942748 systemd-logind[1471]: Removed session 4. Oct 8 20:16:39.103171 systemd[1]: Started sshd@4-188.245.175.191:22-147.75.109.163:60160.service - OpenSSH per-connection server daemon (147.75.109.163:60160). 
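The SSH activity interleaved with the restart loop is systemd-logind bookkeeping rather than part of the Kubernetes bring-up: each incoming connection gets its own per-connection sshd@… unit named after the local and remote endpoints, each login for the core user runs in a session-N.scope, and the first login also starts user@500.service, the per-user manager whose startup is timed above. A small sketch for inspecting that state, assuming a shell on the node:

    systemctl list-units 'sshd@*' --no-legend   # one transient unit per open connection
    loginctl list-sessions                      # session-1 ... session-N for user core
    loginctl user-status core                   # shows user-500.slice and user@500.service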
Oct 8 20:16:40.058082 sshd[2046]: Accepted publickey for core from 147.75.109.163 port 60160 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:16:40.059754 sshd[2046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:16:40.065386 systemd-logind[1471]: New session 5 of user core. Oct 8 20:16:40.074130 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 8 20:16:40.574088 sudo[2049]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 8 20:16:40.574412 sudo[2049]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:16:40.587480 sudo[2049]: pam_unix(sudo:session): session closed for user root Oct 8 20:16:40.743119 sshd[2046]: pam_unix(sshd:session): session closed for user core Oct 8 20:16:40.746600 systemd[1]: sshd@4-188.245.175.191:22-147.75.109.163:60160.service: Deactivated successfully. Oct 8 20:16:40.748353 systemd[1]: session-5.scope: Deactivated successfully. Oct 8 20:16:40.749961 systemd-logind[1471]: Session 5 logged out. Waiting for processes to exit. Oct 8 20:16:40.751371 systemd-logind[1471]: Removed session 5. Oct 8 20:16:40.914359 systemd[1]: Started sshd@5-188.245.175.191:22-147.75.109.163:60176.service - OpenSSH per-connection server daemon (147.75.109.163:60176). Oct 8 20:16:41.874390 sshd[2054]: Accepted publickey for core from 147.75.109.163 port 60176 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:16:41.876123 sshd[2054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:16:41.880622 systemd-logind[1471]: New session 6 of user core. Oct 8 20:16:41.898069 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 8 20:16:42.387447 sudo[2058]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 8 20:16:42.388054 sudo[2058]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:16:42.391445 sudo[2058]: pam_unix(sudo:session): session closed for user root Oct 8 20:16:42.397055 sudo[2057]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 8 20:16:42.397374 sudo[2057]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:16:42.421099 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 8 20:16:42.423011 auditctl[2061]: No rules Oct 8 20:16:42.423488 systemd[1]: audit-rules.service: Deactivated successfully. Oct 8 20:16:42.423723 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 8 20:16:42.426398 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 20:16:42.427223 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 25. Oct 8 20:16:42.434008 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:16:42.457146 augenrules[2082]: No rules Oct 8 20:16:42.458842 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 20:16:42.462234 sudo[2057]: pam_unix(sudo:session): session closed for user root Oct 8 20:16:42.565467 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
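The sudo records in sessions 5 and 6 are the provisioning script's security setup: setenforce 1 puts SELinux into enforcing mode, the two shipped audit rule files are deleted, and audit-rules.service is restarted so the auditing rule set is reloaded empty, which is why both auditctl and augenrules report "No rules". Replayed as plain commands, using only what the sudo entries record plus one verification step that is an assumption here:

    sudo setenforce 1
    sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    sudo systemctl restart audit-rules
    sudo auditctl -l    # not in the log; prints "No rules" when the loaded set is empty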
Oct 8 20:16:42.569769 (kubelet)[2092]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:16:42.608135 kubelet[2092]: E1008 20:16:42.608039 2092 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:16:42.611234 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:16:42.611444 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:16:42.620361 sshd[2054]: pam_unix(sshd:session): session closed for user core Oct 8 20:16:42.623908 systemd[1]: sshd@5-188.245.175.191:22-147.75.109.163:60176.service: Deactivated successfully. Oct 8 20:16:42.625598 systemd[1]: session-6.scope: Deactivated successfully. Oct 8 20:16:42.626277 systemd-logind[1471]: Session 6 logged out. Waiting for processes to exit. Oct 8 20:16:42.627324 systemd-logind[1471]: Removed session 6. Oct 8 20:16:42.790969 systemd[1]: Started sshd@6-188.245.175.191:22-147.75.109.163:60184.service - OpenSSH per-connection server daemon (147.75.109.163:60184). Oct 8 20:16:43.764540 sshd[2103]: Accepted publickey for core from 147.75.109.163 port 60184 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:16:43.766364 sshd[2103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:16:43.771054 systemd-logind[1471]: New session 7 of user core. Oct 8 20:16:43.788126 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 8 20:16:44.285544 sudo[2106]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 8 20:16:44.285972 sudo[2106]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:16:44.554075 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 8 20:16:44.556196 (dockerd)[2122]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 8 20:16:44.782507 dockerd[2122]: time="2024-10-08T20:16:44.782440362Z" level=info msg="Starting up" Oct 8 20:16:44.869205 dockerd[2122]: time="2024-10-08T20:16:44.868727764Z" level=info msg="Loading containers: start." Oct 8 20:16:44.962128 kernel: Initializing XFRM netlink socket Oct 8 20:16:45.036665 systemd-networkd[1392]: docker0: Link UP Oct 8 20:16:45.050170 dockerd[2122]: time="2024-10-08T20:16:45.050127061Z" level=info msg="Loading containers: done." 
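Session 7 runs /home/core/install.sh as root, and that script starts the Docker engine. The environment-variable line is systemd noting that docker.service references optional variables (DOCKER_OPTS, DOCKER_CGROUPS and so on) that no drop-in has defined, and the XFRM and docker0 messages are the kernel and systemd-networkd reacting to dockerd creating its default bridge network. Once the daemon reports that initialization has completed (next entries), its state can be checked with the ordinary client; a small sketch, assuming the docker CLI is installed next to the engine:

    systemctl is-active docker.service
    docker info --format '{{.ServerVersion}} {{.Driver}}'    # expect 26.1.0 and overlay2, matching the daemon log
    docker network inspect bridge --format '{{(index .IPAM.Config 0).Subnet}}'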
Oct 8 20:16:45.065630 dockerd[2122]: time="2024-10-08T20:16:45.065577679Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 8 20:16:45.065782 dockerd[2122]: time="2024-10-08T20:16:45.065681285Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Oct 8 20:16:45.065807 dockerd[2122]: time="2024-10-08T20:16:45.065785483Z" level=info msg="Daemon has completed initialization" Oct 8 20:16:45.093373 dockerd[2122]: time="2024-10-08T20:16:45.092547970Z" level=info msg="API listen on /run/docker.sock" Oct 8 20:16:45.092693 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 8 20:16:46.023440 containerd[1488]: time="2024-10-08T20:16:46.023380835Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\"" Oct 8 20:16:46.624595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3202884869.mount: Deactivated successfully. Oct 8 20:16:47.917822 containerd[1488]: time="2024-10-08T20:16:47.917764294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:16:47.918808 containerd[1488]: time="2024-10-08T20:16:47.918643669Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.5: active requests=0, bytes read=32754189" Oct 8 20:16:47.919589 containerd[1488]: time="2024-10-08T20:16:47.919532764Z" level=info msg="ImageCreate event name:\"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:16:47.921741 containerd[1488]: time="2024-10-08T20:16:47.921703726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:16:47.922765 containerd[1488]: time="2024-10-08T20:16:47.922581498Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.5\" with image id \"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\", size \"32750897\" in 1.899149056s" Oct 8 20:16:47.922765 containerd[1488]: time="2024-10-08T20:16:47.922622075Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\" returns image reference \"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\"" Oct 8 20:16:47.945236 containerd[1488]: time="2024-10-08T20:16:47.945189002Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\"" Oct 8 20:16:49.720967 containerd[1488]: time="2024-10-08T20:16:49.720896768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:16:49.722175 containerd[1488]: time="2024-10-08T20:16:49.721878948Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.5: active requests=0, bytes read=29591672" Oct 8 20:16:49.722971 containerd[1488]: time="2024-10-08T20:16:49.722912656Z" level=info msg="ImageCreate event name:\"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:16:49.728938 containerd[1488]: time="2024-10-08T20:16:49.727844106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:16:49.729576 containerd[1488]: time="2024-10-08T20:16:49.729538256Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.5\" with image id \"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\", size \"31122208\" in 1.784310851s" Oct 8 20:16:49.729623 containerd[1488]: time="2024-10-08T20:16:49.729579283Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\" returns image reference \"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\"" Oct 8 20:16:49.751801 containerd[1488]: time="2024-10-08T20:16:49.751757941Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\"" Oct 8 20:16:50.980593 containerd[1488]: time="2024-10-08T20:16:50.980528244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:16:50.981524 containerd[1488]: time="2024-10-08T20:16:50.981481259Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.5: active requests=0, bytes read=17780007" Oct 8 20:16:50.982462 containerd[1488]: time="2024-10-08T20:16:50.982420538Z" level=info msg="ImageCreate event name:\"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:16:50.984946 containerd[1488]: time="2024-10-08T20:16:50.984904632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:16:50.986164 containerd[1488]: time="2024-10-08T20:16:50.985985370Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.5\" with image id \"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\", size \"19310561\" in 1.233898775s" Oct 8 20:16:50.986164 containerd[1488]: time="2024-10-08T20:16:50.986012371Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\" returns image reference \"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\"" Oct 8 20:16:51.005849 containerd[1488]: time="2024-10-08T20:16:51.005820447Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\"" Oct 8 20:16:52.083770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount860733479.mount: Deactivated successfully. 
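The containerd messages from 20:16:46 onward show the v1.30.5 control-plane images being pulled in sequence: kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy so far, with coredns, pause and etcd following below. The journal does not show which command is driving the pulls; a typical installer would get the same result with kubeadm or crictl, so the following is only an equivalent sketch:

    kubeadm config images pull --kubernetes-version v1.30.5
    # or pull an individual image straight through the CRI:
    crictl pull registry.k8s.io/kube-apiserver:v1.30.5
    crictl images --digests | grep registry.k8s.io    # verify what landed in containerd's image store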
Oct 8 20:16:52.415385 containerd[1488]: time="2024-10-08T20:16:52.415270655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:16:52.416282 containerd[1488]: time="2024-10-08T20:16:52.416230804Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.5: active requests=0, bytes read=29039388" Oct 8 20:16:52.416947 containerd[1488]: time="2024-10-08T20:16:52.416908457Z" level=info msg="ImageCreate event name:\"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:16:52.418671 containerd[1488]: time="2024-10-08T20:16:52.418621261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:16:52.419324 containerd[1488]: time="2024-10-08T20:16:52.419204546Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.5\" with image id \"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\", repo tag \"registry.k8s.io/kube-proxy:v1.30.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\", size \"29038381\" in 1.413187526s" Oct 8 20:16:52.419324 containerd[1488]: time="2024-10-08T20:16:52.419231487Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\" returns image reference \"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\"" Oct 8 20:16:52.439140 containerd[1488]: time="2024-10-08T20:16:52.439093755Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 8 20:16:52.735735 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 26. Oct 8 20:16:52.741202 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:16:52.872022 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:16:52.873754 (kubelet)[2363]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:16:52.923233 kubelet[2363]: E1008 20:16:52.923155 2363 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:16:52.926028 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:16:52.926280 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:16:52.973479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2845610993.mount: Deactivated successfully. 
Oct 8 20:16:53.575792 containerd[1488]: time="2024-10-08T20:16:53.575743088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:16:53.576696 containerd[1488]: time="2024-10-08T20:16:53.576656638Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185841" Oct 8 20:16:53.577621 containerd[1488]: time="2024-10-08T20:16:53.577584876Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:16:53.579762 containerd[1488]: time="2024-10-08T20:16:53.579730018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:16:53.580972 containerd[1488]: time="2024-10-08T20:16:53.580648888Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.141509609s" Oct 8 20:16:53.580972 containerd[1488]: time="2024-10-08T20:16:53.580675599Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Oct 8 20:16:53.599136 containerd[1488]: time="2024-10-08T20:16:53.599095934Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Oct 8 20:16:54.132683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount354744504.mount: Deactivated successfully. 
Oct 8 20:16:54.137350 containerd[1488]: time="2024-10-08T20:16:54.137297094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:16:54.138214 containerd[1488]: time="2024-10-08T20:16:54.138175027Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322310" Oct 8 20:16:54.138870 containerd[1488]: time="2024-10-08T20:16:54.138638284Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:16:54.140494 containerd[1488]: time="2024-10-08T20:16:54.140451027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:16:54.141183 containerd[1488]: time="2024-10-08T20:16:54.141155341Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 542.029438ms" Oct 8 20:16:54.141244 containerd[1488]: time="2024-10-08T20:16:54.141185759Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Oct 8 20:16:54.164359 containerd[1488]: time="2024-10-08T20:16:54.164300003Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Oct 8 20:16:54.733086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount809538778.mount: Deactivated successfully. Oct 8 20:16:56.337800 containerd[1488]: time="2024-10-08T20:16:56.337733120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:16:56.339023 containerd[1488]: time="2024-10-08T20:16:56.338973700Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238651" Oct 8 20:16:56.339853 containerd[1488]: time="2024-10-08T20:16:56.339797580Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:16:56.343341 containerd[1488]: time="2024-10-08T20:16:56.343263473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:16:56.344631 containerd[1488]: time="2024-10-08T20:16:56.344587911Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.18024667s" Oct 8 20:16:56.344700 containerd[1488]: time="2024-10-08T20:16:56.344630862Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Oct 8 20:16:58.797774 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
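With etcd 3.5.12-0 pulled, everything a v1.30 control plane needs (apiserver, controller-manager, scheduler, proxy, coredns, pause and etcd) is cached locally, and kubelet.service is stopped and restarted around a systemd reload requested from the install session (PID 2542, 'systemctl') rather than by the failure loop. The instance that comes up afterwards, kubelet[2636] below, finally finds its configuration and keeps running. The journal is consistent with the install script doing roughly the following, although the exact commands are not captured:

    systemctl daemon-reload           # matches the "Reloading requested from client PID 2542" entries below
    systemctl restart kubelet.service
    journalctl -u kubelet -f          # the new instance stays up and starts bootstrapping the node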
Oct 8 20:16:58.806135 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:16:58.827021 systemd[1]: Reloading requested from client PID 2542 ('systemctl') (unit session-7.scope)... Oct 8 20:16:58.827034 systemd[1]: Reloading... Oct 8 20:16:58.945062 zram_generator::config[2582]: No configuration found. Oct 8 20:16:59.036495 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:16:59.104325 systemd[1]: Reloading finished in 276 ms. Oct 8 20:16:59.148726 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 8 20:16:59.148818 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 8 20:16:59.149239 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:16:59.155073 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:16:59.276998 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:16:59.278078 (kubelet)[2636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 20:16:59.315519 kubelet[2636]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 20:16:59.315519 kubelet[2636]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 20:16:59.315519 kubelet[2636]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 20:16:59.317079 kubelet[2636]: I1008 20:16:59.316517 2636 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 20:16:59.511480 kubelet[2636]: I1008 20:16:59.511443 2636 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Oct 8 20:16:59.511480 kubelet[2636]: I1008 20:16:59.511469 2636 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 20:16:59.511718 kubelet[2636]: I1008 20:16:59.511687 2636 server.go:927] "Client rotation is on, will bootstrap in background" Oct 8 20:16:59.536244 kubelet[2636]: I1008 20:16:59.536199 2636 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 20:16:59.537093 kubelet[2636]: E1008 20:16:59.537057 2636 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://188.245.175.191:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 188.245.175.191:6443: connect: connection refused Oct 8 20:16:59.551845 kubelet[2636]: I1008 20:16:59.551796 2636 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 8 20:16:59.556608 kubelet[2636]: I1008 20:16:59.556537 2636 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 20:16:59.557949 kubelet[2636]: I1008 20:16:59.556581 2636 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-1-0-f-c5c751ca26","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 8 20:16:59.558609 kubelet[2636]: I1008 20:16:59.558588 2636 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 20:16:59.558662 kubelet[2636]: I1008 20:16:59.558615 2636 container_manager_linux.go:301] "Creating device plugin manager" Oct 8 20:16:59.560281 kubelet[2636]: I1008 20:16:59.560249 2636 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:16:59.561629 kubelet[2636]: I1008 20:16:59.561614 2636 kubelet.go:400] "Attempting to sync node with API server" Oct 8 20:16:59.561670 kubelet[2636]: I1008 20:16:59.561636 2636 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 20:16:59.562177 kubelet[2636]: W1008 20:16:59.562107 2636 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.245.175.191:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-f-c5c751ca26&limit=500&resourceVersion=0": dial tcp 188.245.175.191:6443: connect: connection refused Oct 8 20:16:59.562177 kubelet[2636]: E1008 20:16:59.562159 2636 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://188.245.175.191:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-f-c5c751ca26&limit=500&resourceVersion=0": dial tcp 188.245.175.191:6443: connect: connection refused Oct 8 20:16:59.563452 kubelet[2636]: I1008 20:16:59.562298 2636 kubelet.go:312] "Adding apiserver pod source" Oct 8 20:16:59.563452 kubelet[2636]: I1008 20:16:59.563221 2636 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 20:16:59.569069 kubelet[2636]: W1008 20:16:59.569021 2636 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://188.245.175.191:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.175.191:6443: connect: connection refused Oct 8 20:16:59.569510 kubelet[2636]: E1008 20:16:59.569200 2636 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://188.245.175.191:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.175.191:6443: connect: connection refused Oct 8 20:16:59.569510 kubelet[2636]: I1008 20:16:59.569293 2636 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 20:16:59.570715 kubelet[2636]: I1008 20:16:59.570675 2636 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 20:16:59.572538 kubelet[2636]: W1008 20:16:59.572505 2636 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 8 20:16:59.574301 kubelet[2636]: I1008 20:16:59.574273 2636 server.go:1264] "Started kubelet" Oct 8 20:16:59.577442 kubelet[2636]: I1008 20:16:59.576929 2636 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 20:16:59.578610 kubelet[2636]: E1008 20:16:59.578529 2636 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://188.245.175.191:6443/api/v1/namespaces/default/events\": dial tcp 188.245.175.191:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-1-0-f-c5c751ca26.17fc9393b2e4e604 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-1-0-f-c5c751ca26,UID:ci-4081-1-0-f-c5c751ca26,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-1-0-f-c5c751ca26,},FirstTimestamp:2024-10-08 20:16:59.574248964 +0000 UTC m=+0.291783325,LastTimestamp:2024-10-08 20:16:59.574248964 +0000 UTC m=+0.291783325,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-1-0-f-c5c751ca26,}" Oct 8 20:16:59.582742 kubelet[2636]: I1008 20:16:59.582640 2636 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 20:16:59.584346 kubelet[2636]: I1008 20:16:59.583976 2636 server.go:455] "Adding debug handlers to kubelet server" Oct 8 20:16:59.585604 kubelet[2636]: I1008 20:16:59.585589 2636 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 8 20:16:59.585963 kubelet[2636]: I1008 20:16:59.585830 2636 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 20:16:59.586674 kubelet[2636]: I1008 20:16:59.586178 2636 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 20:16:59.589623 kubelet[2636]: I1008 20:16:59.589608 2636 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 8 20:16:59.589774 kubelet[2636]: I1008 20:16:59.589762 2636 reconciler.go:26] "Reconciler: start to sync state" Oct 8 20:16:59.589966 kubelet[2636]: E1008 20:16:59.589909 2636 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.175.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-f-c5c751ca26?timeout=10s\": dial tcp 188.245.175.191:6443: connect: connection 
refused" interval="200ms" Oct 8 20:16:59.590484 kubelet[2636]: I1008 20:16:59.590439 2636 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 20:16:59.593017 kubelet[2636]: W1008 20:16:59.592979 2636 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://188.245.175.191:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.175.191:6443: connect: connection refused Oct 8 20:16:59.593293 kubelet[2636]: E1008 20:16:59.593279 2636 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://188.245.175.191:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.175.191:6443: connect: connection refused Oct 8 20:16:59.595376 kubelet[2636]: I1008 20:16:59.594698 2636 factory.go:221] Registration of the containerd container factory successfully Oct 8 20:16:59.595376 kubelet[2636]: I1008 20:16:59.594716 2636 factory.go:221] Registration of the systemd container factory successfully Oct 8 20:16:59.596520 kubelet[2636]: E1008 20:16:59.596502 2636 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 20:16:59.609062 kubelet[2636]: I1008 20:16:59.609037 2636 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 20:16:59.610226 kubelet[2636]: I1008 20:16:59.610209 2636 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 8 20:16:59.610311 kubelet[2636]: I1008 20:16:59.610301 2636 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 20:16:59.610370 kubelet[2636]: I1008 20:16:59.610361 2636 kubelet.go:2337] "Starting kubelet main sync loop" Oct 8 20:16:59.610483 kubelet[2636]: E1008 20:16:59.610440 2636 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 20:16:59.620174 kubelet[2636]: I1008 20:16:59.619962 2636 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 20:16:59.620174 kubelet[2636]: I1008 20:16:59.619974 2636 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 20:16:59.620174 kubelet[2636]: I1008 20:16:59.619990 2636 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:16:59.620521 kubelet[2636]: W1008 20:16:59.620465 2636 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://188.245.175.191:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.175.191:6443: connect: connection refused Oct 8 20:16:59.620612 kubelet[2636]: E1008 20:16:59.620523 2636 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://188.245.175.191:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.175.191:6443: connect: connection refused Oct 8 20:16:59.621658 kubelet[2636]: I1008 20:16:59.621539 2636 policy_none.go:49] "None policy: Start" Oct 8 20:16:59.622589 kubelet[2636]: I1008 20:16:59.622316 2636 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 20:16:59.622589 kubelet[2636]: I1008 20:16:59.622345 2636 state_mem.go:35] "Initializing new in-memory 
state store" Oct 8 20:16:59.628505 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 8 20:16:59.642685 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 8 20:16:59.645726 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 8 20:16:59.653893 kubelet[2636]: I1008 20:16:59.653580 2636 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 20:16:59.653893 kubelet[2636]: I1008 20:16:59.653740 2636 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 8 20:16:59.653893 kubelet[2636]: I1008 20:16:59.653847 2636 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 20:16:59.655508 kubelet[2636]: E1008 20:16:59.655496 2636 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-1-0-f-c5c751ca26\" not found" Oct 8 20:16:59.688379 kubelet[2636]: I1008 20:16:59.688361 2636 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-1-0-f-c5c751ca26" Oct 8 20:16:59.688899 kubelet[2636]: E1008 20:16:59.688834 2636 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.175.191:6443/api/v1/nodes\": dial tcp 188.245.175.191:6443: connect: connection refused" node="ci-4081-1-0-f-c5c751ca26" Oct 8 20:16:59.711293 kubelet[2636]: I1008 20:16:59.711242 2636 topology_manager.go:215] "Topology Admit Handler" podUID="b629d9214fd6b7d3cb4395ef8ce3b55f" podNamespace="kube-system" podName="kube-apiserver-ci-4081-1-0-f-c5c751ca26" Oct 8 20:16:59.713399 kubelet[2636]: I1008 20:16:59.713264 2636 topology_manager.go:215] "Topology Admit Handler" podUID="92015dc9b926e0ad7ce034ab96edd20c" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-1-0-f-c5c751ca26" Oct 8 20:16:59.715192 kubelet[2636]: I1008 20:16:59.715019 2636 topology_manager.go:215] "Topology Admit Handler" podUID="952bc72ab4ceaf38ac1662f21822cacd" podNamespace="kube-system" podName="kube-scheduler-ci-4081-1-0-f-c5c751ca26" Oct 8 20:16:59.723167 systemd[1]: Created slice kubepods-burstable-podb629d9214fd6b7d3cb4395ef8ce3b55f.slice - libcontainer container kubepods-burstable-podb629d9214fd6b7d3cb4395ef8ce3b55f.slice. Oct 8 20:16:59.744468 systemd[1]: Created slice kubepods-burstable-pod92015dc9b926e0ad7ce034ab96edd20c.slice - libcontainer container kubepods-burstable-pod92015dc9b926e0ad7ce034ab96edd20c.slice. Oct 8 20:16:59.756592 systemd[1]: Created slice kubepods-burstable-pod952bc72ab4ceaf38ac1662f21822cacd.slice - libcontainer container kubepods-burstable-pod952bc72ab4ceaf38ac1662f21822cacd.slice. 
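The kubelet that starts at 20:16:59 (PID 2636) is now reading a real configuration: the deprecation warnings show that --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir are still passed as flags, while the large nodeConfig dump (systemd cgroup driver, /var/lib/kubelet root directory, the hard-eviction thresholds) comes from the config file that was missing earlier. The repeated "connection refused" errors against https://188.245.175.191:6443 are expected at this stage, because the API server it is trying to register with is itself one of the static pods (the three "Topology Admit Handler" entries) that this kubelet is about to start from /etc/kubernetes/manifests. A minimal sketch of what /var/lib/kubelet/config.yaml would contain to produce that nodeConfig; the values below are inferred from the log entries above, not copied from the actual file:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    rotateCertificates: true
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"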
Oct 8 20:16:59.791199 kubelet[2636]: E1008 20:16:59.790751 2636 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.175.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-f-c5c751ca26?timeout=10s\": dial tcp 188.245.175.191:6443: connect: connection refused" interval="400ms" Oct 8 20:16:59.792705 kubelet[2636]: I1008 20:16:59.792548 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b629d9214fd6b7d3cb4395ef8ce3b55f-ca-certs\") pod \"kube-apiserver-ci-4081-1-0-f-c5c751ca26\" (UID: \"b629d9214fd6b7d3cb4395ef8ce3b55f\") " pod="kube-system/kube-apiserver-ci-4081-1-0-f-c5c751ca26" Oct 8 20:16:59.891855 kubelet[2636]: I1008 20:16:59.891805 2636 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-1-0-f-c5c751ca26" Oct 8 20:16:59.892258 kubelet[2636]: E1008 20:16:59.892203 2636 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.175.191:6443/api/v1/nodes\": dial tcp 188.245.175.191:6443: connect: connection refused" node="ci-4081-1-0-f-c5c751ca26" Oct 8 20:16:59.893458 kubelet[2636]: I1008 20:16:59.893371 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b629d9214fd6b7d3cb4395ef8ce3b55f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-1-0-f-c5c751ca26\" (UID: \"b629d9214fd6b7d3cb4395ef8ce3b55f\") " pod="kube-system/kube-apiserver-ci-4081-1-0-f-c5c751ca26" Oct 8 20:16:59.893458 kubelet[2636]: I1008 20:16:59.893404 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/92015dc9b926e0ad7ce034ab96edd20c-ca-certs\") pod \"kube-controller-manager-ci-4081-1-0-f-c5c751ca26\" (UID: \"92015dc9b926e0ad7ce034ab96edd20c\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-f-c5c751ca26" Oct 8 20:16:59.893458 kubelet[2636]: I1008 20:16:59.893424 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/92015dc9b926e0ad7ce034ab96edd20c-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-1-0-f-c5c751ca26\" (UID: \"92015dc9b926e0ad7ce034ab96edd20c\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-f-c5c751ca26" Oct 8 20:16:59.893458 kubelet[2636]: I1008 20:16:59.893441 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/92015dc9b926e0ad7ce034ab96edd20c-kubeconfig\") pod \"kube-controller-manager-ci-4081-1-0-f-c5c751ca26\" (UID: \"92015dc9b926e0ad7ce034ab96edd20c\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-f-c5c751ca26" Oct 8 20:16:59.893458 kubelet[2636]: I1008 20:16:59.893457 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/92015dc9b926e0ad7ce034ab96edd20c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-1-0-f-c5c751ca26\" (UID: \"92015dc9b926e0ad7ce034ab96edd20c\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-f-c5c751ca26" Oct 8 20:16:59.893756 kubelet[2636]: I1008 20:16:59.893474 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/952bc72ab4ceaf38ac1662f21822cacd-kubeconfig\") pod \"kube-scheduler-ci-4081-1-0-f-c5c751ca26\" (UID: \"952bc72ab4ceaf38ac1662f21822cacd\") " pod="kube-system/kube-scheduler-ci-4081-1-0-f-c5c751ca26" Oct 8 20:16:59.893756 kubelet[2636]: I1008 20:16:59.893510 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b629d9214fd6b7d3cb4395ef8ce3b55f-k8s-certs\") pod \"kube-apiserver-ci-4081-1-0-f-c5c751ca26\" (UID: \"b629d9214fd6b7d3cb4395ef8ce3b55f\") " pod="kube-system/kube-apiserver-ci-4081-1-0-f-c5c751ca26" Oct 8 20:16:59.893756 kubelet[2636]: I1008 20:16:59.893548 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/92015dc9b926e0ad7ce034ab96edd20c-k8s-certs\") pod \"kube-controller-manager-ci-4081-1-0-f-c5c751ca26\" (UID: \"92015dc9b926e0ad7ce034ab96edd20c\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-f-c5c751ca26" Oct 8 20:17:00.042857 containerd[1488]: time="2024-10-08T20:17:00.042179773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-1-0-f-c5c751ca26,Uid:b629d9214fd6b7d3cb4395ef8ce3b55f,Namespace:kube-system,Attempt:0,}" Oct 8 20:17:00.057405 containerd[1488]: time="2024-10-08T20:17:00.057344398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-1-0-f-c5c751ca26,Uid:92015dc9b926e0ad7ce034ab96edd20c,Namespace:kube-system,Attempt:0,}" Oct 8 20:17:00.059996 containerd[1488]: time="2024-10-08T20:17:00.059947337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-1-0-f-c5c751ca26,Uid:952bc72ab4ceaf38ac1662f21822cacd,Namespace:kube-system,Attempt:0,}" Oct 8 20:17:00.191338 kubelet[2636]: E1008 20:17:00.191278 2636 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.175.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-f-c5c751ca26?timeout=10s\": dial tcp 188.245.175.191:6443: connect: connection refused" interval="800ms" Oct 8 20:17:00.294771 kubelet[2636]: I1008 20:17:00.294620 2636 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-1-0-f-c5c751ca26" Oct 8 20:17:00.295184 kubelet[2636]: E1008 20:17:00.295107 2636 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.175.191:6443/api/v1/nodes\": dial tcp 188.245.175.191:6443: connect: connection refused" node="ci-4081-1-0-f-c5c751ca26" Oct 8 20:17:00.415539 kubelet[2636]: W1008 20:17:00.415465 2636 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.245.175.191:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-f-c5c751ca26&limit=500&resourceVersion=0": dial tcp 188.245.175.191:6443: connect: connection refused Oct 8 20:17:00.415539 kubelet[2636]: E1008 20:17:00.415532 2636 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://188.245.175.191:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-f-c5c751ca26&limit=500&resourceVersion=0": dial tcp 188.245.175.191:6443: connect: connection refused Oct 8 20:17:00.521516 kubelet[2636]: W1008 20:17:00.521463 2636 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://188.245.175.191:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.175.191:6443: connect: connection refused Oct 8 20:17:00.521516 kubelet[2636]: E1008 20:17:00.521513 2636 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://188.245.175.191:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.175.191:6443: connect: connection refused Oct 8 20:17:00.562090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1927900901.mount: Deactivated successfully. Oct 8 20:17:00.568835 containerd[1488]: time="2024-10-08T20:17:00.568779928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:17:00.569732 containerd[1488]: time="2024-10-08T20:17:00.569671878Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:17:00.570629 containerd[1488]: time="2024-10-08T20:17:00.570581199Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 20:17:00.571098 containerd[1488]: time="2024-10-08T20:17:00.571050298Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076" Oct 8 20:17:00.572101 containerd[1488]: time="2024-10-08T20:17:00.572063606Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:17:00.573888 containerd[1488]: time="2024-10-08T20:17:00.573443419Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:17:00.573888 containerd[1488]: time="2024-10-08T20:17:00.573681330Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 20:17:00.578538 containerd[1488]: time="2024-10-08T20:17:00.578511126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:17:00.581360 containerd[1488]: time="2024-10-08T20:17:00.581337567Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 521.318756ms" Oct 8 20:17:00.583197 containerd[1488]: time="2024-10-08T20:17:00.583093752Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 525.669453ms" Oct 8 20:17:00.584727 containerd[1488]: time="2024-10-08T20:17:00.584535153Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 542.274296ms" Oct 8 20:17:00.694300 containerd[1488]: time="2024-10-08T20:17:00.694072669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:17:00.694300 containerd[1488]: time="2024-10-08T20:17:00.694117134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:17:00.694300 containerd[1488]: time="2024-10-08T20:17:00.694129918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:17:00.694300 containerd[1488]: time="2024-10-08T20:17:00.694198427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:17:00.696129 containerd[1488]: time="2024-10-08T20:17:00.695920308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:17:00.696129 containerd[1488]: time="2024-10-08T20:17:00.695966355Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:17:00.696129 containerd[1488]: time="2024-10-08T20:17:00.695991564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:17:00.696362 containerd[1488]: time="2024-10-08T20:17:00.696081413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:17:00.701789 containerd[1488]: time="2024-10-08T20:17:00.701586037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:17:00.701789 containerd[1488]: time="2024-10-08T20:17:00.701630381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:17:00.701789 containerd[1488]: time="2024-10-08T20:17:00.701641792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:17:00.701789 containerd[1488]: time="2024-10-08T20:17:00.701707697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:17:00.723035 systemd[1]: Started cri-containerd-2cc99bd0a10339e5782132bea6b4364399c79726e04c50da35cbe25411a7e795.scope - libcontainer container 2cc99bd0a10339e5782132bea6b4364399c79726e04c50da35cbe25411a7e795. Oct 8 20:17:00.727253 systemd[1]: Started cri-containerd-7d6e4379a3550ca5aa2e52de65726ca3a87e309de90923fde3516b361eece4e0.scope - libcontainer container 7d6e4379a3550ca5aa2e52de65726ca3a87e309de90923fde3516b361eece4e0. Oct 8 20:17:00.734288 systemd[1]: Started cri-containerd-1a95e1b90fe7f66d10ed369650b8538d72953c291dcbe786bf72cef74d59fcad.scope - libcontainer container 1a95e1b90fe7f66d10ed369650b8538d72953c291dcbe786bf72cef74d59fcad. 
Oct 8 20:17:00.777197 containerd[1488]: time="2024-10-08T20:17:00.776823985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-1-0-f-c5c751ca26,Uid:92015dc9b926e0ad7ce034ab96edd20c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d6e4379a3550ca5aa2e52de65726ca3a87e309de90923fde3516b361eece4e0\"" Oct 8 20:17:00.782175 containerd[1488]: time="2024-10-08T20:17:00.782102310Z" level=info msg="CreateContainer within sandbox \"7d6e4379a3550ca5aa2e52de65726ca3a87e309de90923fde3516b361eece4e0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 8 20:17:00.787987 containerd[1488]: time="2024-10-08T20:17:00.787068764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-1-0-f-c5c751ca26,Uid:b629d9214fd6b7d3cb4395ef8ce3b55f,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a95e1b90fe7f66d10ed369650b8538d72953c291dcbe786bf72cef74d59fcad\"" Oct 8 20:17:00.790888 containerd[1488]: time="2024-10-08T20:17:00.790847680Z" level=info msg="CreateContainer within sandbox \"1a95e1b90fe7f66d10ed369650b8538d72953c291dcbe786bf72cef74d59fcad\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 8 20:17:00.803022 containerd[1488]: time="2024-10-08T20:17:00.802927965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-1-0-f-c5c751ca26,Uid:952bc72ab4ceaf38ac1662f21822cacd,Namespace:kube-system,Attempt:0,} returns sandbox id \"2cc99bd0a10339e5782132bea6b4364399c79726e04c50da35cbe25411a7e795\"" Oct 8 20:17:00.806455 containerd[1488]: time="2024-10-08T20:17:00.806380272Z" level=info msg="CreateContainer within sandbox \"2cc99bd0a10339e5782132bea6b4364399c79726e04c50da35cbe25411a7e795\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 8 20:17:00.808921 containerd[1488]: time="2024-10-08T20:17:00.808855850Z" level=info msg="CreateContainer within sandbox \"7d6e4379a3550ca5aa2e52de65726ca3a87e309de90923fde3516b361eece4e0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2975760b3e1a9bf00e2b2ef2da290e39ce07ae02fd6f142f6e197dfb1353a20c\"" Oct 8 20:17:00.809348 containerd[1488]: time="2024-10-08T20:17:00.809329217Z" level=info msg="StartContainer for \"2975760b3e1a9bf00e2b2ef2da290e39ce07ae02fd6f142f6e197dfb1353a20c\"" Oct 8 20:17:00.812123 containerd[1488]: time="2024-10-08T20:17:00.812014020Z" level=info msg="CreateContainer within sandbox \"1a95e1b90fe7f66d10ed369650b8538d72953c291dcbe786bf72cef74d59fcad\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3587c1138c09f00eb008b26dad77398ae7414f99f6a0e7dcbbb0b8a9641c80e4\"" Oct 8 20:17:00.812924 containerd[1488]: time="2024-10-08T20:17:00.812531210Z" level=info msg="StartContainer for \"3587c1138c09f00eb008b26dad77398ae7414f99f6a0e7dcbbb0b8a9641c80e4\"" Oct 8 20:17:00.821589 containerd[1488]: time="2024-10-08T20:17:00.821565366Z" level=info msg="CreateContainer within sandbox \"2cc99bd0a10339e5782132bea6b4364399c79726e04c50da35cbe25411a7e795\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"19b5fb2331763f7fb587ef5f1e922f71c471edd73c291575f3db834812075cf2\"" Oct 8 20:17:00.822101 containerd[1488]: time="2024-10-08T20:17:00.822060473Z" level=info msg="StartContainer for \"19b5fb2331763f7fb587ef5f1e922f71c471edd73c291575f3db834812075cf2\"" Oct 8 20:17:00.847014 systemd[1]: Started cri-containerd-3587c1138c09f00eb008b26dad77398ae7414f99f6a0e7dcbbb0b8a9641c80e4.scope - libcontainer container 
3587c1138c09f00eb008b26dad77398ae7414f99f6a0e7dcbbb0b8a9641c80e4. Oct 8 20:17:00.852024 systemd[1]: Started cri-containerd-2975760b3e1a9bf00e2b2ef2da290e39ce07ae02fd6f142f6e197dfb1353a20c.scope - libcontainer container 2975760b3e1a9bf00e2b2ef2da290e39ce07ae02fd6f142f6e197dfb1353a20c. Oct 8 20:17:00.868233 systemd[1]: Started cri-containerd-19b5fb2331763f7fb587ef5f1e922f71c471edd73c291575f3db834812075cf2.scope - libcontainer container 19b5fb2331763f7fb587ef5f1e922f71c471edd73c291575f3db834812075cf2. Oct 8 20:17:00.908683 kubelet[2636]: W1008 20:17:00.908653 2636 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://188.245.175.191:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.175.191:6443: connect: connection refused Oct 8 20:17:00.908683 kubelet[2636]: E1008 20:17:00.908688 2636 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://188.245.175.191:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.175.191:6443: connect: connection refused Oct 8 20:17:00.914853 containerd[1488]: time="2024-10-08T20:17:00.914582029Z" level=info msg="StartContainer for \"3587c1138c09f00eb008b26dad77398ae7414f99f6a0e7dcbbb0b8a9641c80e4\" returns successfully" Oct 8 20:17:00.923434 containerd[1488]: time="2024-10-08T20:17:00.923401150Z" level=info msg="StartContainer for \"2975760b3e1a9bf00e2b2ef2da290e39ce07ae02fd6f142f6e197dfb1353a20c\" returns successfully" Oct 8 20:17:00.936552 containerd[1488]: time="2024-10-08T20:17:00.936515602Z" level=info msg="StartContainer for \"19b5fb2331763f7fb587ef5f1e922f71c471edd73c291575f3db834812075cf2\" returns successfully" Oct 8 20:17:00.992287 kubelet[2636]: E1008 20:17:00.992239 2636 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.175.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-f-c5c751ca26?timeout=10s\": dial tcp 188.245.175.191:6443: connect: connection refused" interval="1.6s" Oct 8 20:17:01.052044 kubelet[2636]: W1008 20:17:01.051979 2636 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://188.245.175.191:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.175.191:6443: connect: connection refused Oct 8 20:17:01.052044 kubelet[2636]: E1008 20:17:01.052039 2636 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://188.245.175.191:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.175.191:6443: connect: connection refused Oct 8 20:17:01.097204 kubelet[2636]: I1008 20:17:01.097115 2636 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-1-0-f-c5c751ca26" Oct 8 20:17:01.097415 kubelet[2636]: E1008 20:17:01.097392 2636 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.175.191:6443/api/v1/nodes\": dial tcp 188.245.175.191:6443: connect: connection refused" node="ci-4081-1-0-f-c5c751ca26" Oct 8 20:17:02.569546 kubelet[2636]: I1008 20:17:02.569490 2636 apiserver.go:52] "Watching apiserver" Oct 8 20:17:02.590824 kubelet[2636]: I1008 20:17:02.590775 2636 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Oct 8 20:17:02.596758 kubelet[2636]: E1008 20:17:02.596701 2636 nodelease.go:49] "Failed to get node when trying to set owner ref 
to the node lease" err="nodes \"ci-4081-1-0-f-c5c751ca26\" not found" node="ci-4081-1-0-f-c5c751ca26" Oct 8 20:17:02.632837 kubelet[2636]: E1008 20:17:02.632797 2636 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081-1-0-f-c5c751ca26" not found Oct 8 20:17:02.700493 kubelet[2636]: I1008 20:17:02.700435 2636 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-1-0-f-c5c751ca26" Oct 8 20:17:02.709261 kubelet[2636]: I1008 20:17:02.708584 2636 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-1-0-f-c5c751ca26" Oct 8 20:17:04.222499 systemd[1]: Reloading requested from client PID 2909 ('systemctl') (unit session-7.scope)... Oct 8 20:17:04.222517 systemd[1]: Reloading... Oct 8 20:17:04.360952 zram_generator::config[2958]: No configuration found. Oct 8 20:17:04.469422 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:17:04.567962 systemd[1]: Reloading finished in 345 ms. Oct 8 20:17:04.620486 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:17:04.634724 systemd[1]: kubelet.service: Deactivated successfully. Oct 8 20:17:04.635299 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:17:04.645166 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:17:04.820126 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:17:04.822896 (kubelet)[3000]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 20:17:04.875289 kubelet[3000]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 20:17:04.875289 kubelet[3000]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 20:17:04.875289 kubelet[3000]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 20:17:04.878925 kubelet[3000]: I1008 20:17:04.878260 3000 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 20:17:04.888636 kubelet[3000]: I1008 20:17:04.888596 3000 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Oct 8 20:17:04.888636 kubelet[3000]: I1008 20:17:04.888618 3000 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 20:17:04.888825 kubelet[3000]: I1008 20:17:04.888798 3000 server.go:927] "Client rotation is on, will bootstrap in background" Oct 8 20:17:04.890588 kubelet[3000]: I1008 20:17:04.890478 3000 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Oct 8 20:17:04.892376 kubelet[3000]: I1008 20:17:04.891841 3000 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 20:17:04.900059 kubelet[3000]: I1008 20:17:04.900031 3000 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 8 20:17:04.900768 kubelet[3000]: I1008 20:17:04.900533 3000 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 20:17:04.900768 kubelet[3000]: I1008 20:17:04.900572 3000 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-1-0-f-c5c751ca26","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 8 20:17:04.901404 kubelet[3000]: I1008 20:17:04.901383 3000 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 20:17:04.901404 kubelet[3000]: I1008 20:17:04.901403 3000 container_manager_linux.go:301] "Creating device plugin manager" Oct 8 20:17:04.901476 kubelet[3000]: I1008 20:17:04.901465 3000 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:17:04.902252 kubelet[3000]: I1008 20:17:04.901583 3000 kubelet.go:400] "Attempting to sync node with API server" Oct 8 20:17:04.902252 kubelet[3000]: I1008 20:17:04.901628 3000 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 20:17:04.902252 kubelet[3000]: I1008 20:17:04.901650 3000 kubelet.go:312] "Adding apiserver pod source" Oct 8 20:17:04.902252 kubelet[3000]: I1008 20:17:04.901667 3000 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 20:17:04.905491 kubelet[3000]: I1008 20:17:04.905465 3000 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 20:17:04.905751 kubelet[3000]: I1008 20:17:04.905736 3000 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 20:17:04.906322 kubelet[3000]: I1008 20:17:04.906304 3000 server.go:1264] "Started kubelet" Oct 8 20:17:04.910314 kubelet[3000]: I1008 
20:17:04.908698 3000 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 20:17:04.923308 kubelet[3000]: I1008 20:17:04.923273 3000 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 8 20:17:04.926055 kubelet[3000]: I1008 20:17:04.925820 3000 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 8 20:17:04.926055 kubelet[3000]: I1008 20:17:04.923523 3000 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 20:17:04.930389 kubelet[3000]: I1008 20:17:04.930367 3000 server.go:455] "Adding debug handlers to kubelet server" Oct 8 20:17:04.934608 kubelet[3000]: I1008 20:17:04.934023 3000 reconciler.go:26] "Reconciler: start to sync state" Oct 8 20:17:04.934608 kubelet[3000]: I1008 20:17:04.923592 3000 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 20:17:04.934608 kubelet[3000]: I1008 20:17:04.934539 3000 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 20:17:04.935050 kubelet[3000]: I1008 20:17:04.935030 3000 factory.go:221] Registration of the systemd container factory successfully Oct 8 20:17:04.935257 kubelet[3000]: I1008 20:17:04.935226 3000 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 20:17:04.940661 kubelet[3000]: E1008 20:17:04.940635 3000 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 20:17:04.943091 kubelet[3000]: I1008 20:17:04.943072 3000 factory.go:221] Registration of the containerd container factory successfully Oct 8 20:17:04.947481 kubelet[3000]: I1008 20:17:04.946473 3000 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 20:17:04.947805 kubelet[3000]: I1008 20:17:04.947778 3000 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 8 20:17:04.947851 kubelet[3000]: I1008 20:17:04.947818 3000 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 20:17:04.947851 kubelet[3000]: I1008 20:17:04.947841 3000 kubelet.go:2337] "Starting kubelet main sync loop" Oct 8 20:17:04.947953 kubelet[3000]: E1008 20:17:04.947931 3000 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 20:17:05.002129 kubelet[3000]: I1008 20:17:05.002105 3000 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 20:17:05.002460 kubelet[3000]: I1008 20:17:05.002340 3000 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 20:17:05.002460 kubelet[3000]: I1008 20:17:05.002362 3000 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:17:05.002730 kubelet[3000]: I1008 20:17:05.002657 3000 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 8 20:17:05.002730 kubelet[3000]: I1008 20:17:05.002671 3000 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 8 20:17:05.002730 kubelet[3000]: I1008 20:17:05.002696 3000 policy_none.go:49] "None policy: Start" Oct 8 20:17:05.003705 kubelet[3000]: I1008 20:17:05.003496 3000 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 20:17:05.003705 kubelet[3000]: I1008 20:17:05.003522 3000 state_mem.go:35] "Initializing new in-memory state store" Oct 8 20:17:05.003705 kubelet[3000]: I1008 20:17:05.003643 3000 state_mem.go:75] "Updated machine memory state" Oct 8 20:17:05.008973 kubelet[3000]: I1008 20:17:05.008928 3000 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 20:17:05.009295 kubelet[3000]: I1008 20:17:05.009190 3000 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 8 20:17:05.009408 kubelet[3000]: I1008 20:17:05.009388 3000 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 20:17:05.029240 kubelet[3000]: I1008 20:17:05.029200 3000 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-1-0-f-c5c751ca26" Oct 8 20:17:05.035766 kubelet[3000]: I1008 20:17:05.035735 3000 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-1-0-f-c5c751ca26" Oct 8 20:17:05.035946 kubelet[3000]: I1008 20:17:05.035798 3000 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-1-0-f-c5c751ca26" Oct 8 20:17:05.048204 kubelet[3000]: I1008 20:17:05.048159 3000 topology_manager.go:215] "Topology Admit Handler" podUID="b629d9214fd6b7d3cb4395ef8ce3b55f" podNamespace="kube-system" podName="kube-apiserver-ci-4081-1-0-f-c5c751ca26" Oct 8 20:17:05.048304 kubelet[3000]: I1008 20:17:05.048231 3000 topology_manager.go:215] "Topology Admit Handler" podUID="92015dc9b926e0ad7ce034ab96edd20c" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-1-0-f-c5c751ca26" Oct 8 20:17:05.048304 kubelet[3000]: I1008 20:17:05.048276 3000 topology_manager.go:215] "Topology Admit Handler" podUID="952bc72ab4ceaf38ac1662f21822cacd" podNamespace="kube-system" podName="kube-scheduler-ci-4081-1-0-f-c5c751ca26" Oct 8 20:17:05.135620 kubelet[3000]: I1008 20:17:05.135421 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/92015dc9b926e0ad7ce034ab96edd20c-k8s-certs\") pod \"kube-controller-manager-ci-4081-1-0-f-c5c751ca26\" (UID: 
\"92015dc9b926e0ad7ce034ab96edd20c\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-f-c5c751ca26" Oct 8 20:17:05.135620 kubelet[3000]: I1008 20:17:05.135467 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/92015dc9b926e0ad7ce034ab96edd20c-kubeconfig\") pod \"kube-controller-manager-ci-4081-1-0-f-c5c751ca26\" (UID: \"92015dc9b926e0ad7ce034ab96edd20c\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-f-c5c751ca26" Oct 8 20:17:05.135620 kubelet[3000]: I1008 20:17:05.135489 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/92015dc9b926e0ad7ce034ab96edd20c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-1-0-f-c5c751ca26\" (UID: \"92015dc9b926e0ad7ce034ab96edd20c\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-f-c5c751ca26" Oct 8 20:17:05.135620 kubelet[3000]: I1008 20:17:05.135506 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b629d9214fd6b7d3cb4395ef8ce3b55f-ca-certs\") pod \"kube-apiserver-ci-4081-1-0-f-c5c751ca26\" (UID: \"b629d9214fd6b7d3cb4395ef8ce3b55f\") " pod="kube-system/kube-apiserver-ci-4081-1-0-f-c5c751ca26" Oct 8 20:17:05.135620 kubelet[3000]: I1008 20:17:05.135521 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b629d9214fd6b7d3cb4395ef8ce3b55f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-1-0-f-c5c751ca26\" (UID: \"b629d9214fd6b7d3cb4395ef8ce3b55f\") " pod="kube-system/kube-apiserver-ci-4081-1-0-f-c5c751ca26" Oct 8 20:17:05.135888 kubelet[3000]: I1008 20:17:05.135536 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/92015dc9b926e0ad7ce034ab96edd20c-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-1-0-f-c5c751ca26\" (UID: \"92015dc9b926e0ad7ce034ab96edd20c\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-f-c5c751ca26" Oct 8 20:17:05.135888 kubelet[3000]: I1008 20:17:05.135550 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/952bc72ab4ceaf38ac1662f21822cacd-kubeconfig\") pod \"kube-scheduler-ci-4081-1-0-f-c5c751ca26\" (UID: \"952bc72ab4ceaf38ac1662f21822cacd\") " pod="kube-system/kube-scheduler-ci-4081-1-0-f-c5c751ca26" Oct 8 20:17:05.135888 kubelet[3000]: I1008 20:17:05.135565 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b629d9214fd6b7d3cb4395ef8ce3b55f-k8s-certs\") pod \"kube-apiserver-ci-4081-1-0-f-c5c751ca26\" (UID: \"b629d9214fd6b7d3cb4395ef8ce3b55f\") " pod="kube-system/kube-apiserver-ci-4081-1-0-f-c5c751ca26" Oct 8 20:17:05.135888 kubelet[3000]: I1008 20:17:05.135579 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/92015dc9b926e0ad7ce034ab96edd20c-ca-certs\") pod \"kube-controller-manager-ci-4081-1-0-f-c5c751ca26\" (UID: \"92015dc9b926e0ad7ce034ab96edd20c\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-f-c5c751ca26" Oct 8 20:17:05.903276 
kubelet[3000]: I1008 20:17:05.903020 3000 apiserver.go:52] "Watching apiserver" Oct 8 20:17:05.926539 kubelet[3000]: I1008 20:17:05.926376 3000 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Oct 8 20:17:06.030534 kubelet[3000]: E1008 20:17:06.030100 3000 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-1-0-f-c5c751ca26\" already exists" pod="kube-system/kube-apiserver-ci-4081-1-0-f-c5c751ca26" Oct 8 20:17:06.084066 kubelet[3000]: I1008 20:17:06.083981 3000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-1-0-f-c5c751ca26" podStartSLOduration=1.083960226 podStartE2EDuration="1.083960226s" podCreationTimestamp="2024-10-08 20:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:17:06.063432379 +0000 UTC m=+1.235016757" watchObservedRunningTime="2024-10-08 20:17:06.083960226 +0000 UTC m=+1.255544603" Oct 8 20:17:06.107352 kubelet[3000]: I1008 20:17:06.105829 3000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-1-0-f-c5c751ca26" podStartSLOduration=1.105798392 podStartE2EDuration="1.105798392s" podCreationTimestamp="2024-10-08 20:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:17:06.086015096 +0000 UTC m=+1.257599484" watchObservedRunningTime="2024-10-08 20:17:06.105798392 +0000 UTC m=+1.277382771" Oct 8 20:17:06.107352 kubelet[3000]: I1008 20:17:06.105914 3000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-1-0-f-c5c751ca26" podStartSLOduration=1.105908912 podStartE2EDuration="1.105908912s" podCreationTimestamp="2024-10-08 20:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:17:06.104206439 +0000 UTC m=+1.275790817" watchObservedRunningTime="2024-10-08 20:17:06.105908912 +0000 UTC m=+1.277493290" Oct 8 20:17:09.345907 sudo[2106]: pam_unix(sudo:session): session closed for user root Oct 8 20:17:09.504979 sshd[2103]: pam_unix(sshd:session): session closed for user core Oct 8 20:17:09.508520 systemd-logind[1471]: Session 7 logged out. Waiting for processes to exit. Oct 8 20:17:09.508705 systemd[1]: sshd@6-188.245.175.191:22-147.75.109.163:60184.service: Deactivated successfully. Oct 8 20:17:09.510154 systemd[1]: session-7.scope: Deactivated successfully. Oct 8 20:17:09.510392 systemd[1]: session-7.scope: Consumed 4.037s CPU time, 188.0M memory peak, 0B memory swap peak. Oct 8 20:17:09.511379 systemd-logind[1471]: Removed session 7. Oct 8 20:17:18.985086 kubelet[3000]: I1008 20:17:18.985055 3000 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 8 20:17:18.986272 containerd[1488]: time="2024-10-08T20:17:18.986219535Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 8 20:17:18.987492 kubelet[3000]: I1008 20:17:18.986404 3000 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 8 20:17:19.753938 kubelet[3000]: I1008 20:17:19.753851 3000 topology_manager.go:215] "Topology Admit Handler" podUID="a90c9915-aa43-4101-9ccf-9c1c22b8b504" podNamespace="kube-system" podName="kube-proxy-4khd8" Oct 8 20:17:19.773210 systemd[1]: Created slice kubepods-besteffort-poda90c9915_aa43_4101_9ccf_9c1c22b8b504.slice - libcontainer container kubepods-besteffort-poda90c9915_aa43_4101_9ccf_9c1c22b8b504.slice. Oct 8 20:17:19.841507 kubelet[3000]: I1008 20:17:19.841473 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a90c9915-aa43-4101-9ccf-9c1c22b8b504-lib-modules\") pod \"kube-proxy-4khd8\" (UID: \"a90c9915-aa43-4101-9ccf-9c1c22b8b504\") " pod="kube-system/kube-proxy-4khd8" Oct 8 20:17:19.841507 kubelet[3000]: I1008 20:17:19.841509 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a90c9915-aa43-4101-9ccf-9c1c22b8b504-xtables-lock\") pod \"kube-proxy-4khd8\" (UID: \"a90c9915-aa43-4101-9ccf-9c1c22b8b504\") " pod="kube-system/kube-proxy-4khd8" Oct 8 20:17:19.841659 kubelet[3000]: I1008 20:17:19.841526 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a90c9915-aa43-4101-9ccf-9c1c22b8b504-kube-proxy\") pod \"kube-proxy-4khd8\" (UID: \"a90c9915-aa43-4101-9ccf-9c1c22b8b504\") " pod="kube-system/kube-proxy-4khd8" Oct 8 20:17:19.841659 kubelet[3000]: I1008 20:17:19.841540 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf54n\" (UniqueName: \"kubernetes.io/projected/a90c9915-aa43-4101-9ccf-9c1c22b8b504-kube-api-access-kf54n\") pod \"kube-proxy-4khd8\" (UID: \"a90c9915-aa43-4101-9ccf-9c1c22b8b504\") " pod="kube-system/kube-proxy-4khd8" Oct 8 20:17:19.962801 kubelet[3000]: I1008 20:17:19.962111 3000 topology_manager.go:215] "Topology Admit Handler" podUID="f10ac2b2-691c-47f5-8c19-6ca303f1a041" podNamespace="tigera-operator" podName="tigera-operator-77f994b5bb-xtdws" Oct 8 20:17:19.971252 systemd[1]: Created slice kubepods-besteffort-podf10ac2b2_691c_47f5_8c19_6ca303f1a041.slice - libcontainer container kubepods-besteffort-podf10ac2b2_691c_47f5_8c19_6ca303f1a041.slice. 
Oct 8 20:17:20.042979 kubelet[3000]: I1008 20:17:20.042813 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f10ac2b2-691c-47f5-8c19-6ca303f1a041-var-lib-calico\") pod \"tigera-operator-77f994b5bb-xtdws\" (UID: \"f10ac2b2-691c-47f5-8c19-6ca303f1a041\") " pod="tigera-operator/tigera-operator-77f994b5bb-xtdws" Oct 8 20:17:20.042979 kubelet[3000]: I1008 20:17:20.042878 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j4r2\" (UniqueName: \"kubernetes.io/projected/f10ac2b2-691c-47f5-8c19-6ca303f1a041-kube-api-access-5j4r2\") pod \"tigera-operator-77f994b5bb-xtdws\" (UID: \"f10ac2b2-691c-47f5-8c19-6ca303f1a041\") " pod="tigera-operator/tigera-operator-77f994b5bb-xtdws" Oct 8 20:17:20.082629 containerd[1488]: time="2024-10-08T20:17:20.082503918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4khd8,Uid:a90c9915-aa43-4101-9ccf-9c1c22b8b504,Namespace:kube-system,Attempt:0,}" Oct 8 20:17:20.106387 containerd[1488]: time="2024-10-08T20:17:20.106293903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:17:20.106387 containerd[1488]: time="2024-10-08T20:17:20.106335302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:17:20.106387 containerd[1488]: time="2024-10-08T20:17:20.106345301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:17:20.106663 containerd[1488]: time="2024-10-08T20:17:20.106413069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:17:20.123163 systemd[1]: run-containerd-runc-k8s.io-75197cb1e92c6668cd30c3f08eb3d318e6f2c580cac3aa5d050b7825a89a89b5-runc.9FXzZl.mount: Deactivated successfully. Oct 8 20:17:20.131990 systemd[1]: Started cri-containerd-75197cb1e92c6668cd30c3f08eb3d318e6f2c580cac3aa5d050b7825a89a89b5.scope - libcontainer container 75197cb1e92c6668cd30c3f08eb3d318e6f2c580cac3aa5d050b7825a89a89b5. 
Oct 8 20:17:20.155797 containerd[1488]: time="2024-10-08T20:17:20.155426191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4khd8,Uid:a90c9915-aa43-4101-9ccf-9c1c22b8b504,Namespace:kube-system,Attempt:0,} returns sandbox id \"75197cb1e92c6668cd30c3f08eb3d318e6f2c580cac3aa5d050b7825a89a89b5\"" Oct 8 20:17:20.158573 containerd[1488]: time="2024-10-08T20:17:20.158550224Z" level=info msg="CreateContainer within sandbox \"75197cb1e92c6668cd30c3f08eb3d318e6f2c580cac3aa5d050b7825a89a89b5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 8 20:17:20.170559 containerd[1488]: time="2024-10-08T20:17:20.170498272Z" level=info msg="CreateContainer within sandbox \"75197cb1e92c6668cd30c3f08eb3d318e6f2c580cac3aa5d050b7825a89a89b5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8e25f86191d75ee3070efa6f4d5d730a8801df850baad174f6aba4ed89b5f91f\"" Oct 8 20:17:20.171686 containerd[1488]: time="2024-10-08T20:17:20.171071157Z" level=info msg="StartContainer for \"8e25f86191d75ee3070efa6f4d5d730a8801df850baad174f6aba4ed89b5f91f\"" Oct 8 20:17:20.194985 systemd[1]: Started cri-containerd-8e25f86191d75ee3070efa6f4d5d730a8801df850baad174f6aba4ed89b5f91f.scope - libcontainer container 8e25f86191d75ee3070efa6f4d5d730a8801df850baad174f6aba4ed89b5f91f. Oct 8 20:17:20.220456 containerd[1488]: time="2024-10-08T20:17:20.220420073Z" level=info msg="StartContainer for \"8e25f86191d75ee3070efa6f4d5d730a8801df850baad174f6aba4ed89b5f91f\" returns successfully" Oct 8 20:17:20.274481 containerd[1488]: time="2024-10-08T20:17:20.274438089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-xtdws,Uid:f10ac2b2-691c-47f5-8c19-6ca303f1a041,Namespace:tigera-operator,Attempt:0,}" Oct 8 20:17:20.305390 containerd[1488]: time="2024-10-08T20:17:20.304829730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:17:20.305390 containerd[1488]: time="2024-10-08T20:17:20.304930311Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:17:20.305390 containerd[1488]: time="2024-10-08T20:17:20.304945330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:17:20.306147 containerd[1488]: time="2024-10-08T20:17:20.306096739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:17:20.323023 systemd[1]: Started cri-containerd-9143de4a143bffe3ec6e2b493bcf0a32ad08afdc38101517bac4b1c106fb9768.scope - libcontainer container 9143de4a143bffe3ec6e2b493bcf0a32ad08afdc38101517bac4b1c106fb9768. 
Oct 8 20:17:20.364927 containerd[1488]: time="2024-10-08T20:17:20.364851262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-xtdws,Uid:f10ac2b2-691c-47f5-8c19-6ca303f1a041,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9143de4a143bffe3ec6e2b493bcf0a32ad08afdc38101517bac4b1c106fb9768\"" Oct 8 20:17:20.367406 containerd[1488]: time="2024-10-08T20:17:20.367250503Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 8 20:17:21.020973 kubelet[3000]: I1008 20:17:21.020905 3000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4khd8" podStartSLOduration=2.020883558 podStartE2EDuration="2.020883558s" podCreationTimestamp="2024-10-08 20:17:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:17:21.020628395 +0000 UTC m=+16.192212843" watchObservedRunningTime="2024-10-08 20:17:21.020883558 +0000 UTC m=+16.192467986" Oct 8 20:17:22.014251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1144347768.mount: Deactivated successfully. Oct 8 20:17:22.370364 containerd[1488]: time="2024-10-08T20:17:22.370225672Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:17:22.371613 containerd[1488]: time="2024-10-08T20:17:22.371438017Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136505" Oct 8 20:17:22.372210 containerd[1488]: time="2024-10-08T20:17:22.372170464Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:17:22.373947 containerd[1488]: time="2024-10-08T20:17:22.373917992Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:17:22.374708 containerd[1488]: time="2024-10-08T20:17:22.374595234Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.007319934s" Oct 8 20:17:22.374708 containerd[1488]: time="2024-10-08T20:17:22.374624368Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Oct 8 20:17:22.379096 containerd[1488]: time="2024-10-08T20:17:22.379008147Z" level=info msg="CreateContainer within sandbox \"9143de4a143bffe3ec6e2b493bcf0a32ad08afdc38101517bac4b1c106fb9768\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 8 20:17:22.403310 containerd[1488]: time="2024-10-08T20:17:22.403275755Z" level=info msg="CreateContainer within sandbox \"9143de4a143bffe3ec6e2b493bcf0a32ad08afdc38101517bac4b1c106fb9768\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"072068d38e5e4f6f2df425462c1637ca6c3f2529e02456bdbc8712a7d3413853\"" Oct 8 20:17:22.404766 containerd[1488]: time="2024-10-08T20:17:22.403797382Z" level=info msg="StartContainer for \"072068d38e5e4f6f2df425462c1637ca6c3f2529e02456bdbc8712a7d3413853\"" Oct 8 
20:17:22.445016 systemd[1]: Started cri-containerd-072068d38e5e4f6f2df425462c1637ca6c3f2529e02456bdbc8712a7d3413853.scope - libcontainer container 072068d38e5e4f6f2df425462c1637ca6c3f2529e02456bdbc8712a7d3413853. Oct 8 20:17:22.469722 containerd[1488]: time="2024-10-08T20:17:22.469678122Z" level=info msg="StartContainer for \"072068d38e5e4f6f2df425462c1637ca6c3f2529e02456bdbc8712a7d3413853\" returns successfully" Oct 8 20:17:24.968323 kubelet[3000]: I1008 20:17:24.968280 3000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-77f994b5bb-xtdws" podStartSLOduration=3.957339232 podStartE2EDuration="5.96826399s" podCreationTimestamp="2024-10-08 20:17:19 +0000 UTC" firstStartedPulling="2024-10-08 20:17:20.366723064 +0000 UTC m=+15.538307442" lastFinishedPulling="2024-10-08 20:17:22.377647822 +0000 UTC m=+17.549232200" observedRunningTime="2024-10-08 20:17:23.023902153 +0000 UTC m=+18.195486531" watchObservedRunningTime="2024-10-08 20:17:24.96826399 +0000 UTC m=+20.139848368" Oct 8 20:17:25.314150 kubelet[3000]: I1008 20:17:25.314102 3000 topology_manager.go:215] "Topology Admit Handler" podUID="3d59cb8d-780e-4f0b-b234-ac480cf2354f" podNamespace="calico-system" podName="calico-typha-84cb7c9cf-5qzxd" Oct 8 20:17:25.327670 systemd[1]: Created slice kubepods-besteffort-pod3d59cb8d_780e_4f0b_b234_ac480cf2354f.slice - libcontainer container kubepods-besteffort-pod3d59cb8d_780e_4f0b_b234_ac480cf2354f.slice. Oct 8 20:17:25.376301 kubelet[3000]: I1008 20:17:25.376247 3000 topology_manager.go:215] "Topology Admit Handler" podUID="fa1c824c-f769-4ea5-a6f3-ef6a6c852ec9" podNamespace="calico-system" podName="calico-node-d5llr" Oct 8 20:17:25.378334 kubelet[3000]: I1008 20:17:25.378306 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d59cb8d-780e-4f0b-b234-ac480cf2354f-tigera-ca-bundle\") pod \"calico-typha-84cb7c9cf-5qzxd\" (UID: \"3d59cb8d-780e-4f0b-b234-ac480cf2354f\") " pod="calico-system/calico-typha-84cb7c9cf-5qzxd" Oct 8 20:17:25.378418 kubelet[3000]: I1008 20:17:25.378344 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3d59cb8d-780e-4f0b-b234-ac480cf2354f-typha-certs\") pod \"calico-typha-84cb7c9cf-5qzxd\" (UID: \"3d59cb8d-780e-4f0b-b234-ac480cf2354f\") " pod="calico-system/calico-typha-84cb7c9cf-5qzxd" Oct 8 20:17:25.378418 kubelet[3000]: I1008 20:17:25.378367 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8rtm\" (UniqueName: \"kubernetes.io/projected/3d59cb8d-780e-4f0b-b234-ac480cf2354f-kube-api-access-t8rtm\") pod \"calico-typha-84cb7c9cf-5qzxd\" (UID: \"3d59cb8d-780e-4f0b-b234-ac480cf2354f\") " pod="calico-system/calico-typha-84cb7c9cf-5qzxd" Oct 8 20:17:25.397912 systemd[1]: Created slice kubepods-besteffort-podfa1c824c_f769_4ea5_a6f3_ef6a6c852ec9.slice - libcontainer container kubepods-besteffort-podfa1c824c_f769_4ea5_a6f3_ef6a6c852ec9.slice. 
Oct 8 20:17:25.478897 kubelet[3000]: I1008 20:17:25.478701 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fa1c824c-f769-4ea5-a6f3-ef6a6c852ec9-flexvol-driver-host\") pod \"calico-node-d5llr\" (UID: \"fa1c824c-f769-4ea5-a6f3-ef6a6c852ec9\") " pod="calico-system/calico-node-d5llr" Oct 8 20:17:25.478897 kubelet[3000]: I1008 20:17:25.478741 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa1c824c-f769-4ea5-a6f3-ef6a6c852ec9-xtables-lock\") pod \"calico-node-d5llr\" (UID: \"fa1c824c-f769-4ea5-a6f3-ef6a6c852ec9\") " pod="calico-system/calico-node-d5llr" Oct 8 20:17:25.478897 kubelet[3000]: I1008 20:17:25.478760 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fa1c824c-f769-4ea5-a6f3-ef6a6c852ec9-policysync\") pod \"calico-node-d5llr\" (UID: \"fa1c824c-f769-4ea5-a6f3-ef6a6c852ec9\") " pod="calico-system/calico-node-d5llr" Oct 8 20:17:25.478897 kubelet[3000]: I1008 20:17:25.478774 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fa1c824c-f769-4ea5-a6f3-ef6a6c852ec9-cni-log-dir\") pod \"calico-node-d5llr\" (UID: \"fa1c824c-f769-4ea5-a6f3-ef6a6c852ec9\") " pod="calico-system/calico-node-d5llr" Oct 8 20:17:25.478897 kubelet[3000]: I1008 20:17:25.478788 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa1c824c-f769-4ea5-a6f3-ef6a6c852ec9-lib-modules\") pod \"calico-node-d5llr\" (UID: \"fa1c824c-f769-4ea5-a6f3-ef6a6c852ec9\") " pod="calico-system/calico-node-d5llr" Oct 8 20:17:25.479130 kubelet[3000]: I1008 20:17:25.478800 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fa1c824c-f769-4ea5-a6f3-ef6a6c852ec9-node-certs\") pod \"calico-node-d5llr\" (UID: \"fa1c824c-f769-4ea5-a6f3-ef6a6c852ec9\") " pod="calico-system/calico-node-d5llr" Oct 8 20:17:25.479130 kubelet[3000]: I1008 20:17:25.478840 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fa1c824c-f769-4ea5-a6f3-ef6a6c852ec9-var-lib-calico\") pod \"calico-node-d5llr\" (UID: \"fa1c824c-f769-4ea5-a6f3-ef6a6c852ec9\") " pod="calico-system/calico-node-d5llr" Oct 8 20:17:25.479130 kubelet[3000]: I1008 20:17:25.478870 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fa1c824c-f769-4ea5-a6f3-ef6a6c852ec9-cni-net-dir\") pod \"calico-node-d5llr\" (UID: \"fa1c824c-f769-4ea5-a6f3-ef6a6c852ec9\") " pod="calico-system/calico-node-d5llr" Oct 8 20:17:25.481928 kubelet[3000]: I1008 20:17:25.479566 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tb9h\" (UniqueName: \"kubernetes.io/projected/fa1c824c-f769-4ea5-a6f3-ef6a6c852ec9-kube-api-access-9tb9h\") pod \"calico-node-d5llr\" (UID: \"fa1c824c-f769-4ea5-a6f3-ef6a6c852ec9\") " pod="calico-system/calico-node-d5llr" Oct 8 20:17:25.481928 kubelet[3000]: I1008 20:17:25.479595 3000 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa1c824c-f769-4ea5-a6f3-ef6a6c852ec9-tigera-ca-bundle\") pod \"calico-node-d5llr\" (UID: \"fa1c824c-f769-4ea5-a6f3-ef6a6c852ec9\") " pod="calico-system/calico-node-d5llr" Oct 8 20:17:25.481928 kubelet[3000]: I1008 20:17:25.479627 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fa1c824c-f769-4ea5-a6f3-ef6a6c852ec9-var-run-calico\") pod \"calico-node-d5llr\" (UID: \"fa1c824c-f769-4ea5-a6f3-ef6a6c852ec9\") " pod="calico-system/calico-node-d5llr" Oct 8 20:17:25.481928 kubelet[3000]: I1008 20:17:25.479639 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fa1c824c-f769-4ea5-a6f3-ef6a6c852ec9-cni-bin-dir\") pod \"calico-node-d5llr\" (UID: \"fa1c824c-f769-4ea5-a6f3-ef6a6c852ec9\") " pod="calico-system/calico-node-d5llr" Oct 8 20:17:25.507308 kubelet[3000]: I1008 20:17:25.507271 3000 topology_manager.go:215] "Topology Admit Handler" podUID="4acbfc0b-c482-45b1-9dfd-be4ca5e86826" podNamespace="calico-system" podName="csi-node-driver-nggfm" Oct 8 20:17:25.507640 kubelet[3000]: E1008 20:17:25.507617 3000 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nggfm" podUID="4acbfc0b-c482-45b1-9dfd-be4ca5e86826" Oct 8 20:17:25.580408 kubelet[3000]: I1008 20:17:25.580224 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4acbfc0b-c482-45b1-9dfd-be4ca5e86826-socket-dir\") pod \"csi-node-driver-nggfm\" (UID: \"4acbfc0b-c482-45b1-9dfd-be4ca5e86826\") " pod="calico-system/csi-node-driver-nggfm" Oct 8 20:17:25.580408 kubelet[3000]: I1008 20:17:25.580270 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4acbfc0b-c482-45b1-9dfd-be4ca5e86826-registration-dir\") pod \"csi-node-driver-nggfm\" (UID: \"4acbfc0b-c482-45b1-9dfd-be4ca5e86826\") " pod="calico-system/csi-node-driver-nggfm" Oct 8 20:17:25.580408 kubelet[3000]: I1008 20:17:25.580319 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4acbfc0b-c482-45b1-9dfd-be4ca5e86826-kubelet-dir\") pod \"csi-node-driver-nggfm\" (UID: \"4acbfc0b-c482-45b1-9dfd-be4ca5e86826\") " pod="calico-system/csi-node-driver-nggfm" Oct 8 20:17:25.580408 kubelet[3000]: I1008 20:17:25.580390 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfvcj\" (UniqueName: \"kubernetes.io/projected/4acbfc0b-c482-45b1-9dfd-be4ca5e86826-kube-api-access-dfvcj\") pod \"csi-node-driver-nggfm\" (UID: \"4acbfc0b-c482-45b1-9dfd-be4ca5e86826\") " pod="calico-system/csi-node-driver-nggfm" Oct 8 20:17:25.580767 kubelet[3000]: I1008 20:17:25.580425 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4acbfc0b-c482-45b1-9dfd-be4ca5e86826-varrun\") pod \"csi-node-driver-nggfm\" (UID: 
\"4acbfc0b-c482-45b1-9dfd-be4ca5e86826\") " pod="calico-system/csi-node-driver-nggfm" Oct 8 20:17:25.586392 kubelet[3000]: E1008 20:17:25.585886 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.586392 kubelet[3000]: W1008 20:17:25.585909 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.590986 kubelet[3000]: E1008 20:17:25.590963 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.590986 kubelet[3000]: W1008 20:17:25.590984 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.591138 kubelet[3000]: E1008 20:17:25.591006 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.591357 kubelet[3000]: E1008 20:17:25.591326 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.591357 kubelet[3000]: W1008 20:17:25.591343 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.591887 kubelet[3000]: E1008 20:17:25.591743 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.591887 kubelet[3000]: E1008 20:17:25.591698 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.593916 kubelet[3000]: E1008 20:17:25.592493 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.593916 kubelet[3000]: W1008 20:17:25.592506 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.593916 kubelet[3000]: E1008 20:17:25.592598 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.594206 kubelet[3000]: E1008 20:17:25.594183 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.594206 kubelet[3000]: W1008 20:17:25.594201 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.594313 kubelet[3000]: E1008 20:17:25.594295 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:17:25.594530 kubelet[3000]: E1008 20:17:25.594504 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.594530 kubelet[3000]: W1008 20:17:25.594521 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.594902 kubelet[3000]: E1008 20:17:25.594611 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.594902 kubelet[3000]: E1008 20:17:25.594772 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.594902 kubelet[3000]: W1008 20:17:25.594782 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.594902 kubelet[3000]: E1008 20:17:25.594883 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.595066 kubelet[3000]: E1008 20:17:25.595051 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.595066 kubelet[3000]: W1008 20:17:25.595060 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.595145 kubelet[3000]: E1008 20:17:25.595076 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.595350 kubelet[3000]: E1008 20:17:25.595328 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.595350 kubelet[3000]: W1008 20:17:25.595345 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.595441 kubelet[3000]: E1008 20:17:25.595374 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.595741 kubelet[3000]: E1008 20:17:25.595641 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.595741 kubelet[3000]: W1008 20:17:25.595653 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.595741 kubelet[3000]: E1008 20:17:25.595667 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:17:25.596667 kubelet[3000]: E1008 20:17:25.595943 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.596667 kubelet[3000]: W1008 20:17:25.595954 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.596667 kubelet[3000]: E1008 20:17:25.595982 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.596667 kubelet[3000]: E1008 20:17:25.596223 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.596667 kubelet[3000]: W1008 20:17:25.596232 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.596667 kubelet[3000]: E1008 20:17:25.596242 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.596667 kubelet[3000]: E1008 20:17:25.596503 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.596667 kubelet[3000]: W1008 20:17:25.596514 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.596667 kubelet[3000]: E1008 20:17:25.596526 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.617971 kubelet[3000]: E1008 20:17:25.617705 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.617971 kubelet[3000]: W1008 20:17:25.617725 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.617971 kubelet[3000]: E1008 20:17:25.617746 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.643872 containerd[1488]: time="2024-10-08T20:17:25.643808409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84cb7c9cf-5qzxd,Uid:3d59cb8d-780e-4f0b-b234-ac480cf2354f,Namespace:calico-system,Attempt:0,}" Oct 8 20:17:25.680437 containerd[1488]: time="2024-10-08T20:17:25.679373180Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:17:25.680437 containerd[1488]: time="2024-10-08T20:17:25.679440708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:17:25.680437 containerd[1488]: time="2024-10-08T20:17:25.679452921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:17:25.680437 containerd[1488]: time="2024-10-08T20:17:25.679545346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:17:25.682805 kubelet[3000]: E1008 20:17:25.681910 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.682805 kubelet[3000]: W1008 20:17:25.681929 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.682805 kubelet[3000]: E1008 20:17:25.681948 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.682805 kubelet[3000]: E1008 20:17:25.682149 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.682805 kubelet[3000]: W1008 20:17:25.682157 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.682805 kubelet[3000]: E1008 20:17:25.682165 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.683131 kubelet[3000]: E1008 20:17:25.682996 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.683131 kubelet[3000]: W1008 20:17:25.683005 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.683131 kubelet[3000]: E1008 20:17:25.683027 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.683336 kubelet[3000]: E1008 20:17:25.683302 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.683336 kubelet[3000]: W1008 20:17:25.683316 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.683336 kubelet[3000]: E1008 20:17:25.683327 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:17:25.683939 kubelet[3000]: E1008 20:17:25.683920 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.683939 kubelet[3000]: W1008 20:17:25.683934 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.684000 kubelet[3000]: E1008 20:17:25.683956 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.684363 kubelet[3000]: E1008 20:17:25.684351 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.684468 kubelet[3000]: W1008 20:17:25.684416 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.684468 kubelet[3000]: E1008 20:17:25.684450 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.684764 kubelet[3000]: E1008 20:17:25.684678 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.684764 kubelet[3000]: W1008 20:17:25.684688 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.684967 kubelet[3000]: E1008 20:17:25.684850 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.685212 kubelet[3000]: E1008 20:17:25.685078 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.685212 kubelet[3000]: W1008 20:17:25.685103 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.685294 kubelet[3000]: E1008 20:17:25.685282 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.685515 kubelet[3000]: E1008 20:17:25.685432 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.685515 kubelet[3000]: W1008 20:17:25.685441 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.685600 kubelet[3000]: E1008 20:17:25.685588 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:17:25.685805 kubelet[3000]: E1008 20:17:25.685743 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.685805 kubelet[3000]: W1008 20:17:25.685751 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.685919 kubelet[3000]: E1008 20:17:25.685838 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.686168 kubelet[3000]: E1008 20:17:25.686100 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.686168 kubelet[3000]: W1008 20:17:25.686109 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.686362 kubelet[3000]: E1008 20:17:25.686299 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.686362 kubelet[3000]: E1008 20:17:25.686344 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.686362 kubelet[3000]: W1008 20:17:25.686350 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.686609 kubelet[3000]: E1008 20:17:25.686532 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.686835 kubelet[3000]: E1008 20:17:25.686684 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.686835 kubelet[3000]: W1008 20:17:25.686693 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.686835 kubelet[3000]: E1008 20:17:25.686762 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.687902 kubelet[3000]: E1008 20:17:25.687784 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.687902 kubelet[3000]: W1008 20:17:25.687795 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.688023 kubelet[3000]: E1008 20:17:25.688003 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:17:25.688246 kubelet[3000]: E1008 20:17:25.688159 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.688246 kubelet[3000]: W1008 20:17:25.688170 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.688331 kubelet[3000]: E1008 20:17:25.688318 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.688463 kubelet[3000]: E1008 20:17:25.688415 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.688463 kubelet[3000]: W1008 20:17:25.688425 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.688592 kubelet[3000]: E1008 20:17:25.688538 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.688755 kubelet[3000]: E1008 20:17:25.688733 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.688755 kubelet[3000]: W1008 20:17:25.688743 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.688940 kubelet[3000]: E1008 20:17:25.688884 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.689279 kubelet[3000]: E1008 20:17:25.689174 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.689279 kubelet[3000]: W1008 20:17:25.689186 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.689537 kubelet[3000]: E1008 20:17:25.689523 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.690564 kubelet[3000]: E1008 20:17:25.690448 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.690564 kubelet[3000]: W1008 20:17:25.690502 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.690564 kubelet[3000]: E1008 20:17:25.690537 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:17:25.691047 kubelet[3000]: E1008 20:17:25.690934 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.691047 kubelet[3000]: W1008 20:17:25.690944 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.691207 kubelet[3000]: E1008 20:17:25.691133 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.691356 kubelet[3000]: E1008 20:17:25.691332 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.691712 kubelet[3000]: W1008 20:17:25.691620 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.691825 kubelet[3000]: E1008 20:17:25.691791 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.692149 kubelet[3000]: E1008 20:17:25.692060 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.692149 kubelet[3000]: W1008 20:17:25.692070 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.692220 kubelet[3000]: E1008 20:17:25.692208 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.692501 kubelet[3000]: E1008 20:17:25.692408 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.692501 kubelet[3000]: W1008 20:17:25.692419 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.692608 kubelet[3000]: E1008 20:17:25.692582 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.692769 kubelet[3000]: E1008 20:17:25.692737 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.692769 kubelet[3000]: W1008 20:17:25.692746 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.693000 kubelet[3000]: E1008 20:17:25.692988 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:17:25.693718 kubelet[3000]: E1008 20:17:25.693205 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.693718 kubelet[3000]: W1008 20:17:25.693215 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.693718 kubelet[3000]: E1008 20:17:25.693223 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.705038 kubelet[3000]: E1008 20:17:25.704364 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:25.707056 kubelet[3000]: W1008 20:17:25.707037 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:25.707146 kubelet[3000]: E1008 20:17:25.707132 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:25.708366 containerd[1488]: time="2024-10-08T20:17:25.708065192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d5llr,Uid:fa1c824c-f769-4ea5-a6f3-ef6a6c852ec9,Namespace:calico-system,Attempt:0,}" Oct 8 20:17:25.708460 systemd[1]: Started cri-containerd-b0bcdff8235164677ea5601b896cad8d6fa4153098c40c573530fbdf63bbb652.scope - libcontainer container b0bcdff8235164677ea5601b896cad8d6fa4153098c40c573530fbdf63bbb652. Oct 8 20:17:25.748449 containerd[1488]: time="2024-10-08T20:17:25.748355878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:17:25.749073 containerd[1488]: time="2024-10-08T20:17:25.748775602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:17:25.749073 containerd[1488]: time="2024-10-08T20:17:25.749007211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:17:25.749417 containerd[1488]: time="2024-10-08T20:17:25.749264047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:17:25.785010 systemd[1]: Started cri-containerd-fd25b618547f4776a4ce54620a99a0bb9cef0d3934884cef0978d2a7fb003466.scope - libcontainer container fd25b618547f4776a4ce54620a99a0bb9cef0d3934884cef0978d2a7fb003466. 
Oct 8 20:17:25.827784 containerd[1488]: time="2024-10-08T20:17:25.827716239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84cb7c9cf-5qzxd,Uid:3d59cb8d-780e-4f0b-b234-ac480cf2354f,Namespace:calico-system,Attempt:0,} returns sandbox id \"b0bcdff8235164677ea5601b896cad8d6fa4153098c40c573530fbdf63bbb652\"" Oct 8 20:17:25.834469 containerd[1488]: time="2024-10-08T20:17:25.832468664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 8 20:17:25.857461 containerd[1488]: time="2024-10-08T20:17:25.857377975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d5llr,Uid:fa1c824c-f769-4ea5-a6f3-ef6a6c852ec9,Namespace:calico-system,Attempt:0,} returns sandbox id \"fd25b618547f4776a4ce54620a99a0bb9cef0d3934884cef0978d2a7fb003466\"" Oct 8 20:17:26.948537 kubelet[3000]: E1008 20:17:26.948215 3000 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nggfm" podUID="4acbfc0b-c482-45b1-9dfd-be4ca5e86826" Oct 8 20:17:28.949476 kubelet[3000]: E1008 20:17:28.949100 3000 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nggfm" podUID="4acbfc0b-c482-45b1-9dfd-be4ca5e86826" Oct 8 20:17:30.949539 kubelet[3000]: E1008 20:17:30.949134 3000 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nggfm" podUID="4acbfc0b-c482-45b1-9dfd-be4ca5e86826" Oct 8 20:17:32.950175 kubelet[3000]: E1008 20:17:32.948978 3000 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nggfm" podUID="4acbfc0b-c482-45b1-9dfd-be4ca5e86826" Oct 8 20:17:34.949958 kubelet[3000]: E1008 20:17:34.948935 3000 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nggfm" podUID="4acbfc0b-c482-45b1-9dfd-be4ca5e86826" Oct 8 20:17:36.949051 kubelet[3000]: E1008 20:17:36.948615 3000 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nggfm" podUID="4acbfc0b-c482-45b1-9dfd-be4ca5e86826" Oct 8 20:17:38.948744 kubelet[3000]: E1008 20:17:38.948382 3000 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nggfm" podUID="4acbfc0b-c482-45b1-9dfd-be4ca5e86826" Oct 8 20:17:40.948823 kubelet[3000]: E1008 20:17:40.948487 3000 pod_workers.go:1298] 
"Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nggfm" podUID="4acbfc0b-c482-45b1-9dfd-be4ca5e86826" Oct 8 20:17:42.708838 containerd[1488]: time="2024-10-08T20:17:42.708782616Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:17:42.709677 containerd[1488]: time="2024-10-08T20:17:42.709641672Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Oct 8 20:17:42.710469 containerd[1488]: time="2024-10-08T20:17:42.710425967Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:17:42.712024 containerd[1488]: time="2024-10-08T20:17:42.711991218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:17:42.712727 containerd[1488]: time="2024-10-08T20:17:42.712611703Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 16.880116659s" Oct 8 20:17:42.712727 containerd[1488]: time="2024-10-08T20:17:42.712638415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Oct 8 20:17:42.721169 containerd[1488]: time="2024-10-08T20:17:42.721123875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 8 20:17:42.749611 containerd[1488]: time="2024-10-08T20:17:42.749561677Z" level=info msg="CreateContainer within sandbox \"b0bcdff8235164677ea5601b896cad8d6fa4153098c40c573530fbdf63bbb652\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 8 20:17:42.762681 containerd[1488]: time="2024-10-08T20:17:42.762565570Z" level=info msg="CreateContainer within sandbox \"b0bcdff8235164677ea5601b896cad8d6fa4153098c40c573530fbdf63bbb652\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"17a4e1665ed2ae5b5d989a54aa463f026a9f80ef6bfb5613d57ce8faa42449c3\"" Oct 8 20:17:42.765647 containerd[1488]: time="2024-10-08T20:17:42.765626483Z" level=info msg="StartContainer for \"17a4e1665ed2ae5b5d989a54aa463f026a9f80ef6bfb5613d57ce8faa42449c3\"" Oct 8 20:17:42.816022 systemd[1]: Started cri-containerd-17a4e1665ed2ae5b5d989a54aa463f026a9f80ef6bfb5613d57ce8faa42449c3.scope - libcontainer container 17a4e1665ed2ae5b5d989a54aa463f026a9f80ef6bfb5613d57ce8faa42449c3. 
Oct 8 20:17:42.859423 containerd[1488]: time="2024-10-08T20:17:42.859367624Z" level=info msg="StartContainer for \"17a4e1665ed2ae5b5d989a54aa463f026a9f80ef6bfb5613d57ce8faa42449c3\" returns successfully" Oct 8 20:17:42.949741 kubelet[3000]: E1008 20:17:42.949625 3000 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nggfm" podUID="4acbfc0b-c482-45b1-9dfd-be4ca5e86826" Oct 8 20:17:43.187156 kubelet[3000]: E1008 20:17:43.187094 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.187156 kubelet[3000]: W1008 20:17:43.187122 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.187156 kubelet[3000]: E1008 20:17:43.187144 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:43.187458 kubelet[3000]: E1008 20:17:43.187381 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.187458 kubelet[3000]: W1008 20:17:43.187391 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.187458 kubelet[3000]: E1008 20:17:43.187402 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:43.187660 kubelet[3000]: E1008 20:17:43.187592 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.187660 kubelet[3000]: W1008 20:17:43.187601 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.187660 kubelet[3000]: E1008 20:17:43.187611 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:43.187884 kubelet[3000]: E1008 20:17:43.187810 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.187884 kubelet[3000]: W1008 20:17:43.187819 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.187884 kubelet[3000]: E1008 20:17:43.187830 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:17:43.188113 kubelet[3000]: E1008 20:17:43.188082 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.188113 kubelet[3000]: W1008 20:17:43.188094 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.188113 kubelet[3000]: E1008 20:17:43.188105 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:43.188403 kubelet[3000]: E1008 20:17:43.188312 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.188403 kubelet[3000]: W1008 20:17:43.188321 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.188403 kubelet[3000]: E1008 20:17:43.188330 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:43.188645 kubelet[3000]: E1008 20:17:43.188516 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.188645 kubelet[3000]: W1008 20:17:43.188525 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.188645 kubelet[3000]: E1008 20:17:43.188534 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:43.188914 kubelet[3000]: E1008 20:17:43.188729 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.188914 kubelet[3000]: W1008 20:17:43.188738 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.188914 kubelet[3000]: E1008 20:17:43.188747 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:43.189169 kubelet[3000]: E1008 20:17:43.188963 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.189169 kubelet[3000]: W1008 20:17:43.189002 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.189169 kubelet[3000]: E1008 20:17:43.189013 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:17:43.189416 kubelet[3000]: E1008 20:17:43.189242 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.189416 kubelet[3000]: W1008 20:17:43.189254 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.189416 kubelet[3000]: E1008 20:17:43.189267 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:43.189913 kubelet[3000]: E1008 20:17:43.189504 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.189913 kubelet[3000]: W1008 20:17:43.189515 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.189913 kubelet[3000]: E1008 20:17:43.189525 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:43.189913 kubelet[3000]: E1008 20:17:43.189742 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.189913 kubelet[3000]: W1008 20:17:43.189750 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.189913 kubelet[3000]: E1008 20:17:43.189759 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:43.190184 kubelet[3000]: E1008 20:17:43.189993 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.190184 kubelet[3000]: W1008 20:17:43.190002 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.190184 kubelet[3000]: E1008 20:17:43.190011 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:43.190325 kubelet[3000]: E1008 20:17:43.190256 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.190325 kubelet[3000]: W1008 20:17:43.190265 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.190325 kubelet[3000]: E1008 20:17:43.190275 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:17:43.190493 kubelet[3000]: E1008 20:17:43.190466 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.190493 kubelet[3000]: W1008 20:17:43.190479 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.190493 kubelet[3000]: E1008 20:17:43.190488 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:43.206659 kubelet[3000]: E1008 20:17:43.206626 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.206659 kubelet[3000]: W1008 20:17:43.206645 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.206659 kubelet[3000]: E1008 20:17:43.206663 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:43.207046 kubelet[3000]: E1008 20:17:43.207018 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.207046 kubelet[3000]: W1008 20:17:43.207039 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.207211 kubelet[3000]: E1008 20:17:43.207063 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:43.207368 kubelet[3000]: E1008 20:17:43.207343 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.207368 kubelet[3000]: W1008 20:17:43.207362 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.207435 kubelet[3000]: E1008 20:17:43.207380 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:43.207676 kubelet[3000]: E1008 20:17:43.207653 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.207676 kubelet[3000]: W1008 20:17:43.207670 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.207787 kubelet[3000]: E1008 20:17:43.207687 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:17:43.207950 kubelet[3000]: E1008 20:17:43.207928 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.207950 kubelet[3000]: W1008 20:17:43.207944 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.208041 kubelet[3000]: E1008 20:17:43.207964 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:43.208209 kubelet[3000]: E1008 20:17:43.208181 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.208209 kubelet[3000]: W1008 20:17:43.208197 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.208367 kubelet[3000]: E1008 20:17:43.208347 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:43.208530 kubelet[3000]: E1008 20:17:43.208512 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.208530 kubelet[3000]: W1008 20:17:43.208527 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.208708 kubelet[3000]: E1008 20:17:43.208631 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:43.208769 kubelet[3000]: E1008 20:17:43.208748 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.208769 kubelet[3000]: W1008 20:17:43.208764 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.208903 kubelet[3000]: E1008 20:17:43.208845 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:43.210738 kubelet[3000]: E1008 20:17:43.210714 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.210738 kubelet[3000]: W1008 20:17:43.210731 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.210812 kubelet[3000]: E1008 20:17:43.210750 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:17:43.211103 kubelet[3000]: E1008 20:17:43.211073 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.211103 kubelet[3000]: W1008 20:17:43.211089 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.211207 kubelet[3000]: E1008 20:17:43.211117 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:43.211375 kubelet[3000]: E1008 20:17:43.211351 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.211375 kubelet[3000]: W1008 20:17:43.211369 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.211538 kubelet[3000]: E1008 20:17:43.211457 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:43.211660 kubelet[3000]: E1008 20:17:43.211634 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.211660 kubelet[3000]: W1008 20:17:43.211649 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.211794 kubelet[3000]: E1008 20:17:43.211715 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:43.211937 kubelet[3000]: E1008 20:17:43.211904 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.211937 kubelet[3000]: W1008 20:17:43.211927 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.212030 kubelet[3000]: E1008 20:17:43.211945 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:43.212186 kubelet[3000]: E1008 20:17:43.212171 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.212186 kubelet[3000]: W1008 20:17:43.212183 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.212330 kubelet[3000]: E1008 20:17:43.212198 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:17:43.212893 kubelet[3000]: E1008 20:17:43.212846 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.212893 kubelet[3000]: W1008 20:17:43.212882 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.212995 kubelet[3000]: E1008 20:17:43.212910 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:43.213230 kubelet[3000]: E1008 20:17:43.213212 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.213230 kubelet[3000]: W1008 20:17:43.213226 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.213324 kubelet[3000]: E1008 20:17:43.213241 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:43.213609 kubelet[3000]: E1008 20:17:43.213587 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.213609 kubelet[3000]: W1008 20:17:43.213602 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.213714 kubelet[3000]: E1008 20:17:43.213616 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:43.213903 kubelet[3000]: E1008 20:17:43.213852 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:43.213903 kubelet[3000]: W1008 20:17:43.213898 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:43.213990 kubelet[3000]: E1008 20:17:43.213913 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:17:43.728897 kubelet[3000]: I1008 20:17:43.727098 3000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-84cb7c9cf-5qzxd" podStartSLOduration=1.838285476 podStartE2EDuration="18.727081272s" podCreationTimestamp="2024-10-08 20:17:25 +0000 UTC" firstStartedPulling="2024-10-08 20:17:25.832114424 +0000 UTC m=+21.003698802" lastFinishedPulling="2024-10-08 20:17:42.72091022 +0000 UTC m=+37.892494598" observedRunningTime="2024-10-08 20:17:43.106220344 +0000 UTC m=+38.277804732" watchObservedRunningTime="2024-10-08 20:17:43.727081272 +0000 UTC m=+38.898665650" Oct 8 20:17:44.095727 kubelet[3000]: E1008 20:17:44.095680 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.095727 kubelet[3000]: W1008 20:17:44.095714 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.096410 kubelet[3000]: E1008 20:17:44.095740 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:44.096410 kubelet[3000]: E1008 20:17:44.096058 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.096410 kubelet[3000]: W1008 20:17:44.096072 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.096410 kubelet[3000]: E1008 20:17:44.096087 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:44.096410 kubelet[3000]: E1008 20:17:44.096375 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.096410 kubelet[3000]: W1008 20:17:44.096387 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.096410 kubelet[3000]: E1008 20:17:44.096400 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:44.096729 kubelet[3000]: E1008 20:17:44.096705 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.096729 kubelet[3000]: W1008 20:17:44.096718 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.096812 kubelet[3000]: E1008 20:17:44.096731 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:17:44.097075 kubelet[3000]: E1008 20:17:44.097040 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.097075 kubelet[3000]: W1008 20:17:44.097057 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.097075 kubelet[3000]: E1008 20:17:44.097070 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:44.097355 kubelet[3000]: E1008 20:17:44.097334 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.097355 kubelet[3000]: W1008 20:17:44.097349 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.097457 kubelet[3000]: E1008 20:17:44.097361 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:44.097641 kubelet[3000]: E1008 20:17:44.097609 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.097641 kubelet[3000]: W1008 20:17:44.097630 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.097641 kubelet[3000]: E1008 20:17:44.097642 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:44.097978 kubelet[3000]: E1008 20:17:44.097957 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.097978 kubelet[3000]: W1008 20:17:44.097972 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.098084 kubelet[3000]: E1008 20:17:44.098007 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:44.098301 kubelet[3000]: E1008 20:17:44.098277 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.098301 kubelet[3000]: W1008 20:17:44.098295 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.098393 kubelet[3000]: E1008 20:17:44.098310 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:17:44.098592 kubelet[3000]: E1008 20:17:44.098571 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.098592 kubelet[3000]: W1008 20:17:44.098586 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.098689 kubelet[3000]: E1008 20:17:44.098599 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:44.099011 kubelet[3000]: E1008 20:17:44.098888 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.099011 kubelet[3000]: W1008 20:17:44.098902 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.099011 kubelet[3000]: E1008 20:17:44.098936 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:44.099359 kubelet[3000]: E1008 20:17:44.099331 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.099359 kubelet[3000]: W1008 20:17:44.099346 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.099359 kubelet[3000]: E1008 20:17:44.099357 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:44.099609 kubelet[3000]: E1008 20:17:44.099583 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.099609 kubelet[3000]: W1008 20:17:44.099596 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.099609 kubelet[3000]: E1008 20:17:44.099606 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:44.099835 kubelet[3000]: E1008 20:17:44.099806 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.099835 kubelet[3000]: W1008 20:17:44.099823 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.099835 kubelet[3000]: E1008 20:17:44.099833 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:17:44.100147 kubelet[3000]: E1008 20:17:44.100089 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.100147 kubelet[3000]: W1008 20:17:44.100100 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.100147 kubelet[3000]: E1008 20:17:44.100111 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:44.113636 kubelet[3000]: E1008 20:17:44.113604 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.113636 kubelet[3000]: W1008 20:17:44.113623 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.113636 kubelet[3000]: E1008 20:17:44.113640 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:44.113965 kubelet[3000]: E1008 20:17:44.113950 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.113965 kubelet[3000]: W1008 20:17:44.113964 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.114044 kubelet[3000]: E1008 20:17:44.113992 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:44.114274 kubelet[3000]: E1008 20:17:44.114245 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.114274 kubelet[3000]: W1008 20:17:44.114255 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.114274 kubelet[3000]: E1008 20:17:44.114270 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:44.114917 kubelet[3000]: E1008 20:17:44.114564 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.114917 kubelet[3000]: W1008 20:17:44.114576 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.114917 kubelet[3000]: E1008 20:17:44.114598 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:17:44.114917 kubelet[3000]: E1008 20:17:44.114843 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.114917 kubelet[3000]: W1008 20:17:44.114880 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.115082 kubelet[3000]: E1008 20:17:44.114899 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:44.115178 kubelet[3000]: E1008 20:17:44.115154 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.115178 kubelet[3000]: W1008 20:17:44.115170 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.115374 kubelet[3000]: E1008 20:17:44.115287 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:44.115426 kubelet[3000]: E1008 20:17:44.115411 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.115426 kubelet[3000]: W1008 20:17:44.115419 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.115524 kubelet[3000]: E1008 20:17:44.115508 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:44.115649 kubelet[3000]: E1008 20:17:44.115625 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.115649 kubelet[3000]: W1008 20:17:44.115638 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.115773 kubelet[3000]: E1008 20:17:44.115745 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:44.116389 kubelet[3000]: E1008 20:17:44.115978 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.116389 kubelet[3000]: W1008 20:17:44.115991 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.116389 kubelet[3000]: E1008 20:17:44.116006 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:17:44.116573 kubelet[3000]: E1008 20:17:44.116557 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.116573 kubelet[3000]: W1008 20:17:44.116571 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.116632 kubelet[3000]: E1008 20:17:44.116586 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:44.116854 kubelet[3000]: E1008 20:17:44.116797 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.116854 kubelet[3000]: W1008 20:17:44.116811 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.116854 kubelet[3000]: E1008 20:17:44.116830 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:44.117081 kubelet[3000]: E1008 20:17:44.117058 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.117081 kubelet[3000]: W1008 20:17:44.117072 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.117169 kubelet[3000]: E1008 20:17:44.117086 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:44.117331 kubelet[3000]: E1008 20:17:44.117305 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.117331 kubelet[3000]: W1008 20:17:44.117319 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.117410 kubelet[3000]: E1008 20:17:44.117338 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:44.117619 kubelet[3000]: E1008 20:17:44.117590 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.117619 kubelet[3000]: W1008 20:17:44.117604 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.117619 kubelet[3000]: E1008 20:17:44.117621 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:17:44.118050 kubelet[3000]: E1008 20:17:44.118032 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.118050 kubelet[3000]: W1008 20:17:44.118046 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.118448 kubelet[3000]: E1008 20:17:44.118153 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:44.118448 kubelet[3000]: E1008 20:17:44.118329 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.118448 kubelet[3000]: W1008 20:17:44.118343 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.118448 kubelet[3000]: E1008 20:17:44.118357 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:44.118743 kubelet[3000]: E1008 20:17:44.118718 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.118743 kubelet[3000]: W1008 20:17:44.118735 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.118743 kubelet[3000]: E1008 20:17:44.118746 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:17:44.119306 kubelet[3000]: E1008 20:17:44.119277 3000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:17:44.119306 kubelet[3000]: W1008 20:17:44.119297 3000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:17:44.119306 kubelet[3000]: E1008 20:17:44.119308 3000 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:17:44.296141 containerd[1488]: time="2024-10-08T20:17:44.296079469Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Oct 8 20:17:44.311005 containerd[1488]: time="2024-10-08T20:17:44.310935897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:17:44.312774 containerd[1488]: time="2024-10-08T20:17:44.312718180Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:17:44.313487 containerd[1488]: time="2024-10-08T20:17:44.313295021Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:17:44.314418 containerd[1488]: time="2024-10-08T20:17:44.313986350Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.592820397s" Oct 8 20:17:44.314418 containerd[1488]: time="2024-10-08T20:17:44.314016187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Oct 8 20:17:44.316777 containerd[1488]: time="2024-10-08T20:17:44.316740493Z" level=info msg="CreateContainer within sandbox \"fd25b618547f4776a4ce54620a99a0bb9cef0d3934884cef0978d2a7fb003466\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 8 20:17:44.328676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4204342881.mount: Deactivated successfully. Oct 8 20:17:44.342667 containerd[1488]: time="2024-10-08T20:17:44.342604775Z" level=info msg="CreateContainer within sandbox \"fd25b618547f4776a4ce54620a99a0bb9cef0d3934884cef0978d2a7fb003466\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1cddf967c17b4732a67c02f97bc72bf0b137402ccc01096da6c4b7ac3ce52d9d\"" Oct 8 20:17:44.343987 containerd[1488]: time="2024-10-08T20:17:44.343106604Z" level=info msg="StartContainer for \"1cddf967c17b4732a67c02f97bc72bf0b137402ccc01096da6c4b7ac3ce52d9d\"" Oct 8 20:17:44.371598 systemd[1]: run-containerd-runc-k8s.io-1cddf967c17b4732a67c02f97bc72bf0b137402ccc01096da6c4b7ac3ce52d9d-runc.3hc4qP.mount: Deactivated successfully. Oct 8 20:17:44.377976 systemd[1]: Started cri-containerd-1cddf967c17b4732a67c02f97bc72bf0b137402ccc01096da6c4b7ac3ce52d9d.scope - libcontainer container 1cddf967c17b4732a67c02f97bc72bf0b137402ccc01096da6c4b7ac3ce52d9d. Oct 8 20:17:44.405237 containerd[1488]: time="2024-10-08T20:17:44.405193574Z" level=info msg="StartContainer for \"1cddf967c17b4732a67c02f97bc72bf0b137402ccc01096da6c4b7ac3ce52d9d\" returns successfully" Oct 8 20:17:44.423598 systemd[1]: cri-containerd-1cddf967c17b4732a67c02f97bc72bf0b137402ccc01096da6c4b7ac3ce52d9d.scope: Deactivated successfully. 
Oct 8 20:17:44.508162 containerd[1488]: time="2024-10-08T20:17:44.476411864Z" level=info msg="shim disconnected" id=1cddf967c17b4732a67c02f97bc72bf0b137402ccc01096da6c4b7ac3ce52d9d namespace=k8s.io Oct 8 20:17:44.508162 containerd[1488]: time="2024-10-08T20:17:44.508154543Z" level=warning msg="cleaning up after shim disconnected" id=1cddf967c17b4732a67c02f97bc72bf0b137402ccc01096da6c4b7ac3ce52d9d namespace=k8s.io Oct 8 20:17:44.508162 containerd[1488]: time="2024-10-08T20:17:44.508169491Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:17:44.948758 kubelet[3000]: E1008 20:17:44.948493 3000 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nggfm" podUID="4acbfc0b-c482-45b1-9dfd-be4ca5e86826" Oct 8 20:17:45.094938 containerd[1488]: time="2024-10-08T20:17:45.094065767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 8 20:17:45.325101 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1cddf967c17b4732a67c02f97bc72bf0b137402ccc01096da6c4b7ac3ce52d9d-rootfs.mount: Deactivated successfully. Oct 8 20:17:46.274496 update_engine[1475]: I20241008 20:17:46.274431 1475 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Oct 8 20:17:46.274496 update_engine[1475]: I20241008 20:17:46.274485 1475 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Oct 8 20:17:46.274986 update_engine[1475]: I20241008 20:17:46.274717 1475 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Oct 8 20:17:46.275394 update_engine[1475]: I20241008 20:17:46.275359 1475 omaha_request_params.cc:62] Current group set to beta Oct 8 20:17:46.276786 update_engine[1475]: I20241008 20:17:46.276432 1475 update_attempter.cc:499] Already updated boot flags. Skipping. Oct 8 20:17:46.276786 update_engine[1475]: I20241008 20:17:46.276458 1475 update_attempter.cc:643] Scheduling an action processor start. 
Oct 8 20:17:46.276786 update_engine[1475]: I20241008 20:17:46.276477 1475 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 8 20:17:46.276786 update_engine[1475]: I20241008 20:17:46.276513 1475 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Oct 8 20:17:46.276786 update_engine[1475]: I20241008 20:17:46.276585 1475 omaha_request_action.cc:271] Posting an Omaha request to disabled Oct 8 20:17:46.276786 update_engine[1475]: I20241008 20:17:46.276597 1475 omaha_request_action.cc:272] Request: Oct 8 20:17:46.276786 update_engine[1475]: Oct 8 20:17:46.276786 update_engine[1475]: Oct 8 20:17:46.276786 update_engine[1475]: Oct 8 20:17:46.276786 update_engine[1475]: Oct 8 20:17:46.276786 update_engine[1475]: Oct 8 20:17:46.276786 update_engine[1475]: Oct 8 20:17:46.276786 update_engine[1475]: Oct 8 20:17:46.276786 update_engine[1475]: Oct 8 20:17:46.276786 update_engine[1475]: I20241008 20:17:46.276605 1475 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 8 20:17:46.277258 locksmithd[1507]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Oct 8 20:17:46.280428 update_engine[1475]: I20241008 20:17:46.280391 1475 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 8 20:17:46.280793 update_engine[1475]: I20241008 20:17:46.280742 1475 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 8 20:17:46.283033 update_engine[1475]: E20241008 20:17:46.282996 1475 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 8 20:17:46.283083 update_engine[1475]: I20241008 20:17:46.283066 1475 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Oct 8 20:17:46.949680 kubelet[3000]: E1008 20:17:46.948557 3000 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nggfm" podUID="4acbfc0b-c482-45b1-9dfd-be4ca5e86826" Oct 8 20:17:48.949555 kubelet[3000]: E1008 20:17:48.949120 3000 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nggfm" podUID="4acbfc0b-c482-45b1-9dfd-be4ca5e86826" Oct 8 20:17:50.949686 kubelet[3000]: E1008 20:17:50.948563 3000 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nggfm" podUID="4acbfc0b-c482-45b1-9dfd-be4ca5e86826" Oct 8 20:17:52.949539 kubelet[3000]: E1008 20:17:52.948172 3000 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nggfm" podUID="4acbfc0b-c482-45b1-9dfd-be4ca5e86826" Oct 8 20:17:54.949682 kubelet[3000]: E1008 20:17:54.948797 3000 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-nggfm" podUID="4acbfc0b-c482-45b1-9dfd-be4ca5e86826" Oct 8 20:17:56.282450 update_engine[1475]: I20241008 20:17:56.281920 1475 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 8 20:17:56.282450 update_engine[1475]: I20241008 20:17:56.282232 1475 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 8 20:17:56.282909 update_engine[1475]: I20241008 20:17:56.282493 1475 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 8 20:17:56.283145 update_engine[1475]: E20241008 20:17:56.283043 1475 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 8 20:17:56.283145 update_engine[1475]: I20241008 20:17:56.283087 1475 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Oct 8 20:17:56.949342 kubelet[3000]: E1008 20:17:56.948758 3000 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nggfm" podUID="4acbfc0b-c482-45b1-9dfd-be4ca5e86826" Oct 8 20:17:58.949049 kubelet[3000]: E1008 20:17:58.948735 3000 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nggfm" podUID="4acbfc0b-c482-45b1-9dfd-be4ca5e86826" Oct 8 20:18:00.949002 kubelet[3000]: E1008 20:18:00.948601 3000 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nggfm" podUID="4acbfc0b-c482-45b1-9dfd-be4ca5e86826" Oct 8 20:18:02.949933 kubelet[3000]: E1008 20:18:02.948703 3000 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nggfm" podUID="4acbfc0b-c482-45b1-9dfd-be4ca5e86826" Oct 8 20:18:04.949888 kubelet[3000]: E1008 20:18:04.948758 3000 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nggfm" podUID="4acbfc0b-c482-45b1-9dfd-be4ca5e86826" Oct 8 20:18:06.282233 update_engine[1475]: I20241008 20:18:06.282145 1475 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 8 20:18:06.282779 update_engine[1475]: I20241008 20:18:06.282411 1475 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 8 20:18:06.282779 update_engine[1475]: I20241008 20:18:06.282614 1475 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Oct 8 20:18:06.283233 update_engine[1475]: E20241008 20:18:06.283197 1475 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 8 20:18:06.283284 update_engine[1475]: I20241008 20:18:06.283254 1475 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Oct 8 20:18:06.949485 kubelet[3000]: E1008 20:18:06.948530 3000 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nggfm" podUID="4acbfc0b-c482-45b1-9dfd-be4ca5e86826" Oct 8 20:18:08.949271 kubelet[3000]: E1008 20:18:08.949239 3000 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nggfm" podUID="4acbfc0b-c482-45b1-9dfd-be4ca5e86826" Oct 8 20:18:09.061921 containerd[1488]: time="2024-10-08T20:18:09.061819311Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:18:09.062808 containerd[1488]: time="2024-10-08T20:18:09.062684559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Oct 8 20:18:09.063650 containerd[1488]: time="2024-10-08T20:18:09.063601924Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:18:09.065491 containerd[1488]: time="2024-10-08T20:18:09.065453730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:18:09.066411 containerd[1488]: time="2024-10-08T20:18:09.066012737Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 23.971088957s" Oct 8 20:18:09.066411 containerd[1488]: time="2024-10-08T20:18:09.066045870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Oct 8 20:18:09.078795 containerd[1488]: time="2024-10-08T20:18:09.078757256Z" level=info msg="CreateContainer within sandbox \"fd25b618547f4776a4ce54620a99a0bb9cef0d3934884cef0978d2a7fb003466\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 8 20:18:09.107834 containerd[1488]: time="2024-10-08T20:18:09.107774590Z" level=info msg="CreateContainer within sandbox \"fd25b618547f4776a4ce54620a99a0bb9cef0d3934884cef0978d2a7fb003466\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8ff88597efc414041e4a4628a1d705cafeefbbfc139c2fdb767a9bbccca66c40\"" Oct 8 20:18:09.108452 containerd[1488]: time="2024-10-08T20:18:09.108357314Z" level=info msg="StartContainer for \"8ff88597efc414041e4a4628a1d705cafeefbbfc139c2fdb767a9bbccca66c40\"" Oct 8 20:18:09.161625 systemd[1]: 
run-containerd-runc-k8s.io-8ff88597efc414041e4a4628a1d705cafeefbbfc139c2fdb767a9bbccca66c40-runc.Ze7gJj.mount: Deactivated successfully. Oct 8 20:18:09.171970 systemd[1]: Started cri-containerd-8ff88597efc414041e4a4628a1d705cafeefbbfc139c2fdb767a9bbccca66c40.scope - libcontainer container 8ff88597efc414041e4a4628a1d705cafeefbbfc139c2fdb767a9bbccca66c40. Oct 8 20:18:09.203066 containerd[1488]: time="2024-10-08T20:18:09.202848045Z" level=info msg="StartContainer for \"8ff88597efc414041e4a4628a1d705cafeefbbfc139c2fdb767a9bbccca66c40\" returns successfully" Oct 8 20:18:09.603836 systemd[1]: cri-containerd-8ff88597efc414041e4a4628a1d705cafeefbbfc139c2fdb767a9bbccca66c40.scope: Deactivated successfully. Oct 8 20:18:09.665804 containerd[1488]: time="2024-10-08T20:18:09.665738228Z" level=info msg="shim disconnected" id=8ff88597efc414041e4a4628a1d705cafeefbbfc139c2fdb767a9bbccca66c40 namespace=k8s.io Oct 8 20:18:09.665804 containerd[1488]: time="2024-10-08T20:18:09.665792951Z" level=warning msg="cleaning up after shim disconnected" id=8ff88597efc414041e4a4628a1d705cafeefbbfc139c2fdb767a9bbccca66c40 namespace=k8s.io Oct 8 20:18:09.665804 containerd[1488]: time="2024-10-08T20:18:09.665802229Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:18:09.677708 kubelet[3000]: I1008 20:18:09.677564 3000 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 8 20:18:09.704020 kubelet[3000]: I1008 20:18:09.702546 3000 topology_manager.go:215] "Topology Admit Handler" podUID="31814079-f47e-4140-b45f-cd80723bed1a" podNamespace="calico-system" podName="calico-kube-controllers-644c5654d4-wjcv2" Oct 8 20:18:09.706792 kubelet[3000]: I1008 20:18:09.705437 3000 topology_manager.go:215] "Topology Admit Handler" podUID="bef65186-c352-4e08-879d-0e3da7a23403" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ws68t" Oct 8 20:18:09.707880 kubelet[3000]: I1008 20:18:09.707532 3000 topology_manager.go:215] "Topology Admit Handler" podUID="95277e3b-aade-4af2-a7da-973db1d8a038" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zzw2r" Oct 8 20:18:09.715463 systemd[1]: Created slice kubepods-besteffort-pod31814079_f47e_4140_b45f_cd80723bed1a.slice - libcontainer container kubepods-besteffort-pod31814079_f47e_4140_b45f_cd80723bed1a.slice. Oct 8 20:18:09.722767 systemd[1]: Created slice kubepods-burstable-podbef65186_c352_4e08_879d_0e3da7a23403.slice - libcontainer container kubepods-burstable-podbef65186_c352_4e08_879d_0e3da7a23403.slice. Oct 8 20:18:09.731551 systemd[1]: Created slice kubepods-burstable-pod95277e3b_aade_4af2_a7da_973db1d8a038.slice - libcontainer container kubepods-burstable-pod95277e3b_aade_4af2_a7da_973db1d8a038.slice. 
Oct 8 20:18:09.793163 kubelet[3000]: I1008 20:18:09.793118 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31814079-f47e-4140-b45f-cd80723bed1a-tigera-ca-bundle\") pod \"calico-kube-controllers-644c5654d4-wjcv2\" (UID: \"31814079-f47e-4140-b45f-cd80723bed1a\") " pod="calico-system/calico-kube-controllers-644c5654d4-wjcv2" Oct 8 20:18:09.793396 kubelet[3000]: I1008 20:18:09.793355 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bef65186-c352-4e08-879d-0e3da7a23403-config-volume\") pod \"coredns-7db6d8ff4d-ws68t\" (UID: \"bef65186-c352-4e08-879d-0e3da7a23403\") " pod="kube-system/coredns-7db6d8ff4d-ws68t" Oct 8 20:18:09.793396 kubelet[3000]: I1008 20:18:09.793395 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s84c7\" (UniqueName: \"kubernetes.io/projected/bef65186-c352-4e08-879d-0e3da7a23403-kube-api-access-s84c7\") pod \"coredns-7db6d8ff4d-ws68t\" (UID: \"bef65186-c352-4e08-879d-0e3da7a23403\") " pod="kube-system/coredns-7db6d8ff4d-ws68t" Oct 8 20:18:09.793396 kubelet[3000]: I1008 20:18:09.793414 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kkn7\" (UniqueName: \"kubernetes.io/projected/95277e3b-aade-4af2-a7da-973db1d8a038-kube-api-access-7kkn7\") pod \"coredns-7db6d8ff4d-zzw2r\" (UID: \"95277e3b-aade-4af2-a7da-973db1d8a038\") " pod="kube-system/coredns-7db6d8ff4d-zzw2r" Oct 8 20:18:09.793959 kubelet[3000]: I1008 20:18:09.793449 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95277e3b-aade-4af2-a7da-973db1d8a038-config-volume\") pod \"coredns-7db6d8ff4d-zzw2r\" (UID: \"95277e3b-aade-4af2-a7da-973db1d8a038\") " pod="kube-system/coredns-7db6d8ff4d-zzw2r" Oct 8 20:18:09.793959 kubelet[3000]: I1008 20:18:09.793520 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjxqq\" (UniqueName: \"kubernetes.io/projected/31814079-f47e-4140-b45f-cd80723bed1a-kube-api-access-rjxqq\") pod \"calico-kube-controllers-644c5654d4-wjcv2\" (UID: \"31814079-f47e-4140-b45f-cd80723bed1a\") " pod="calico-system/calico-kube-controllers-644c5654d4-wjcv2" Oct 8 20:18:10.021553 containerd[1488]: time="2024-10-08T20:18:10.021477869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-644c5654d4-wjcv2,Uid:31814079-f47e-4140-b45f-cd80723bed1a,Namespace:calico-system,Attempt:0,}" Oct 8 20:18:10.029378 containerd[1488]: time="2024-10-08T20:18:10.029331171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ws68t,Uid:bef65186-c352-4e08-879d-0e3da7a23403,Namespace:kube-system,Attempt:0,}" Oct 8 20:18:10.035628 containerd[1488]: time="2024-10-08T20:18:10.035256645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zzw2r,Uid:95277e3b-aade-4af2-a7da-973db1d8a038,Namespace:kube-system,Attempt:0,}" Oct 8 20:18:10.113407 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ff88597efc414041e4a4628a1d705cafeefbbfc139c2fdb767a9bbccca66c40-rootfs.mount: Deactivated successfully. 
Oct 8 20:18:10.148392 containerd[1488]: time="2024-10-08T20:18:10.148237599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 8 20:18:10.244887 containerd[1488]: time="2024-10-08T20:18:10.244454868Z" level=error msg="Failed to destroy network for sandbox \"ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:18:10.249253 containerd[1488]: time="2024-10-08T20:18:10.245609433Z" level=error msg="Failed to destroy network for sandbox \"1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:18:10.249253 containerd[1488]: time="2024-10-08T20:18:10.249036890Z" level=error msg="encountered an error cleaning up failed sandbox \"ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:18:10.249253 containerd[1488]: time="2024-10-08T20:18:10.249129705Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-644c5654d4-wjcv2,Uid:31814079-f47e-4140-b45f-cd80723bed1a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:18:10.249879 containerd[1488]: time="2024-10-08T20:18:10.249819050Z" level=error msg="encountered an error cleaning up failed sandbox \"1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:18:10.250432 containerd[1488]: time="2024-10-08T20:18:10.250331390Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ws68t,Uid:bef65186-c352-4e08-879d-0e3da7a23403,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:18:10.251141 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950-shm.mount: Deactivated successfully. Oct 8 20:18:10.251279 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43-shm.mount: Deactivated successfully. 
Oct 8 20:18:10.261048 kubelet[3000]: E1008 20:18:10.250349 3000 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:18:10.261600 kubelet[3000]: E1008 20:18:10.257715 3000 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:18:10.261704 kubelet[3000]: E1008 20:18:10.261686 3000 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ws68t" Oct 8 20:18:10.261805 kubelet[3000]: E1008 20:18:10.261791 3000 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ws68t" Oct 8 20:18:10.261983 kubelet[3000]: E1008 20:18:10.261921 3000 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-ws68t_kube-system(bef65186-c352-4e08-879d-0e3da7a23403)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-ws68t_kube-system(bef65186-c352-4e08-879d-0e3da7a23403)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-ws68t" podUID="bef65186-c352-4e08-879d-0e3da7a23403" Oct 8 20:18:10.262442 kubelet[3000]: E1008 20:18:10.261569 3000 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-644c5654d4-wjcv2" Oct 8 20:18:10.262610 kubelet[3000]: E1008 20:18:10.262556 3000 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-644c5654d4-wjcv2" Oct 8 20:18:10.262828 kubelet[3000]: E1008 20:18:10.262768 3000 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-644c5654d4-wjcv2_calico-system(31814079-f47e-4140-b45f-cd80723bed1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-644c5654d4-wjcv2_calico-system(31814079-f47e-4140-b45f-cd80723bed1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-644c5654d4-wjcv2" podUID="31814079-f47e-4140-b45f-cd80723bed1a" Oct 8 20:18:10.263546 containerd[1488]: time="2024-10-08T20:18:10.263517204Z" level=error msg="Failed to destroy network for sandbox \"d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:18:10.265198 containerd[1488]: time="2024-10-08T20:18:10.264799591Z" level=error msg="encountered an error cleaning up failed sandbox \"d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:18:10.265198 containerd[1488]: time="2024-10-08T20:18:10.265039104Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zzw2r,Uid:95277e3b-aade-4af2-a7da-973db1d8a038,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:18:10.265896 kubelet[3000]: E1008 20:18:10.265477 3000 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:18:10.265896 kubelet[3000]: E1008 20:18:10.265509 3000 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zzw2r" Oct 8 20:18:10.265896 kubelet[3000]: E1008 20:18:10.265523 3000 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zzw2r" Oct 8 20:18:10.266986 kubelet[3000]: E1008 20:18:10.265551 3000 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-zzw2r_kube-system(95277e3b-aade-4af2-a7da-973db1d8a038)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zzw2r_kube-system(95277e3b-aade-4af2-a7da-973db1d8a038)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zzw2r" podUID="95277e3b-aade-4af2-a7da-973db1d8a038" Oct 8 20:18:10.267491 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905-shm.mount: Deactivated successfully. Oct 8 20:18:10.956680 systemd[1]: Created slice kubepods-besteffort-pod4acbfc0b_c482_45b1_9dfd_be4ca5e86826.slice - libcontainer container kubepods-besteffort-pod4acbfc0b_c482_45b1_9dfd_be4ca5e86826.slice. Oct 8 20:18:10.959793 containerd[1488]: time="2024-10-08T20:18:10.959733639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nggfm,Uid:4acbfc0b-c482-45b1-9dfd-be4ca5e86826,Namespace:calico-system,Attempt:0,}" Oct 8 20:18:11.029777 containerd[1488]: time="2024-10-08T20:18:11.029724495Z" level=error msg="Failed to destroy network for sandbox \"cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:18:11.030117 containerd[1488]: time="2024-10-08T20:18:11.030085499Z" level=error msg="encountered an error cleaning up failed sandbox \"cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:18:11.030171 containerd[1488]: time="2024-10-08T20:18:11.030147226Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nggfm,Uid:4acbfc0b-c482-45b1-9dfd-be4ca5e86826,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:18:11.030496 kubelet[3000]: E1008 20:18:11.030430 3000 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 
20:18:11.030614 kubelet[3000]: E1008 20:18:11.030504 3000 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nggfm" Oct 8 20:18:11.030614 kubelet[3000]: E1008 20:18:11.030526 3000 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nggfm" Oct 8 20:18:11.030614 kubelet[3000]: E1008 20:18:11.030573 3000 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nggfm_calico-system(4acbfc0b-c482-45b1-9dfd-be4ca5e86826)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nggfm_calico-system(4acbfc0b-c482-45b1-9dfd-be4ca5e86826)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nggfm" podUID="4acbfc0b-c482-45b1-9dfd-be4ca5e86826" Oct 8 20:18:11.104520 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d-shm.mount: Deactivated successfully. 
Oct 8 20:18:11.146324 kubelet[3000]: I1008 20:18:11.146229 3000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" Oct 8 20:18:11.148953 kubelet[3000]: I1008 20:18:11.147709 3000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" Oct 8 20:18:11.162554 kubelet[3000]: I1008 20:18:11.162507 3000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" Oct 8 20:18:11.164227 kubelet[3000]: I1008 20:18:11.163767 3000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" Oct 8 20:18:11.180638 containerd[1488]: time="2024-10-08T20:18:11.179234059Z" level=info msg="StopPodSandbox for \"cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d\"" Oct 8 20:18:11.181251 containerd[1488]: time="2024-10-08T20:18:11.181187856Z" level=info msg="StopPodSandbox for \"ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43\"" Oct 8 20:18:11.181848 containerd[1488]: time="2024-10-08T20:18:11.181789305Z" level=info msg="Ensure that sandbox cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d in task-service has been cleanup successfully" Oct 8 20:18:11.182098 containerd[1488]: time="2024-10-08T20:18:11.182063864Z" level=info msg="Ensure that sandbox ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43 in task-service has been cleanup successfully" Oct 8 20:18:11.183640 containerd[1488]: time="2024-10-08T20:18:11.183604129Z" level=info msg="StopPodSandbox for \"1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950\"" Oct 8 20:18:11.184072 containerd[1488]: time="2024-10-08T20:18:11.184048651Z" level=info msg="StopPodSandbox for \"d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905\"" Oct 8 20:18:11.184323 containerd[1488]: time="2024-10-08T20:18:11.184304103Z" level=info msg="Ensure that sandbox d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905 in task-service has been cleanup successfully" Oct 8 20:18:11.184663 containerd[1488]: time="2024-10-08T20:18:11.184617006Z" level=info msg="Ensure that sandbox 1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950 in task-service has been cleanup successfully" Oct 8 20:18:11.229741 containerd[1488]: time="2024-10-08T20:18:11.229593152Z" level=error msg="StopPodSandbox for \"d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905\" failed" error="failed to destroy network for sandbox \"d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:18:11.230100 kubelet[3000]: E1008 20:18:11.230061 3000 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" Oct 8 20:18:11.230809 kubelet[3000]: 
E1008 20:18:11.230755 3000 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905"} Oct 8 20:18:11.231935 kubelet[3000]: E1008 20:18:11.230822 3000 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"95277e3b-aade-4af2-a7da-973db1d8a038\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:18:11.231935 kubelet[3000]: E1008 20:18:11.230884 3000 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"95277e3b-aade-4af2-a7da-973db1d8a038\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zzw2r" podUID="95277e3b-aade-4af2-a7da-973db1d8a038" Oct 8 20:18:11.239112 containerd[1488]: time="2024-10-08T20:18:11.239070237Z" level=error msg="StopPodSandbox for \"ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43\" failed" error="failed to destroy network for sandbox \"ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:18:11.239250 kubelet[3000]: E1008 20:18:11.239218 3000 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" Oct 8 20:18:11.239250 kubelet[3000]: E1008 20:18:11.239249 3000 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43"} Oct 8 20:18:11.239321 kubelet[3000]: E1008 20:18:11.239273 3000 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"31814079-f47e-4140-b45f-cd80723bed1a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:18:11.239321 kubelet[3000]: E1008 20:18:11.239295 3000 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"31814079-f47e-4140-b45f-cd80723bed1a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-644c5654d4-wjcv2" podUID="31814079-f47e-4140-b45f-cd80723bed1a" Oct 8 20:18:11.242533 containerd[1488]: time="2024-10-08T20:18:11.242497363Z" level=error msg="StopPodSandbox for \"cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d\" failed" error="failed to destroy network for sandbox \"cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:18:11.242636 kubelet[3000]: E1008 20:18:11.242604 3000 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" Oct 8 20:18:11.242750 kubelet[3000]: E1008 20:18:11.242635 3000 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d"} Oct 8 20:18:11.242750 kubelet[3000]: E1008 20:18:11.242662 3000 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4acbfc0b-c482-45b1-9dfd-be4ca5e86826\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:18:11.242750 kubelet[3000]: E1008 20:18:11.242680 3000 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4acbfc0b-c482-45b1-9dfd-be4ca5e86826\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nggfm" podUID="4acbfc0b-c482-45b1-9dfd-be4ca5e86826" Oct 8 20:18:11.244032 containerd[1488]: time="2024-10-08T20:18:11.243976182Z" level=error msg="StopPodSandbox for \"1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950\" failed" error="failed to destroy network for sandbox \"1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:18:11.244299 kubelet[3000]: E1008 20:18:11.244271 3000 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" Oct 8 20:18:11.244695 kubelet[3000]: E1008 20:18:11.244319 3000 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950"} Oct 8 20:18:11.244695 kubelet[3000]: E1008 20:18:11.244340 3000 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bef65186-c352-4e08-879d-0e3da7a23403\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:18:11.244695 kubelet[3000]: E1008 20:18:11.244357 3000 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bef65186-c352-4e08-879d-0e3da7a23403\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-ws68t" podUID="bef65186-c352-4e08-879d-0e3da7a23403" Oct 8 20:18:15.988015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4059626495.mount: Deactivated successfully. 
Oct 8 20:18:16.074648 containerd[1488]: time="2024-10-08T20:18:16.064701803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Oct 8 20:18:16.089944 containerd[1488]: time="2024-10-08T20:18:16.088520558Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 5.931684075s" Oct 8 20:18:16.089944 containerd[1488]: time="2024-10-08T20:18:16.088571464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Oct 8 20:18:16.098941 containerd[1488]: time="2024-10-08T20:18:16.098631683Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:18:16.117292 containerd[1488]: time="2024-10-08T20:18:16.117066600Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:18:16.117928 containerd[1488]: time="2024-10-08T20:18:16.117907081Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:18:16.162784 containerd[1488]: time="2024-10-08T20:18:16.162670273Z" level=info msg="CreateContainer within sandbox \"fd25b618547f4776a4ce54620a99a0bb9cef0d3934884cef0978d2a7fb003466\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 8 20:18:16.254326 containerd[1488]: time="2024-10-08T20:18:16.254191644Z" level=info msg="CreateContainer within sandbox \"fd25b618547f4776a4ce54620a99a0bb9cef0d3934884cef0978d2a7fb003466\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"14c5b135b0bdc0a14b79d163fd4b418e9855ca4c902fb1ef5e73c168e8600b1a\"" Oct 8 20:18:16.255046 containerd[1488]: time="2024-10-08T20:18:16.255010243Z" level=info msg="StartContainer for \"14c5b135b0bdc0a14b79d163fd4b418e9855ca4c902fb1ef5e73c168e8600b1a\"" Oct 8 20:18:16.288478 update_engine[1475]: I20241008 20:18:16.288344 1475 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 8 20:18:16.290128 update_engine[1475]: I20241008 20:18:16.290095 1475 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 8 20:18:16.290461 update_engine[1475]: I20241008 20:18:16.290325 1475 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 8 20:18:16.291014 update_engine[1475]: E20241008 20:18:16.290974 1475 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 8 20:18:16.291078 update_engine[1475]: I20241008 20:18:16.291044 1475 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Oct 8 20:18:16.291078 update_engine[1475]: I20241008 20:18:16.291064 1475 omaha_request_action.cc:617] Omaha request response: Oct 8 20:18:16.291468 update_engine[1475]: E20241008 20:18:16.291137 1475 omaha_request_action.cc:636] Omaha request network transfer failed. 
Oct 8 20:18:16.314422 update_engine[1475]: I20241008 20:18:16.314124 1475 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Oct 8 20:18:16.314422 update_engine[1475]: I20241008 20:18:16.314167 1475 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 8 20:18:16.314422 update_engine[1475]: I20241008 20:18:16.314179 1475 update_attempter.cc:306] Processing Done. Oct 8 20:18:16.314422 update_engine[1475]: E20241008 20:18:16.314195 1475 update_attempter.cc:619] Update failed. Oct 8 20:18:16.314422 update_engine[1475]: I20241008 20:18:16.314203 1475 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Oct 8 20:18:16.314422 update_engine[1475]: I20241008 20:18:16.314208 1475 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Oct 8 20:18:16.314422 update_engine[1475]: I20241008 20:18:16.314215 1475 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Oct 8 20:18:16.314422 update_engine[1475]: I20241008 20:18:16.314328 1475 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 8 20:18:16.314422 update_engine[1475]: I20241008 20:18:16.314358 1475 omaha_request_action.cc:271] Posting an Omaha request to disabled Oct 8 20:18:16.314422 update_engine[1475]: I20241008 20:18:16.314366 1475 omaha_request_action.cc:272] Request: Oct 8 20:18:16.314422 update_engine[1475]: Oct 8 20:18:16.314422 update_engine[1475]: Oct 8 20:18:16.314422 update_engine[1475]: Oct 8 20:18:16.314422 update_engine[1475]: Oct 8 20:18:16.314422 update_engine[1475]: Oct 8 20:18:16.314422 update_engine[1475]: Oct 8 20:18:16.314422 update_engine[1475]: I20241008 20:18:16.314373 1475 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 8 20:18:16.315245 update_engine[1475]: I20241008 20:18:16.314518 1475 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 8 20:18:16.315245 update_engine[1475]: I20241008 20:18:16.314713 1475 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 8 20:18:16.315290 locksmithd[1507]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Oct 8 20:18:16.316163 update_engine[1475]: E20241008 20:18:16.315941 1475 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 8 20:18:16.316163 update_engine[1475]: I20241008 20:18:16.315982 1475 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Oct 8 20:18:16.316163 update_engine[1475]: I20241008 20:18:16.315990 1475 omaha_request_action.cc:617] Omaha request response: Oct 8 20:18:16.316163 update_engine[1475]: I20241008 20:18:16.315997 1475 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 8 20:18:16.316163 update_engine[1475]: I20241008 20:18:16.316004 1475 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 8 20:18:16.316163 update_engine[1475]: I20241008 20:18:16.316009 1475 update_attempter.cc:306] Processing Done. Oct 8 20:18:16.316163 update_engine[1475]: I20241008 20:18:16.316015 1475 update_attempter.cc:310] Error event sent. 
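The update_engine entries above are not a cluster problem: the Omaha request is being posted to the literal hostname "disabled", so "Could not resolve host: disabled" is the expected outcome. On Flatcar this usually means the update server was set to "disabled" in /etc/flatcar/update.conf — an assumption about this host, which a sketch like the following would confirm by reading that file:

```python
#!/usr/bin/env python3
"""Sketch: show why update_engine posts its Omaha request to the host "disabled".

Assumption: updates were turned off via SERVER=disabled in /etc/flatcar/update.conf.
"""
from pathlib import Path

conf = Path("/etc/flatcar/update.conf")
settings = {}
if conf.exists():
    for line in conf.read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip()

print("SERVER =", settings.get("SERVER", "<unset, defaults to the public update endpoint>"))
print("GROUP  =", settings.get("GROUP", "<unset>"))
```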
Oct 8 20:18:16.316163 update_engine[1475]: I20241008 20:18:16.316025 1475 update_check_scheduler.cc:74] Next update check in 44m4s Oct 8 20:18:16.319884 locksmithd[1507]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Oct 8 20:18:16.348293 systemd[1]: Started cri-containerd-14c5b135b0bdc0a14b79d163fd4b418e9855ca4c902fb1ef5e73c168e8600b1a.scope - libcontainer container 14c5b135b0bdc0a14b79d163fd4b418e9855ca4c902fb1ef5e73c168e8600b1a. Oct 8 20:18:16.399385 containerd[1488]: time="2024-10-08T20:18:16.399337165Z" level=info msg="StartContainer for \"14c5b135b0bdc0a14b79d163fd4b418e9855ca4c902fb1ef5e73c168e8600b1a\" returns successfully" Oct 8 20:18:16.510303 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 8 20:18:16.515527 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 8 20:18:17.213667 kubelet[3000]: I1008 20:18:17.207777 3000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-d5llr" podStartSLOduration=1.93839275 podStartE2EDuration="52.196454064s" podCreationTimestamp="2024-10-08 20:17:25 +0000 UTC" firstStartedPulling="2024-10-08 20:17:25.859287981 +0000 UTC m=+21.030872360" lastFinishedPulling="2024-10-08 20:18:16.117349295 +0000 UTC m=+71.288933674" observedRunningTime="2024-10-08 20:18:17.195199059 +0000 UTC m=+72.366783447" watchObservedRunningTime="2024-10-08 20:18:17.196454064 +0000 UTC m=+72.368038452" Oct 8 20:18:18.118921 kernel: bpftool[4152]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 8 20:18:18.369241 systemd-networkd[1392]: vxlan.calico: Link UP Oct 8 20:18:18.369906 systemd-networkd[1392]: vxlan.calico: Gained carrier Oct 8 20:18:19.959128 systemd-networkd[1392]: vxlan.calico: Gained IPv6LL Oct 8 20:18:22.951039 containerd[1488]: time="2024-10-08T20:18:22.949521738Z" level=info msg="StopPodSandbox for \"d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905\"" Oct 8 20:18:23.108892 containerd[1488]: 2024-10-08 20:18:23.001 [INFO][4306] k8s.go 608: Cleaning up netns ContainerID="d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" Oct 8 20:18:23.108892 containerd[1488]: 2024-10-08 20:18:23.002 [INFO][4306] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" iface="eth0" netns="/var/run/netns/cni-0b12a88c-b88a-cde1-34b1-80733939be83" Oct 8 20:18:23.108892 containerd[1488]: 2024-10-08 20:18:23.003 [INFO][4306] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" iface="eth0" netns="/var/run/netns/cni-0b12a88c-b88a-cde1-34b1-80733939be83" Oct 8 20:18:23.108892 containerd[1488]: 2024-10-08 20:18:23.004 [INFO][4306] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" iface="eth0" netns="/var/run/netns/cni-0b12a88c-b88a-cde1-34b1-80733939be83" Oct 8 20:18:23.108892 containerd[1488]: 2024-10-08 20:18:23.004 [INFO][4306] k8s.go 615: Releasing IP address(es) ContainerID="d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" Oct 8 20:18:23.108892 containerd[1488]: 2024-10-08 20:18:23.004 [INFO][4306] utils.go 188: Calico CNI releasing IP address ContainerID="d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" Oct 8 20:18:23.108892 containerd[1488]: 2024-10-08 20:18:23.092 [INFO][4312] ipam_plugin.go 417: Releasing address using handleID ContainerID="d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" HandleID="k8s-pod-network.d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" Workload="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--zzw2r-eth0" Oct 8 20:18:23.108892 containerd[1488]: 2024-10-08 20:18:23.092 [INFO][4312] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:18:23.108892 containerd[1488]: 2024-10-08 20:18:23.093 [INFO][4312] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:18:23.108892 containerd[1488]: 2024-10-08 20:18:23.101 [WARNING][4312] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" HandleID="k8s-pod-network.d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" Workload="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--zzw2r-eth0" Oct 8 20:18:23.108892 containerd[1488]: 2024-10-08 20:18:23.101 [INFO][4312] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" HandleID="k8s-pod-network.d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" Workload="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--zzw2r-eth0" Oct 8 20:18:23.108892 containerd[1488]: 2024-10-08 20:18:23.102 [INFO][4312] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:18:23.108892 containerd[1488]: 2024-10-08 20:18:23.104 [INFO][4306] k8s.go 621: Teardown processing complete. ContainerID="d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" Oct 8 20:18:23.108845 systemd[1]: run-netns-cni\x2d0b12a88c\x2db88a\x2dcde1\x2d34b1\x2d80733939be83.mount: Deactivated successfully. 
Oct 8 20:18:23.109991 containerd[1488]: time="2024-10-08T20:18:23.109273818Z" level=info msg="TearDown network for sandbox \"d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905\" successfully" Oct 8 20:18:23.109991 containerd[1488]: time="2024-10-08T20:18:23.109303153Z" level=info msg="StopPodSandbox for \"d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905\" returns successfully" Oct 8 20:18:23.110814 containerd[1488]: time="2024-10-08T20:18:23.110766944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zzw2r,Uid:95277e3b-aade-4af2-a7da-973db1d8a038,Namespace:kube-system,Attempt:1,}" Oct 8 20:18:23.248044 systemd-networkd[1392]: calif47cae9d9bf: Link UP Oct 8 20:18:23.248461 systemd-networkd[1392]: calif47cae9d9bf: Gained carrier Oct 8 20:18:23.263069 containerd[1488]: 2024-10-08 20:18:23.168 [INFO][4319] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--zzw2r-eth0 coredns-7db6d8ff4d- kube-system 95277e3b-aade-4af2-a7da-973db1d8a038 800 0 2024-10-08 20:17:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-1-0-f-c5c751ca26 coredns-7db6d8ff4d-zzw2r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif47cae9d9bf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f2fffa51f397c9ad9ec8eccf793f0671e7e5155d6a75d53725b549726ae2fd29" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zzw2r" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--zzw2r-" Oct 8 20:18:23.263069 containerd[1488]: 2024-10-08 20:18:23.169 [INFO][4319] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f2fffa51f397c9ad9ec8eccf793f0671e7e5155d6a75d53725b549726ae2fd29" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zzw2r" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--zzw2r-eth0" Oct 8 20:18:23.263069 containerd[1488]: 2024-10-08 20:18:23.204 [INFO][4330] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f2fffa51f397c9ad9ec8eccf793f0671e7e5155d6a75d53725b549726ae2fd29" HandleID="k8s-pod-network.f2fffa51f397c9ad9ec8eccf793f0671e7e5155d6a75d53725b549726ae2fd29" Workload="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--zzw2r-eth0" Oct 8 20:18:23.263069 containerd[1488]: 2024-10-08 20:18:23.214 [INFO][4330] ipam_plugin.go 270: Auto assigning IP ContainerID="f2fffa51f397c9ad9ec8eccf793f0671e7e5155d6a75d53725b549726ae2fd29" HandleID="k8s-pod-network.f2fffa51f397c9ad9ec8eccf793f0671e7e5155d6a75d53725b549726ae2fd29" Workload="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--zzw2r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318350), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-1-0-f-c5c751ca26", "pod":"coredns-7db6d8ff4d-zzw2r", "timestamp":"2024-10-08 20:18:23.204464641 +0000 UTC"}, Hostname:"ci-4081-1-0-f-c5c751ca26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:18:23.263069 containerd[1488]: 2024-10-08 20:18:23.214 [INFO][4330] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Oct 8 20:18:23.263069 containerd[1488]: 2024-10-08 20:18:23.214 [INFO][4330] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:18:23.263069 containerd[1488]: 2024-10-08 20:18:23.214 [INFO][4330] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-f-c5c751ca26' Oct 8 20:18:23.263069 containerd[1488]: 2024-10-08 20:18:23.216 [INFO][4330] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f2fffa51f397c9ad9ec8eccf793f0671e7e5155d6a75d53725b549726ae2fd29" host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:23.263069 containerd[1488]: 2024-10-08 20:18:23.224 [INFO][4330] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:23.263069 containerd[1488]: 2024-10-08 20:18:23.228 [INFO][4330] ipam.go 489: Trying affinity for 192.168.19.0/26 host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:23.263069 containerd[1488]: 2024-10-08 20:18:23.229 [INFO][4330] ipam.go 155: Attempting to load block cidr=192.168.19.0/26 host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:23.263069 containerd[1488]: 2024-10-08 20:18:23.231 [INFO][4330] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.0/26 host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:23.263069 containerd[1488]: 2024-10-08 20:18:23.232 [INFO][4330] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.0/26 handle="k8s-pod-network.f2fffa51f397c9ad9ec8eccf793f0671e7e5155d6a75d53725b549726ae2fd29" host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:23.263069 containerd[1488]: 2024-10-08 20:18:23.233 [INFO][4330] ipam.go 1685: Creating new handle: k8s-pod-network.f2fffa51f397c9ad9ec8eccf793f0671e7e5155d6a75d53725b549726ae2fd29 Oct 8 20:18:23.263069 containerd[1488]: 2024-10-08 20:18:23.237 [INFO][4330] ipam.go 1203: Writing block in order to claim IPs block=192.168.19.0/26 handle="k8s-pod-network.f2fffa51f397c9ad9ec8eccf793f0671e7e5155d6a75d53725b549726ae2fd29" host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:23.263069 containerd[1488]: 2024-10-08 20:18:23.241 [INFO][4330] ipam.go 1216: Successfully claimed IPs: [192.168.19.1/26] block=192.168.19.0/26 handle="k8s-pod-network.f2fffa51f397c9ad9ec8eccf793f0671e7e5155d6a75d53725b549726ae2fd29" host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:23.263069 containerd[1488]: 2024-10-08 20:18:23.241 [INFO][4330] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.1/26] handle="k8s-pod-network.f2fffa51f397c9ad9ec8eccf793f0671e7e5155d6a75d53725b549726ae2fd29" host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:23.263069 containerd[1488]: 2024-10-08 20:18:23.241 [INFO][4330] ipam_plugin.go 379: Released host-wide IPAM lock. 
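The ipam.go lines above spell out Calico's allocation path for the re-created coredns sandbox: acquire the host-wide IPAM lock, look up this host's block affinity (192.168.19.0/26), load the block, claim one address from it, write the block back, and release the lock. A sketch for inspecting that state after the fact, assuming calicoctl is installed and configured against this cluster's datastore:

```python
#!/usr/bin/env python3
"""Sketch: inspect the IPAM state the allocation above manipulates."""
import subprocess

# Block affinities per node, including 192.168.19.0/26 claimed by this host.
subprocess.run(["calicoctl", "ipam", "show", "--show-blocks"], check=False)

# Which allocation handle (and therefore which workload) owns 192.168.19.1.
subprocess.run(["calicoctl", "ipam", "show", "--ip=192.168.19.1"], check=False)
```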
Oct 8 20:18:23.263069 containerd[1488]: 2024-10-08 20:18:23.241 [INFO][4330] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.19.1/26] IPv6=[] ContainerID="f2fffa51f397c9ad9ec8eccf793f0671e7e5155d6a75d53725b549726ae2fd29" HandleID="k8s-pod-network.f2fffa51f397c9ad9ec8eccf793f0671e7e5155d6a75d53725b549726ae2fd29" Workload="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--zzw2r-eth0" Oct 8 20:18:23.264067 containerd[1488]: 2024-10-08 20:18:23.245 [INFO][4319] k8s.go 386: Populated endpoint ContainerID="f2fffa51f397c9ad9ec8eccf793f0671e7e5155d6a75d53725b549726ae2fd29" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zzw2r" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--zzw2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--zzw2r-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"95277e3b-aade-4af2-a7da-973db1d8a038", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-f-c5c751ca26", ContainerID:"", Pod:"coredns-7db6d8ff4d-zzw2r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif47cae9d9bf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:18:23.264067 containerd[1488]: 2024-10-08 20:18:23.245 [INFO][4319] k8s.go 387: Calico CNI using IPs: [192.168.19.1/32] ContainerID="f2fffa51f397c9ad9ec8eccf793f0671e7e5155d6a75d53725b549726ae2fd29" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zzw2r" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--zzw2r-eth0" Oct 8 20:18:23.264067 containerd[1488]: 2024-10-08 20:18:23.245 [INFO][4319] dataplane_linux.go 68: Setting the host side veth name to calif47cae9d9bf ContainerID="f2fffa51f397c9ad9ec8eccf793f0671e7e5155d6a75d53725b549726ae2fd29" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zzw2r" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--zzw2r-eth0" Oct 8 20:18:23.264067 containerd[1488]: 2024-10-08 20:18:23.247 [INFO][4319] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f2fffa51f397c9ad9ec8eccf793f0671e7e5155d6a75d53725b549726ae2fd29" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zzw2r" 
WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--zzw2r-eth0" Oct 8 20:18:23.264067 containerd[1488]: 2024-10-08 20:18:23.248 [INFO][4319] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f2fffa51f397c9ad9ec8eccf793f0671e7e5155d6a75d53725b549726ae2fd29" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zzw2r" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--zzw2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--zzw2r-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"95277e3b-aade-4af2-a7da-973db1d8a038", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-f-c5c751ca26", ContainerID:"f2fffa51f397c9ad9ec8eccf793f0671e7e5155d6a75d53725b549726ae2fd29", Pod:"coredns-7db6d8ff4d-zzw2r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif47cae9d9bf", MAC:"1e:ac:36:d1:e5:04", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:18:23.264067 containerd[1488]: 2024-10-08 20:18:23.259 [INFO][4319] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f2fffa51f397c9ad9ec8eccf793f0671e7e5155d6a75d53725b549726ae2fd29" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zzw2r" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--zzw2r-eth0" Oct 8 20:18:23.293332 containerd[1488]: time="2024-10-08T20:18:23.293188785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:18:23.293609 containerd[1488]: time="2024-10-08T20:18:23.293381009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:18:23.293609 containerd[1488]: time="2024-10-08T20:18:23.293404794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:18:23.293609 containerd[1488]: time="2024-10-08T20:18:23.293523659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:18:23.316996 systemd[1]: Started cri-containerd-f2fffa51f397c9ad9ec8eccf793f0671e7e5155d6a75d53725b549726ae2fd29.scope - libcontainer container f2fffa51f397c9ad9ec8eccf793f0671e7e5155d6a75d53725b549726ae2fd29. Oct 8 20:18:23.363085 containerd[1488]: time="2024-10-08T20:18:23.363039051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zzw2r,Uid:95277e3b-aade-4af2-a7da-973db1d8a038,Namespace:kube-system,Attempt:1,} returns sandbox id \"f2fffa51f397c9ad9ec8eccf793f0671e7e5155d6a75d53725b549726ae2fd29\"" Oct 8 20:18:23.369301 containerd[1488]: time="2024-10-08T20:18:23.369192728Z" level=info msg="CreateContainer within sandbox \"f2fffa51f397c9ad9ec8eccf793f0671e7e5155d6a75d53725b549726ae2fd29\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 20:18:23.389539 containerd[1488]: time="2024-10-08T20:18:23.389487484Z" level=info msg="CreateContainer within sandbox \"f2fffa51f397c9ad9ec8eccf793f0671e7e5155d6a75d53725b549726ae2fd29\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"52c81625a569715ec89d7437bff22c4334893a87b983c3c9f4f4157665fd5f57\"" Oct 8 20:18:23.390533 containerd[1488]: time="2024-10-08T20:18:23.390489531Z" level=info msg="StartContainer for \"52c81625a569715ec89d7437bff22c4334893a87b983c3c9f4f4157665fd5f57\"" Oct 8 20:18:23.421150 systemd[1]: Started cri-containerd-52c81625a569715ec89d7437bff22c4334893a87b983c3c9f4f4157665fd5f57.scope - libcontainer container 52c81625a569715ec89d7437bff22c4334893a87b983c3c9f4f4157665fd5f57. Oct 8 20:18:23.454584 containerd[1488]: time="2024-10-08T20:18:23.454519443Z" level=info msg="StartContainer for \"52c81625a569715ec89d7437bff22c4334893a87b983c3c9f4f4157665fd5f57\" returns successfully" Oct 8 20:18:24.213694 kubelet[3000]: I1008 20:18:24.213607 3000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zzw2r" podStartSLOduration=65.213584954 podStartE2EDuration="1m5.213584954s" podCreationTimestamp="2024-10-08 20:17:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:18:24.212358343 +0000 UTC m=+79.383942731" watchObservedRunningTime="2024-10-08 20:18:24.213584954 +0000 UTC m=+79.385169333" Oct 8 20:18:24.952879 containerd[1488]: time="2024-10-08T20:18:24.951769568Z" level=info msg="StopPodSandbox for \"ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43\"" Oct 8 20:18:25.040893 containerd[1488]: 2024-10-08 20:18:25.002 [INFO][4452] k8s.go 608: Cleaning up netns ContainerID="ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" Oct 8 20:18:25.040893 containerd[1488]: 2024-10-08 20:18:25.002 [INFO][4452] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" iface="eth0" netns="/var/run/netns/cni-9722d40d-fd49-6d2f-98a5-fa2bda7d185f" Oct 8 20:18:25.040893 containerd[1488]: 2024-10-08 20:18:25.003 [INFO][4452] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" iface="eth0" netns="/var/run/netns/cni-9722d40d-fd49-6d2f-98a5-fa2bda7d185f" Oct 8 20:18:25.040893 containerd[1488]: 2024-10-08 20:18:25.003 [INFO][4452] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" iface="eth0" netns="/var/run/netns/cni-9722d40d-fd49-6d2f-98a5-fa2bda7d185f" Oct 8 20:18:25.040893 containerd[1488]: 2024-10-08 20:18:25.003 [INFO][4452] k8s.go 615: Releasing IP address(es) ContainerID="ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" Oct 8 20:18:25.040893 containerd[1488]: 2024-10-08 20:18:25.003 [INFO][4452] utils.go 188: Calico CNI releasing IP address ContainerID="ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" Oct 8 20:18:25.040893 containerd[1488]: 2024-10-08 20:18:25.026 [INFO][4459] ipam_plugin.go 417: Releasing address using handleID ContainerID="ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" HandleID="k8s-pod-network.ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" Workload="ci--4081--1--0--f--c5c751ca26-k8s-calico--kube--controllers--644c5654d4--wjcv2-eth0" Oct 8 20:18:25.040893 containerd[1488]: 2024-10-08 20:18:25.027 [INFO][4459] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:18:25.040893 containerd[1488]: 2024-10-08 20:18:25.027 [INFO][4459] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:18:25.040893 containerd[1488]: 2024-10-08 20:18:25.034 [WARNING][4459] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" HandleID="k8s-pod-network.ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" Workload="ci--4081--1--0--f--c5c751ca26-k8s-calico--kube--controllers--644c5654d4--wjcv2-eth0" Oct 8 20:18:25.040893 containerd[1488]: 2024-10-08 20:18:25.034 [INFO][4459] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" HandleID="k8s-pod-network.ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" Workload="ci--4081--1--0--f--c5c751ca26-k8s-calico--kube--controllers--644c5654d4--wjcv2-eth0" Oct 8 20:18:25.040893 containerd[1488]: 2024-10-08 20:18:25.036 [INFO][4459] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:18:25.040893 containerd[1488]: 2024-10-08 20:18:25.038 [INFO][4452] k8s.go 621: Teardown processing complete. ContainerID="ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" Oct 8 20:18:25.043306 containerd[1488]: time="2024-10-08T20:18:25.042969438Z" level=info msg="TearDown network for sandbox \"ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43\" successfully" Oct 8 20:18:25.043306 containerd[1488]: time="2024-10-08T20:18:25.043018610Z" level=info msg="StopPodSandbox for \"ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43\" returns successfully" Oct 8 20:18:25.043951 containerd[1488]: time="2024-10-08T20:18:25.043841257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-644c5654d4-wjcv2,Uid:31814079-f47e-4140-b45f-cd80723bed1a,Namespace:calico-system,Attempt:1,}" Oct 8 20:18:25.044531 systemd[1]: run-netns-cni\x2d9722d40d\x2dfd49\x2d6d2f\x2d98a5\x2dfa2bda7d185f.mount: Deactivated successfully. 
Oct 8 20:18:25.157063 systemd-networkd[1392]: cali2259f71b133: Link UP Oct 8 20:18:25.157770 systemd-networkd[1392]: cali2259f71b133: Gained carrier Oct 8 20:18:25.172454 containerd[1488]: 2024-10-08 20:18:25.095 [INFO][4467] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--f--c5c751ca26-k8s-calico--kube--controllers--644c5654d4--wjcv2-eth0 calico-kube-controllers-644c5654d4- calico-system 31814079-f47e-4140-b45f-cd80723bed1a 821 0 2024-10-08 20:17:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:644c5654d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-1-0-f-c5c751ca26 calico-kube-controllers-644c5654d4-wjcv2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2259f71b133 [] []}} ContainerID="2fb727722f3ae60e493af205982b6d136c1d0e47efe1dab6cd6de7bc0213be20" Namespace="calico-system" Pod="calico-kube-controllers-644c5654d4-wjcv2" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-calico--kube--controllers--644c5654d4--wjcv2-" Oct 8 20:18:25.172454 containerd[1488]: 2024-10-08 20:18:25.095 [INFO][4467] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2fb727722f3ae60e493af205982b6d136c1d0e47efe1dab6cd6de7bc0213be20" Namespace="calico-system" Pod="calico-kube-controllers-644c5654d4-wjcv2" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-calico--kube--controllers--644c5654d4--wjcv2-eth0" Oct 8 20:18:25.172454 containerd[1488]: 2024-10-08 20:18:25.119 [INFO][4477] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2fb727722f3ae60e493af205982b6d136c1d0e47efe1dab6cd6de7bc0213be20" HandleID="k8s-pod-network.2fb727722f3ae60e493af205982b6d136c1d0e47efe1dab6cd6de7bc0213be20" Workload="ci--4081--1--0--f--c5c751ca26-k8s-calico--kube--controllers--644c5654d4--wjcv2-eth0" Oct 8 20:18:25.172454 containerd[1488]: 2024-10-08 20:18:25.126 [INFO][4477] ipam_plugin.go 270: Auto assigning IP ContainerID="2fb727722f3ae60e493af205982b6d136c1d0e47efe1dab6cd6de7bc0213be20" HandleID="k8s-pod-network.2fb727722f3ae60e493af205982b6d136c1d0e47efe1dab6cd6de7bc0213be20" Workload="ci--4081--1--0--f--c5c751ca26-k8s-calico--kube--controllers--644c5654d4--wjcv2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e5ee0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-1-0-f-c5c751ca26", "pod":"calico-kube-controllers-644c5654d4-wjcv2", "timestamp":"2024-10-08 20:18:25.119219776 +0000 UTC"}, Hostname:"ci-4081-1-0-f-c5c751ca26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:18:25.172454 containerd[1488]: 2024-10-08 20:18:25.127 [INFO][4477] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:18:25.172454 containerd[1488]: 2024-10-08 20:18:25.127 [INFO][4477] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:18:25.172454 containerd[1488]: 2024-10-08 20:18:25.127 [INFO][4477] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-f-c5c751ca26' Oct 8 20:18:25.172454 containerd[1488]: 2024-10-08 20:18:25.129 [INFO][4477] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2fb727722f3ae60e493af205982b6d136c1d0e47efe1dab6cd6de7bc0213be20" host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:25.172454 containerd[1488]: 2024-10-08 20:18:25.133 [INFO][4477] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:25.172454 containerd[1488]: 2024-10-08 20:18:25.137 [INFO][4477] ipam.go 489: Trying affinity for 192.168.19.0/26 host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:25.172454 containerd[1488]: 2024-10-08 20:18:25.138 [INFO][4477] ipam.go 155: Attempting to load block cidr=192.168.19.0/26 host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:25.172454 containerd[1488]: 2024-10-08 20:18:25.140 [INFO][4477] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.0/26 host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:25.172454 containerd[1488]: 2024-10-08 20:18:25.140 [INFO][4477] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.0/26 handle="k8s-pod-network.2fb727722f3ae60e493af205982b6d136c1d0e47efe1dab6cd6de7bc0213be20" host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:25.172454 containerd[1488]: 2024-10-08 20:18:25.141 [INFO][4477] ipam.go 1685: Creating new handle: k8s-pod-network.2fb727722f3ae60e493af205982b6d136c1d0e47efe1dab6cd6de7bc0213be20 Oct 8 20:18:25.172454 containerd[1488]: 2024-10-08 20:18:25.145 [INFO][4477] ipam.go 1203: Writing block in order to claim IPs block=192.168.19.0/26 handle="k8s-pod-network.2fb727722f3ae60e493af205982b6d136c1d0e47efe1dab6cd6de7bc0213be20" host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:25.172454 containerd[1488]: 2024-10-08 20:18:25.149 [INFO][4477] ipam.go 1216: Successfully claimed IPs: [192.168.19.2/26] block=192.168.19.0/26 handle="k8s-pod-network.2fb727722f3ae60e493af205982b6d136c1d0e47efe1dab6cd6de7bc0213be20" host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:25.172454 containerd[1488]: 2024-10-08 20:18:25.150 [INFO][4477] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.2/26] handle="k8s-pod-network.2fb727722f3ae60e493af205982b6d136c1d0e47efe1dab6cd6de7bc0213be20" host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:25.172454 containerd[1488]: 2024-10-08 20:18:25.150 [INFO][4477] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 20:18:25.172454 containerd[1488]: 2024-10-08 20:18:25.150 [INFO][4477] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.19.2/26] IPv6=[] ContainerID="2fb727722f3ae60e493af205982b6d136c1d0e47efe1dab6cd6de7bc0213be20" HandleID="k8s-pod-network.2fb727722f3ae60e493af205982b6d136c1d0e47efe1dab6cd6de7bc0213be20" Workload="ci--4081--1--0--f--c5c751ca26-k8s-calico--kube--controllers--644c5654d4--wjcv2-eth0" Oct 8 20:18:25.173016 containerd[1488]: 2024-10-08 20:18:25.152 [INFO][4467] k8s.go 386: Populated endpoint ContainerID="2fb727722f3ae60e493af205982b6d136c1d0e47efe1dab6cd6de7bc0213be20" Namespace="calico-system" Pod="calico-kube-controllers-644c5654d4-wjcv2" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-calico--kube--controllers--644c5654d4--wjcv2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--f--c5c751ca26-k8s-calico--kube--controllers--644c5654d4--wjcv2-eth0", GenerateName:"calico-kube-controllers-644c5654d4-", Namespace:"calico-system", SelfLink:"", UID:"31814079-f47e-4140-b45f-cd80723bed1a", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 17, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"644c5654d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-f-c5c751ca26", ContainerID:"", Pod:"calico-kube-controllers-644c5654d4-wjcv2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2259f71b133", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:18:25.173016 containerd[1488]: 2024-10-08 20:18:25.152 [INFO][4467] k8s.go 387: Calico CNI using IPs: [192.168.19.2/32] ContainerID="2fb727722f3ae60e493af205982b6d136c1d0e47efe1dab6cd6de7bc0213be20" Namespace="calico-system" Pod="calico-kube-controllers-644c5654d4-wjcv2" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-calico--kube--controllers--644c5654d4--wjcv2-eth0" Oct 8 20:18:25.173016 containerd[1488]: 2024-10-08 20:18:25.152 [INFO][4467] dataplane_linux.go 68: Setting the host side veth name to cali2259f71b133 ContainerID="2fb727722f3ae60e493af205982b6d136c1d0e47efe1dab6cd6de7bc0213be20" Namespace="calico-system" Pod="calico-kube-controllers-644c5654d4-wjcv2" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-calico--kube--controllers--644c5654d4--wjcv2-eth0" Oct 8 20:18:25.173016 containerd[1488]: 2024-10-08 20:18:25.154 [INFO][4467] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2fb727722f3ae60e493af205982b6d136c1d0e47efe1dab6cd6de7bc0213be20" Namespace="calico-system" Pod="calico-kube-controllers-644c5654d4-wjcv2" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-calico--kube--controllers--644c5654d4--wjcv2-eth0" Oct 8 20:18:25.173016 containerd[1488]: 2024-10-08 20:18:25.154 
[INFO][4467] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2fb727722f3ae60e493af205982b6d136c1d0e47efe1dab6cd6de7bc0213be20" Namespace="calico-system" Pod="calico-kube-controllers-644c5654d4-wjcv2" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-calico--kube--controllers--644c5654d4--wjcv2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--f--c5c751ca26-k8s-calico--kube--controllers--644c5654d4--wjcv2-eth0", GenerateName:"calico-kube-controllers-644c5654d4-", Namespace:"calico-system", SelfLink:"", UID:"31814079-f47e-4140-b45f-cd80723bed1a", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 17, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"644c5654d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-f-c5c751ca26", ContainerID:"2fb727722f3ae60e493af205982b6d136c1d0e47efe1dab6cd6de7bc0213be20", Pod:"calico-kube-controllers-644c5654d4-wjcv2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2259f71b133", MAC:"56:50:63:69:87:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:18:25.173016 containerd[1488]: 2024-10-08 20:18:25.167 [INFO][4467] k8s.go 500: Wrote updated endpoint to datastore ContainerID="2fb727722f3ae60e493af205982b6d136c1d0e47efe1dab6cd6de7bc0213be20" Namespace="calico-system" Pod="calico-kube-controllers-644c5654d4-wjcv2" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-calico--kube--controllers--644c5654d4--wjcv2-eth0" Oct 8 20:18:25.207175 systemd-networkd[1392]: calif47cae9d9bf: Gained IPv6LL Oct 8 20:18:25.224174 containerd[1488]: time="2024-10-08T20:18:25.223835433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:18:25.224174 containerd[1488]: time="2024-10-08T20:18:25.224029250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:18:25.224174 containerd[1488]: time="2024-10-08T20:18:25.224046122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:18:25.224839 containerd[1488]: time="2024-10-08T20:18:25.224133658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:18:25.260007 systemd[1]: Started cri-containerd-2fb727722f3ae60e493af205982b6d136c1d0e47efe1dab6cd6de7bc0213be20.scope - libcontainer container 2fb727722f3ae60e493af205982b6d136c1d0e47efe1dab6cd6de7bc0213be20. 
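By this point Calico has plumbed the dataplane for both re-created pods: host-side veths calif47cae9d9bf and cali2259f71b133, the vxlan.calico device brought up earlier, and WorkloadEndpoint objects written to the datastore. A sketch listing those artifacts, assuming it runs on the node itself (iproute2 for the link checks; calicoctl is only needed for the last call):

```python
#!/usr/bin/env python3
"""Sketch: list the dataplane objects the log above shows Calico creating."""
import subprocess

# Host-side veths named in the log, plus the VXLAN device brought up earlier.
for dev in ["calif47cae9d9bf", "cali2259f71b133", "vxlan.calico"]:
    subprocess.run(["ip", "-brief", "link", "show", "dev", dev], check=False)

# The WorkloadEndpoint objects written to the datastore ("Wrote updated endpoint ...").
subprocess.run(["calicoctl", "get", "workloadendpoints", "--all-namespaces"], check=False)
```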
Oct 8 20:18:25.300482 containerd[1488]: time="2024-10-08T20:18:25.300336406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-644c5654d4-wjcv2,Uid:31814079-f47e-4140-b45f-cd80723bed1a,Namespace:calico-system,Attempt:1,} returns sandbox id \"2fb727722f3ae60e493af205982b6d136c1d0e47efe1dab6cd6de7bc0213be20\"" Oct 8 20:18:25.302274 containerd[1488]: time="2024-10-08T20:18:25.302234287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 8 20:18:25.950419 containerd[1488]: time="2024-10-08T20:18:25.950376159Z" level=info msg="StopPodSandbox for \"cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d\"" Oct 8 20:18:26.026533 containerd[1488]: 2024-10-08 20:18:25.987 [INFO][4551] k8s.go 608: Cleaning up netns ContainerID="cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" Oct 8 20:18:26.026533 containerd[1488]: 2024-10-08 20:18:25.987 [INFO][4551] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" iface="eth0" netns="/var/run/netns/cni-fcca641d-267f-2f6a-df3d-25039f0f5ac5" Oct 8 20:18:26.026533 containerd[1488]: 2024-10-08 20:18:25.987 [INFO][4551] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" iface="eth0" netns="/var/run/netns/cni-fcca641d-267f-2f6a-df3d-25039f0f5ac5" Oct 8 20:18:26.026533 containerd[1488]: 2024-10-08 20:18:25.987 [INFO][4551] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" iface="eth0" netns="/var/run/netns/cni-fcca641d-267f-2f6a-df3d-25039f0f5ac5" Oct 8 20:18:26.026533 containerd[1488]: 2024-10-08 20:18:25.987 [INFO][4551] k8s.go 615: Releasing IP address(es) ContainerID="cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" Oct 8 20:18:26.026533 containerd[1488]: 2024-10-08 20:18:25.987 [INFO][4551] utils.go 188: Calico CNI releasing IP address ContainerID="cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" Oct 8 20:18:26.026533 containerd[1488]: 2024-10-08 20:18:26.012 [INFO][4557] ipam_plugin.go 417: Releasing address using handleID ContainerID="cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" HandleID="k8s-pod-network.cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" Workload="ci--4081--1--0--f--c5c751ca26-k8s-csi--node--driver--nggfm-eth0" Oct 8 20:18:26.026533 containerd[1488]: 2024-10-08 20:18:26.013 [INFO][4557] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:18:26.026533 containerd[1488]: 2024-10-08 20:18:26.013 [INFO][4557] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:18:26.026533 containerd[1488]: 2024-10-08 20:18:26.020 [WARNING][4557] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" HandleID="k8s-pod-network.cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" Workload="ci--4081--1--0--f--c5c751ca26-k8s-csi--node--driver--nggfm-eth0" Oct 8 20:18:26.026533 containerd[1488]: 2024-10-08 20:18:26.020 [INFO][4557] ipam_plugin.go 445: Releasing address using workloadID ContainerID="cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" HandleID="k8s-pod-network.cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" Workload="ci--4081--1--0--f--c5c751ca26-k8s-csi--node--driver--nggfm-eth0" Oct 8 20:18:26.026533 containerd[1488]: 2024-10-08 20:18:26.021 [INFO][4557] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:18:26.026533 containerd[1488]: 2024-10-08 20:18:26.024 [INFO][4551] k8s.go 621: Teardown processing complete. ContainerID="cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" Oct 8 20:18:26.029943 containerd[1488]: time="2024-10-08T20:18:26.028724178Z" level=info msg="TearDown network for sandbox \"cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d\" successfully" Oct 8 20:18:26.029943 containerd[1488]: time="2024-10-08T20:18:26.028775134Z" level=info msg="StopPodSandbox for \"cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d\" returns successfully" Oct 8 20:18:26.029943 containerd[1488]: time="2024-10-08T20:18:26.029535172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nggfm,Uid:4acbfc0b-c482-45b1-9dfd-be4ca5e86826,Namespace:calico-system,Attempt:1,}" Oct 8 20:18:26.031352 systemd[1]: run-netns-cni\x2dfcca641d\x2d267f\x2d2f6a\x2ddf3d\x2d25039f0f5ac5.mount: Deactivated successfully. Oct 8 20:18:26.158960 systemd-networkd[1392]: cali8dacff08011: Link UP Oct 8 20:18:26.162362 systemd-networkd[1392]: cali8dacff08011: Gained carrier Oct 8 20:18:26.184810 containerd[1488]: 2024-10-08 20:18:26.078 [INFO][4563] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--f--c5c751ca26-k8s-csi--node--driver--nggfm-eth0 csi-node-driver- calico-system 4acbfc0b-c482-45b1-9dfd-be4ca5e86826 827 0 2024-10-08 20:17:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65cb9bb8f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4081-1-0-f-c5c751ca26 csi-node-driver-nggfm eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali8dacff08011 [] []}} ContainerID="9505a4d96d4ee1e13d9c88f77295d0446e17d143388b074ebbab70d017861722" Namespace="calico-system" Pod="csi-node-driver-nggfm" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-csi--node--driver--nggfm-" Oct 8 20:18:26.184810 containerd[1488]: 2024-10-08 20:18:26.078 [INFO][4563] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9505a4d96d4ee1e13d9c88f77295d0446e17d143388b074ebbab70d017861722" Namespace="calico-system" Pod="csi-node-driver-nggfm" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-csi--node--driver--nggfm-eth0" Oct 8 20:18:26.184810 containerd[1488]: 2024-10-08 20:18:26.105 [INFO][4574] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9505a4d96d4ee1e13d9c88f77295d0446e17d143388b074ebbab70d017861722" HandleID="k8s-pod-network.9505a4d96d4ee1e13d9c88f77295d0446e17d143388b074ebbab70d017861722" 
Workload="ci--4081--1--0--f--c5c751ca26-k8s-csi--node--driver--nggfm-eth0" Oct 8 20:18:26.184810 containerd[1488]: 2024-10-08 20:18:26.113 [INFO][4574] ipam_plugin.go 270: Auto assigning IP ContainerID="9505a4d96d4ee1e13d9c88f77295d0446e17d143388b074ebbab70d017861722" HandleID="k8s-pod-network.9505a4d96d4ee1e13d9c88f77295d0446e17d143388b074ebbab70d017861722" Workload="ci--4081--1--0--f--c5c751ca26-k8s-csi--node--driver--nggfm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050230), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-1-0-f-c5c751ca26", "pod":"csi-node-driver-nggfm", "timestamp":"2024-10-08 20:18:26.105618444 +0000 UTC"}, Hostname:"ci-4081-1-0-f-c5c751ca26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:18:26.184810 containerd[1488]: 2024-10-08 20:18:26.113 [INFO][4574] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:18:26.184810 containerd[1488]: 2024-10-08 20:18:26.113 [INFO][4574] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:18:26.184810 containerd[1488]: 2024-10-08 20:18:26.113 [INFO][4574] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-f-c5c751ca26' Oct 8 20:18:26.184810 containerd[1488]: 2024-10-08 20:18:26.115 [INFO][4574] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9505a4d96d4ee1e13d9c88f77295d0446e17d143388b074ebbab70d017861722" host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:26.184810 containerd[1488]: 2024-10-08 20:18:26.118 [INFO][4574] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:26.184810 containerd[1488]: 2024-10-08 20:18:26.122 [INFO][4574] ipam.go 489: Trying affinity for 192.168.19.0/26 host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:26.184810 containerd[1488]: 2024-10-08 20:18:26.123 [INFO][4574] ipam.go 155: Attempting to load block cidr=192.168.19.0/26 host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:26.184810 containerd[1488]: 2024-10-08 20:18:26.125 [INFO][4574] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.0/26 host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:26.184810 containerd[1488]: 2024-10-08 20:18:26.125 [INFO][4574] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.0/26 handle="k8s-pod-network.9505a4d96d4ee1e13d9c88f77295d0446e17d143388b074ebbab70d017861722" host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:26.184810 containerd[1488]: 2024-10-08 20:18:26.128 [INFO][4574] ipam.go 1685: Creating new handle: k8s-pod-network.9505a4d96d4ee1e13d9c88f77295d0446e17d143388b074ebbab70d017861722 Oct 8 20:18:26.184810 containerd[1488]: 2024-10-08 20:18:26.137 [INFO][4574] ipam.go 1203: Writing block in order to claim IPs block=192.168.19.0/26 handle="k8s-pod-network.9505a4d96d4ee1e13d9c88f77295d0446e17d143388b074ebbab70d017861722" host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:26.184810 containerd[1488]: 2024-10-08 20:18:26.147 [INFO][4574] ipam.go 1216: Successfully claimed IPs: [192.168.19.3/26] block=192.168.19.0/26 handle="k8s-pod-network.9505a4d96d4ee1e13d9c88f77295d0446e17d143388b074ebbab70d017861722" host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:26.184810 containerd[1488]: 2024-10-08 20:18:26.147 [INFO][4574] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.3/26] handle="k8s-pod-network.9505a4d96d4ee1e13d9c88f77295d0446e17d143388b074ebbab70d017861722" 
host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:26.184810 containerd[1488]: 2024-10-08 20:18:26.147 [INFO][4574] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:18:26.184810 containerd[1488]: 2024-10-08 20:18:26.147 [INFO][4574] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.19.3/26] IPv6=[] ContainerID="9505a4d96d4ee1e13d9c88f77295d0446e17d143388b074ebbab70d017861722" HandleID="k8s-pod-network.9505a4d96d4ee1e13d9c88f77295d0446e17d143388b074ebbab70d017861722" Workload="ci--4081--1--0--f--c5c751ca26-k8s-csi--node--driver--nggfm-eth0" Oct 8 20:18:26.186321 containerd[1488]: 2024-10-08 20:18:26.151 [INFO][4563] k8s.go 386: Populated endpoint ContainerID="9505a4d96d4ee1e13d9c88f77295d0446e17d143388b074ebbab70d017861722" Namespace="calico-system" Pod="csi-node-driver-nggfm" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-csi--node--driver--nggfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--f--c5c751ca26-k8s-csi--node--driver--nggfm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4acbfc0b-c482-45b1-9dfd-be4ca5e86826", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 17, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-f-c5c751ca26", ContainerID:"", Pod:"csi-node-driver-nggfm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8dacff08011", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:18:26.186321 containerd[1488]: 2024-10-08 20:18:26.151 [INFO][4563] k8s.go 387: Calico CNI using IPs: [192.168.19.3/32] ContainerID="9505a4d96d4ee1e13d9c88f77295d0446e17d143388b074ebbab70d017861722" Namespace="calico-system" Pod="csi-node-driver-nggfm" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-csi--node--driver--nggfm-eth0" Oct 8 20:18:26.186321 containerd[1488]: 2024-10-08 20:18:26.151 [INFO][4563] dataplane_linux.go 68: Setting the host side veth name to cali8dacff08011 ContainerID="9505a4d96d4ee1e13d9c88f77295d0446e17d143388b074ebbab70d017861722" Namespace="calico-system" Pod="csi-node-driver-nggfm" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-csi--node--driver--nggfm-eth0" Oct 8 20:18:26.186321 containerd[1488]: 2024-10-08 20:18:26.165 [INFO][4563] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9505a4d96d4ee1e13d9c88f77295d0446e17d143388b074ebbab70d017861722" Namespace="calico-system" Pod="csi-node-driver-nggfm" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-csi--node--driver--nggfm-eth0" Oct 8 20:18:26.186321 containerd[1488]: 2024-10-08 20:18:26.166 [INFO][4563] k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="9505a4d96d4ee1e13d9c88f77295d0446e17d143388b074ebbab70d017861722" Namespace="calico-system" Pod="csi-node-driver-nggfm" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-csi--node--driver--nggfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--f--c5c751ca26-k8s-csi--node--driver--nggfm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4acbfc0b-c482-45b1-9dfd-be4ca5e86826", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 17, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-f-c5c751ca26", ContainerID:"9505a4d96d4ee1e13d9c88f77295d0446e17d143388b074ebbab70d017861722", Pod:"csi-node-driver-nggfm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8dacff08011", MAC:"da:89:f6:74:40:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:18:26.186321 containerd[1488]: 2024-10-08 20:18:26.181 [INFO][4563] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9505a4d96d4ee1e13d9c88f77295d0446e17d143388b074ebbab70d017861722" Namespace="calico-system" Pod="csi-node-driver-nggfm" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-csi--node--driver--nggfm-eth0" Oct 8 20:18:26.233998 containerd[1488]: time="2024-10-08T20:18:26.233680032Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:18:26.236285 containerd[1488]: time="2024-10-08T20:18:26.235625794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:18:26.236285 containerd[1488]: time="2024-10-08T20:18:26.235646353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:18:26.236285 containerd[1488]: time="2024-10-08T20:18:26.236092738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:18:26.269020 systemd[1]: Started cri-containerd-9505a4d96d4ee1e13d9c88f77295d0446e17d143388b074ebbab70d017861722.scope - libcontainer container 9505a4d96d4ee1e13d9c88f77295d0446e17d143388b074ebbab70d017861722. 
Oct 8 20:18:26.296719 containerd[1488]: time="2024-10-08T20:18:26.296383654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nggfm,Uid:4acbfc0b-c482-45b1-9dfd-be4ca5e86826,Namespace:calico-system,Attempt:1,} returns sandbox id \"9505a4d96d4ee1e13d9c88f77295d0446e17d143388b074ebbab70d017861722\"" Oct 8 20:18:26.743284 systemd-networkd[1392]: cali2259f71b133: Gained IPv6LL Oct 8 20:18:26.962387 containerd[1488]: time="2024-10-08T20:18:26.961299057Z" level=info msg="StopPodSandbox for \"1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950\"" Oct 8 20:18:27.070946 containerd[1488]: 2024-10-08 20:18:27.023 [INFO][4650] k8s.go 608: Cleaning up netns ContainerID="1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" Oct 8 20:18:27.070946 containerd[1488]: 2024-10-08 20:18:27.023 [INFO][4650] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" iface="eth0" netns="/var/run/netns/cni-93755e30-fa43-ca56-34b6-e3fb5ccea056" Oct 8 20:18:27.070946 containerd[1488]: 2024-10-08 20:18:27.024 [INFO][4650] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" iface="eth0" netns="/var/run/netns/cni-93755e30-fa43-ca56-34b6-e3fb5ccea056" Oct 8 20:18:27.070946 containerd[1488]: 2024-10-08 20:18:27.025 [INFO][4650] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" iface="eth0" netns="/var/run/netns/cni-93755e30-fa43-ca56-34b6-e3fb5ccea056" Oct 8 20:18:27.070946 containerd[1488]: 2024-10-08 20:18:27.025 [INFO][4650] k8s.go 615: Releasing IP address(es) ContainerID="1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" Oct 8 20:18:27.070946 containerd[1488]: 2024-10-08 20:18:27.025 [INFO][4650] utils.go 188: Calico CNI releasing IP address ContainerID="1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" Oct 8 20:18:27.070946 containerd[1488]: 2024-10-08 20:18:27.060 [INFO][4656] ipam_plugin.go 417: Releasing address using handleID ContainerID="1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" HandleID="k8s-pod-network.1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" Workload="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--ws68t-eth0" Oct 8 20:18:27.070946 containerd[1488]: 2024-10-08 20:18:27.060 [INFO][4656] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:18:27.070946 containerd[1488]: 2024-10-08 20:18:27.060 [INFO][4656] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:18:27.070946 containerd[1488]: 2024-10-08 20:18:27.065 [WARNING][4656] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" HandleID="k8s-pod-network.1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" Workload="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--ws68t-eth0" Oct 8 20:18:27.070946 containerd[1488]: 2024-10-08 20:18:27.065 [INFO][4656] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" HandleID="k8s-pod-network.1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" Workload="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--ws68t-eth0" Oct 8 20:18:27.070946 containerd[1488]: 2024-10-08 20:18:27.067 [INFO][4656] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:18:27.070946 containerd[1488]: 2024-10-08 20:18:27.068 [INFO][4650] k8s.go 621: Teardown processing complete. ContainerID="1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" Oct 8 20:18:27.073787 containerd[1488]: time="2024-10-08T20:18:27.073751304Z" level=info msg="TearDown network for sandbox \"1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950\" successfully" Oct 8 20:18:27.073787 containerd[1488]: time="2024-10-08T20:18:27.073783466Z" level=info msg="StopPodSandbox for \"1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950\" returns successfully" Oct 8 20:18:27.073967 systemd[1]: run-netns-cni\x2d93755e30\x2dfa43\x2dca56\x2d34b6\x2de3fb5ccea056.mount: Deactivated successfully. Oct 8 20:18:27.074917 containerd[1488]: time="2024-10-08T20:18:27.074783057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ws68t,Uid:bef65186-c352-4e08-879d-0e3da7a23403,Namespace:kube-system,Attempt:1,}" Oct 8 20:18:27.191916 systemd-networkd[1392]: calie6595407832: Link UP Oct 8 20:18:27.193054 systemd-networkd[1392]: calie6595407832: Gained carrier Oct 8 20:18:27.206299 containerd[1488]: 2024-10-08 20:18:27.115 [INFO][4663] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--ws68t-eth0 coredns-7db6d8ff4d- kube-system bef65186-c352-4e08-879d-0e3da7a23403 835 0 2024-10-08 20:17:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-1-0-f-c5c751ca26 coredns-7db6d8ff4d-ws68t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie6595407832 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="cfcaf491fe92864a452f48f2abc52122bedcdb46a4dc67be15207df4cfd5f121" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ws68t" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--ws68t-" Oct 8 20:18:27.206299 containerd[1488]: 2024-10-08 20:18:27.115 [INFO][4663] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cfcaf491fe92864a452f48f2abc52122bedcdb46a4dc67be15207df4cfd5f121" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ws68t" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--ws68t-eth0" Oct 8 20:18:27.206299 containerd[1488]: 2024-10-08 20:18:27.147 [INFO][4674] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cfcaf491fe92864a452f48f2abc52122bedcdb46a4dc67be15207df4cfd5f121" HandleID="k8s-pod-network.cfcaf491fe92864a452f48f2abc52122bedcdb46a4dc67be15207df4cfd5f121" 
Workload="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--ws68t-eth0" Oct 8 20:18:27.206299 containerd[1488]: 2024-10-08 20:18:27.156 [INFO][4674] ipam_plugin.go 270: Auto assigning IP ContainerID="cfcaf491fe92864a452f48f2abc52122bedcdb46a4dc67be15207df4cfd5f121" HandleID="k8s-pod-network.cfcaf491fe92864a452f48f2abc52122bedcdb46a4dc67be15207df4cfd5f121" Workload="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--ws68t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050730), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-1-0-f-c5c751ca26", "pod":"coredns-7db6d8ff4d-ws68t", "timestamp":"2024-10-08 20:18:27.147621788 +0000 UTC"}, Hostname:"ci-4081-1-0-f-c5c751ca26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:18:27.206299 containerd[1488]: 2024-10-08 20:18:27.156 [INFO][4674] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:18:27.206299 containerd[1488]: 2024-10-08 20:18:27.156 [INFO][4674] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:18:27.206299 containerd[1488]: 2024-10-08 20:18:27.157 [INFO][4674] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-f-c5c751ca26' Oct 8 20:18:27.206299 containerd[1488]: 2024-10-08 20:18:27.158 [INFO][4674] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cfcaf491fe92864a452f48f2abc52122bedcdb46a4dc67be15207df4cfd5f121" host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:27.206299 containerd[1488]: 2024-10-08 20:18:27.162 [INFO][4674] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:27.206299 containerd[1488]: 2024-10-08 20:18:27.167 [INFO][4674] ipam.go 489: Trying affinity for 192.168.19.0/26 host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:27.206299 containerd[1488]: 2024-10-08 20:18:27.168 [INFO][4674] ipam.go 155: Attempting to load block cidr=192.168.19.0/26 host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:27.206299 containerd[1488]: 2024-10-08 20:18:27.170 [INFO][4674] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.0/26 host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:27.206299 containerd[1488]: 2024-10-08 20:18:27.170 [INFO][4674] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.0/26 handle="k8s-pod-network.cfcaf491fe92864a452f48f2abc52122bedcdb46a4dc67be15207df4cfd5f121" host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:27.206299 containerd[1488]: 2024-10-08 20:18:27.172 [INFO][4674] ipam.go 1685: Creating new handle: k8s-pod-network.cfcaf491fe92864a452f48f2abc52122bedcdb46a4dc67be15207df4cfd5f121 Oct 8 20:18:27.206299 containerd[1488]: 2024-10-08 20:18:27.177 [INFO][4674] ipam.go 1203: Writing block in order to claim IPs block=192.168.19.0/26 handle="k8s-pod-network.cfcaf491fe92864a452f48f2abc52122bedcdb46a4dc67be15207df4cfd5f121" host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:27.206299 containerd[1488]: 2024-10-08 20:18:27.183 [INFO][4674] ipam.go 1216: Successfully claimed IPs: [192.168.19.4/26] block=192.168.19.0/26 handle="k8s-pod-network.cfcaf491fe92864a452f48f2abc52122bedcdb46a4dc67be15207df4cfd5f121" host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:27.206299 containerd[1488]: 2024-10-08 20:18:27.183 [INFO][4674] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.4/26] handle="k8s-pod-network.cfcaf491fe92864a452f48f2abc52122bedcdb46a4dc67be15207df4cfd5f121" 
host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:27.206299 containerd[1488]: 2024-10-08 20:18:27.183 [INFO][4674] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:18:27.206299 containerd[1488]: 2024-10-08 20:18:27.183 [INFO][4674] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.19.4/26] IPv6=[] ContainerID="cfcaf491fe92864a452f48f2abc52122bedcdb46a4dc67be15207df4cfd5f121" HandleID="k8s-pod-network.cfcaf491fe92864a452f48f2abc52122bedcdb46a4dc67be15207df4cfd5f121" Workload="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--ws68t-eth0" Oct 8 20:18:27.206780 containerd[1488]: 2024-10-08 20:18:27.187 [INFO][4663] k8s.go 386: Populated endpoint ContainerID="cfcaf491fe92864a452f48f2abc52122bedcdb46a4dc67be15207df4cfd5f121" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ws68t" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--ws68t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--ws68t-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bef65186-c352-4e08-879d-0e3da7a23403", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-f-c5c751ca26", ContainerID:"", Pod:"coredns-7db6d8ff4d-ws68t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie6595407832", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:18:27.206780 containerd[1488]: 2024-10-08 20:18:27.187 [INFO][4663] k8s.go 387: Calico CNI using IPs: [192.168.19.4/32] ContainerID="cfcaf491fe92864a452f48f2abc52122bedcdb46a4dc67be15207df4cfd5f121" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ws68t" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--ws68t-eth0" Oct 8 20:18:27.206780 containerd[1488]: 2024-10-08 20:18:27.187 [INFO][4663] dataplane_linux.go 68: Setting the host side veth name to calie6595407832 ContainerID="cfcaf491fe92864a452f48f2abc52122bedcdb46a4dc67be15207df4cfd5f121" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ws68t" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--ws68t-eth0" Oct 8 20:18:27.206780 containerd[1488]: 2024-10-08 20:18:27.191 [INFO][4663] dataplane_linux.go 479: Disabling IPv4 forwarding 
ContainerID="cfcaf491fe92864a452f48f2abc52122bedcdb46a4dc67be15207df4cfd5f121" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ws68t" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--ws68t-eth0" Oct 8 20:18:27.206780 containerd[1488]: 2024-10-08 20:18:27.191 [INFO][4663] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cfcaf491fe92864a452f48f2abc52122bedcdb46a4dc67be15207df4cfd5f121" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ws68t" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--ws68t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--ws68t-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bef65186-c352-4e08-879d-0e3da7a23403", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-f-c5c751ca26", ContainerID:"cfcaf491fe92864a452f48f2abc52122bedcdb46a4dc67be15207df4cfd5f121", Pod:"coredns-7db6d8ff4d-ws68t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie6595407832", MAC:"aa:7d:90:68:b2:a1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:18:27.206780 containerd[1488]: 2024-10-08 20:18:27.203 [INFO][4663] k8s.go 500: Wrote updated endpoint to datastore ContainerID="cfcaf491fe92864a452f48f2abc52122bedcdb46a4dc67be15207df4cfd5f121" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ws68t" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--ws68t-eth0" Oct 8 20:18:27.242638 containerd[1488]: time="2024-10-08T20:18:27.241617138Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:18:27.242638 containerd[1488]: time="2024-10-08T20:18:27.241848656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:18:27.242638 containerd[1488]: time="2024-10-08T20:18:27.241905293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:18:27.243113 containerd[1488]: time="2024-10-08T20:18:27.242549422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:18:27.273007 systemd[1]: Started cri-containerd-cfcaf491fe92864a452f48f2abc52122bedcdb46a4dc67be15207df4cfd5f121.scope - libcontainer container cfcaf491fe92864a452f48f2abc52122bedcdb46a4dc67be15207df4cfd5f121. Oct 8 20:18:27.340211 containerd[1488]: time="2024-10-08T20:18:27.339918766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ws68t,Uid:bef65186-c352-4e08-879d-0e3da7a23403,Namespace:kube-system,Attempt:1,} returns sandbox id \"cfcaf491fe92864a452f48f2abc52122bedcdb46a4dc67be15207df4cfd5f121\"" Oct 8 20:18:27.346304 containerd[1488]: time="2024-10-08T20:18:27.346272610Z" level=info msg="CreateContainer within sandbox \"cfcaf491fe92864a452f48f2abc52122bedcdb46a4dc67be15207df4cfd5f121\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 20:18:27.368380 containerd[1488]: time="2024-10-08T20:18:27.368023442Z" level=info msg="CreateContainer within sandbox \"cfcaf491fe92864a452f48f2abc52122bedcdb46a4dc67be15207df4cfd5f121\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4d5fe2f14f7eccf58ec0154ef17f51366fdfffdc6e38339325a3748719647110\"" Oct 8 20:18:27.368603 containerd[1488]: time="2024-10-08T20:18:27.368573774Z" level=info msg="StartContainer for \"4d5fe2f14f7eccf58ec0154ef17f51366fdfffdc6e38339325a3748719647110\"" Oct 8 20:18:27.386500 systemd-networkd[1392]: cali8dacff08011: Gained IPv6LL Oct 8 20:18:27.412216 systemd[1]: Started cri-containerd-4d5fe2f14f7eccf58ec0154ef17f51366fdfffdc6e38339325a3748719647110.scope - libcontainer container 4d5fe2f14f7eccf58ec0154ef17f51366fdfffdc6e38339325a3748719647110. Oct 8 20:18:27.448436 containerd[1488]: time="2024-10-08T20:18:27.448322863Z" level=info msg="StartContainer for \"4d5fe2f14f7eccf58ec0154ef17f51366fdfffdc6e38339325a3748719647110\" returns successfully" Oct 8 20:18:27.942034 containerd[1488]: time="2024-10-08T20:18:27.941966206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:18:27.942983 containerd[1488]: time="2024-10-08T20:18:27.942880104Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Oct 8 20:18:27.943853 containerd[1488]: time="2024-10-08T20:18:27.943772553Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:18:27.945553 containerd[1488]: time="2024-10-08T20:18:27.945516835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:18:27.946187 containerd[1488]: time="2024-10-08T20:18:27.946045766Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 2.643667957s" Oct 8 20:18:27.946187 containerd[1488]: time="2024-10-08T20:18:27.946075040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference 
\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Oct 8 20:18:27.946850 containerd[1488]: time="2024-10-08T20:18:27.946833746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 8 20:18:27.956016 containerd[1488]: time="2024-10-08T20:18:27.955977981Z" level=info msg="CreateContainer within sandbox \"2fb727722f3ae60e493af205982b6d136c1d0e47efe1dab6cd6de7bc0213be20\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 8 20:18:27.965745 containerd[1488]: time="2024-10-08T20:18:27.965714587Z" level=info msg="CreateContainer within sandbox \"2fb727722f3ae60e493af205982b6d136c1d0e47efe1dab6cd6de7bc0213be20\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f327de6a5a2aaa828c09484f4265d1706c877e602fd266c7e2dba21b647e65a9\"" Oct 8 20:18:27.966772 containerd[1488]: time="2024-10-08T20:18:27.966324110Z" level=info msg="StartContainer for \"f327de6a5a2aaa828c09484f4265d1706c877e602fd266c7e2dba21b647e65a9\"" Oct 8 20:18:27.995994 systemd[1]: Started cri-containerd-f327de6a5a2aaa828c09484f4265d1706c877e602fd266c7e2dba21b647e65a9.scope - libcontainer container f327de6a5a2aaa828c09484f4265d1706c877e602fd266c7e2dba21b647e65a9. Oct 8 20:18:28.038216 containerd[1488]: time="2024-10-08T20:18:28.038088639Z" level=info msg="StartContainer for \"f327de6a5a2aaa828c09484f4265d1706c877e602fd266c7e2dba21b647e65a9\" returns successfully" Oct 8 20:18:28.290683 kubelet[3000]: I1008 20:18:28.290588 3000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-644c5654d4-wjcv2" podStartSLOduration=60.645328196 podStartE2EDuration="1m3.290250937s" podCreationTimestamp="2024-10-08 20:17:25 +0000 UTC" firstStartedPulling="2024-10-08 20:18:25.301807811 +0000 UTC m=+80.473392188" lastFinishedPulling="2024-10-08 20:18:27.946730551 +0000 UTC m=+83.118314929" observedRunningTime="2024-10-08 20:18:28.290070024 +0000 UTC m=+83.461654412" watchObservedRunningTime="2024-10-08 20:18:28.290250937 +0000 UTC m=+83.461835315" Oct 8 20:18:28.291208 kubelet[3000]: I1008 20:18:28.290737 3000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-ws68t" podStartSLOduration=69.290729382 podStartE2EDuration="1m9.290729382s" podCreationTimestamp="2024-10-08 20:17:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:18:28.271011689 +0000 UTC m=+83.442596086" watchObservedRunningTime="2024-10-08 20:18:28.290729382 +0000 UTC m=+83.462313791" Oct 8 20:18:28.983962 systemd-networkd[1392]: calie6595407832: Gained IPv6LL Oct 8 20:18:29.537529 containerd[1488]: time="2024-10-08T20:18:29.537474628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:18:29.538672 containerd[1488]: time="2024-10-08T20:18:29.538622591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Oct 8 20:18:29.539589 containerd[1488]: time="2024-10-08T20:18:29.539551048Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:18:29.541581 containerd[1488]: time="2024-10-08T20:18:29.541459239Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:18:29.542328 containerd[1488]: time="2024-10-08T20:18:29.542008268Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.594777791s" Oct 8 20:18:29.542328 containerd[1488]: time="2024-10-08T20:18:29.542038406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Oct 8 20:18:29.544079 containerd[1488]: time="2024-10-08T20:18:29.544057186Z" level=info msg="CreateContainer within sandbox \"9505a4d96d4ee1e13d9c88f77295d0446e17d143388b074ebbab70d017861722\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 8 20:18:29.571572 containerd[1488]: time="2024-10-08T20:18:29.571541458Z" level=info msg="CreateContainer within sandbox \"9505a4d96d4ee1e13d9c88f77295d0446e17d143388b074ebbab70d017861722\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8b782ff37507caf5f7b2164618a17b9403a63d33c1bbb86d8a326f4faa0c3ba2\"" Oct 8 20:18:29.572314 containerd[1488]: time="2024-10-08T20:18:29.572249107Z" level=info msg="StartContainer for \"8b782ff37507caf5f7b2164618a17b9403a63d33c1bbb86d8a326f4faa0c3ba2\"" Oct 8 20:18:29.611977 systemd[1]: Started cri-containerd-8b782ff37507caf5f7b2164618a17b9403a63d33c1bbb86d8a326f4faa0c3ba2.scope - libcontainer container 8b782ff37507caf5f7b2164618a17b9403a63d33c1bbb86d8a326f4faa0c3ba2. 
Oct 8 20:18:29.647935 containerd[1488]: time="2024-10-08T20:18:29.647881525Z" level=info msg="StartContainer for \"8b782ff37507caf5f7b2164618a17b9403a63d33c1bbb86d8a326f4faa0c3ba2\" returns successfully" Oct 8 20:18:29.649283 containerd[1488]: time="2024-10-08T20:18:29.649240647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 8 20:18:31.535759 containerd[1488]: time="2024-10-08T20:18:31.535675197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:18:31.536729 containerd[1488]: time="2024-10-08T20:18:31.536678606Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Oct 8 20:18:31.537624 containerd[1488]: time="2024-10-08T20:18:31.537573569Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:18:31.539508 containerd[1488]: time="2024-10-08T20:18:31.539457805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:18:31.540476 containerd[1488]: time="2024-10-08T20:18:31.540023816Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.890737755s" Oct 8 20:18:31.540476 containerd[1488]: time="2024-10-08T20:18:31.540059483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Oct 8 20:18:31.542203 containerd[1488]: time="2024-10-08T20:18:31.542141855Z" level=info msg="CreateContainer within sandbox \"9505a4d96d4ee1e13d9c88f77295d0446e17d143388b074ebbab70d017861722\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 8 20:18:31.556731 containerd[1488]: time="2024-10-08T20:18:31.556689816Z" level=info msg="CreateContainer within sandbox \"9505a4d96d4ee1e13d9c88f77295d0446e17d143388b074ebbab70d017861722\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"fd66ba55f61d179dd531ad561fab12814106c1617e65317fcd8f54edd2b639a6\"" Oct 8 20:18:31.557228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2534290217.mount: Deactivated successfully. Oct 8 20:18:31.558411 containerd[1488]: time="2024-10-08T20:18:31.557358450Z" level=info msg="StartContainer for \"fd66ba55f61d179dd531ad561fab12814106c1617e65317fcd8f54edd2b639a6\"" Oct 8 20:18:31.599066 systemd[1]: Started cri-containerd-fd66ba55f61d179dd531ad561fab12814106c1617e65317fcd8f54edd2b639a6.scope - libcontainer container fd66ba55f61d179dd531ad561fab12814106c1617e65317fcd8f54edd2b639a6. 
Oct 8 20:18:31.628748 containerd[1488]: time="2024-10-08T20:18:31.628679930Z" level=info msg="StartContainer for \"fd66ba55f61d179dd531ad561fab12814106c1617e65317fcd8f54edd2b639a6\" returns successfully" Oct 8 20:18:32.160752 kubelet[3000]: I1008 20:18:32.160699 3000 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 8 20:18:32.163419 kubelet[3000]: I1008 20:18:32.163393 3000 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 8 20:18:32.250803 kubelet[3000]: I1008 20:18:32.250672 3000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-nggfm" podStartSLOduration=62.008534878 podStartE2EDuration="1m7.250653411s" podCreationTimestamp="2024-10-08 20:17:25 +0000 UTC" firstStartedPulling="2024-10-08 20:18:26.298631337 +0000 UTC m=+81.470215715" lastFinishedPulling="2024-10-08 20:18:31.54074987 +0000 UTC m=+86.712334248" observedRunningTime="2024-10-08 20:18:32.250352632 +0000 UTC m=+87.421937010" watchObservedRunningTime="2024-10-08 20:18:32.250653411 +0000 UTC m=+87.422237809" Oct 8 20:18:50.169954 kubelet[3000]: I1008 20:18:50.169896 3000 topology_manager.go:215] "Topology Admit Handler" podUID="ff2cddef-fa71-421a-b5f4-8890386ab918" podNamespace="calico-apiserver" podName="calico-apiserver-c97ff5589-8xhsk" Oct 8 20:18:50.180497 kubelet[3000]: I1008 20:18:50.180463 3000 topology_manager.go:215] "Topology Admit Handler" podUID="21b87097-269a-4318-88da-0fe671dd1c76" podNamespace="calico-apiserver" podName="calico-apiserver-c97ff5589-57f86" Oct 8 20:18:50.192537 systemd[1]: Created slice kubepods-besteffort-podff2cddef_fa71_421a_b5f4_8890386ab918.slice - libcontainer container kubepods-besteffort-podff2cddef_fa71_421a_b5f4_8890386ab918.slice. Oct 8 20:18:50.200105 systemd[1]: Created slice kubepods-besteffort-pod21b87097_269a_4318_88da_0fe671dd1c76.slice - libcontainer container kubepods-besteffort-pod21b87097_269a_4318_88da_0fe671dd1c76.slice. 
Oct 8 20:18:50.258877 kubelet[3000]: I1008 20:18:50.258823 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbtcz\" (UniqueName: \"kubernetes.io/projected/ff2cddef-fa71-421a-b5f4-8890386ab918-kube-api-access-mbtcz\") pod \"calico-apiserver-c97ff5589-8xhsk\" (UID: \"ff2cddef-fa71-421a-b5f4-8890386ab918\") " pod="calico-apiserver/calico-apiserver-c97ff5589-8xhsk" Oct 8 20:18:50.259016 kubelet[3000]: I1008 20:18:50.258912 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ff2cddef-fa71-421a-b5f4-8890386ab918-calico-apiserver-certs\") pod \"calico-apiserver-c97ff5589-8xhsk\" (UID: \"ff2cddef-fa71-421a-b5f4-8890386ab918\") " pod="calico-apiserver/calico-apiserver-c97ff5589-8xhsk" Oct 8 20:18:50.360081 kubelet[3000]: I1008 20:18:50.360014 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/21b87097-269a-4318-88da-0fe671dd1c76-calico-apiserver-certs\") pod \"calico-apiserver-c97ff5589-57f86\" (UID: \"21b87097-269a-4318-88da-0fe671dd1c76\") " pod="calico-apiserver/calico-apiserver-c97ff5589-57f86" Oct 8 20:18:50.360081 kubelet[3000]: I1008 20:18:50.360081 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lrrf\" (UniqueName: \"kubernetes.io/projected/21b87097-269a-4318-88da-0fe671dd1c76-kube-api-access-8lrrf\") pod \"calico-apiserver-c97ff5589-57f86\" (UID: \"21b87097-269a-4318-88da-0fe671dd1c76\") " pod="calico-apiserver/calico-apiserver-c97ff5589-57f86" Oct 8 20:18:50.360764 kubelet[3000]: E1008 20:18:50.360280 3000 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 8 20:18:50.364799 kubelet[3000]: E1008 20:18:50.364736 3000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff2cddef-fa71-421a-b5f4-8890386ab918-calico-apiserver-certs podName:ff2cddef-fa71-421a-b5f4-8890386ab918 nodeName:}" failed. No retries permitted until 2024-10-08 20:18:50.860328815 +0000 UTC m=+106.031913203 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/ff2cddef-fa71-421a-b5f4-8890386ab918-calico-apiserver-certs") pod "calico-apiserver-c97ff5589-8xhsk" (UID: "ff2cddef-fa71-421a-b5f4-8890386ab918") : secret "calico-apiserver-certs" not found Oct 8 20:18:50.504229 containerd[1488]: time="2024-10-08T20:18:50.504195716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c97ff5589-57f86,Uid:21b87097-269a-4318-88da-0fe671dd1c76,Namespace:calico-apiserver,Attempt:0,}" Oct 8 20:18:50.660593 systemd-networkd[1392]: calid85ab21ee51: Link UP Oct 8 20:18:50.661331 systemd-networkd[1392]: calid85ab21ee51: Gained carrier Oct 8 20:18:50.698237 containerd[1488]: 2024-10-08 20:18:50.576 [INFO][5009] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--f--c5c751ca26-k8s-calico--apiserver--c97ff5589--57f86-eth0 calico-apiserver-c97ff5589- calico-apiserver 21b87097-269a-4318-88da-0fe671dd1c76 958 0 2024-10-08 20:18:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c97ff5589 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-1-0-f-c5c751ca26 calico-apiserver-c97ff5589-57f86 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid85ab21ee51 [] []}} ContainerID="ec46b4cee12696adc41a6c5f54fc712f359ad4d4dc63ddb5133a18b16c6b6677" Namespace="calico-apiserver" Pod="calico-apiserver-c97ff5589-57f86" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-calico--apiserver--c97ff5589--57f86-" Oct 8 20:18:50.698237 containerd[1488]: 2024-10-08 20:18:50.576 [INFO][5009] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ec46b4cee12696adc41a6c5f54fc712f359ad4d4dc63ddb5133a18b16c6b6677" Namespace="calico-apiserver" Pod="calico-apiserver-c97ff5589-57f86" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-calico--apiserver--c97ff5589--57f86-eth0" Oct 8 20:18:50.698237 containerd[1488]: 2024-10-08 20:18:50.618 [INFO][5020] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec46b4cee12696adc41a6c5f54fc712f359ad4d4dc63ddb5133a18b16c6b6677" HandleID="k8s-pod-network.ec46b4cee12696adc41a6c5f54fc712f359ad4d4dc63ddb5133a18b16c6b6677" Workload="ci--4081--1--0--f--c5c751ca26-k8s-calico--apiserver--c97ff5589--57f86-eth0" Oct 8 20:18:50.698237 containerd[1488]: 2024-10-08 20:18:50.628 [INFO][5020] ipam_plugin.go 270: Auto assigning IP ContainerID="ec46b4cee12696adc41a6c5f54fc712f359ad4d4dc63ddb5133a18b16c6b6677" HandleID="k8s-pod-network.ec46b4cee12696adc41a6c5f54fc712f359ad4d4dc63ddb5133a18b16c6b6677" Workload="ci--4081--1--0--f--c5c751ca26-k8s-calico--apiserver--c97ff5589--57f86-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a270), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-1-0-f-c5c751ca26", "pod":"calico-apiserver-c97ff5589-57f86", "timestamp":"2024-10-08 20:18:50.618095719 +0000 UTC"}, Hostname:"ci-4081-1-0-f-c5c751ca26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:18:50.698237 containerd[1488]: 2024-10-08 20:18:50.628 [INFO][5020] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Oct 8 20:18:50.698237 containerd[1488]: 2024-10-08 20:18:50.628 [INFO][5020] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:18:50.698237 containerd[1488]: 2024-10-08 20:18:50.628 [INFO][5020] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-f-c5c751ca26' Oct 8 20:18:50.698237 containerd[1488]: 2024-10-08 20:18:50.630 [INFO][5020] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ec46b4cee12696adc41a6c5f54fc712f359ad4d4dc63ddb5133a18b16c6b6677" host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:50.698237 containerd[1488]: 2024-10-08 20:18:50.634 [INFO][5020] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:50.698237 containerd[1488]: 2024-10-08 20:18:50.638 [INFO][5020] ipam.go 489: Trying affinity for 192.168.19.0/26 host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:50.698237 containerd[1488]: 2024-10-08 20:18:50.640 [INFO][5020] ipam.go 155: Attempting to load block cidr=192.168.19.0/26 host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:50.698237 containerd[1488]: 2024-10-08 20:18:50.642 [INFO][5020] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.0/26 host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:50.698237 containerd[1488]: 2024-10-08 20:18:50.642 [INFO][5020] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.0/26 handle="k8s-pod-network.ec46b4cee12696adc41a6c5f54fc712f359ad4d4dc63ddb5133a18b16c6b6677" host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:50.698237 containerd[1488]: 2024-10-08 20:18:50.644 [INFO][5020] ipam.go 1685: Creating new handle: k8s-pod-network.ec46b4cee12696adc41a6c5f54fc712f359ad4d4dc63ddb5133a18b16c6b6677 Oct 8 20:18:50.698237 containerd[1488]: 2024-10-08 20:18:50.647 [INFO][5020] ipam.go 1203: Writing block in order to claim IPs block=192.168.19.0/26 handle="k8s-pod-network.ec46b4cee12696adc41a6c5f54fc712f359ad4d4dc63ddb5133a18b16c6b6677" host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:50.698237 containerd[1488]: 2024-10-08 20:18:50.653 [INFO][5020] ipam.go 1216: Successfully claimed IPs: [192.168.19.5/26] block=192.168.19.0/26 handle="k8s-pod-network.ec46b4cee12696adc41a6c5f54fc712f359ad4d4dc63ddb5133a18b16c6b6677" host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:50.698237 containerd[1488]: 2024-10-08 20:18:50.653 [INFO][5020] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.5/26] handle="k8s-pod-network.ec46b4cee12696adc41a6c5f54fc712f359ad4d4dc63ddb5133a18b16c6b6677" host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:50.698237 containerd[1488]: 2024-10-08 20:18:50.653 [INFO][5020] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 20:18:50.698237 containerd[1488]: 2024-10-08 20:18:50.653 [INFO][5020] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.19.5/26] IPv6=[] ContainerID="ec46b4cee12696adc41a6c5f54fc712f359ad4d4dc63ddb5133a18b16c6b6677" HandleID="k8s-pod-network.ec46b4cee12696adc41a6c5f54fc712f359ad4d4dc63ddb5133a18b16c6b6677" Workload="ci--4081--1--0--f--c5c751ca26-k8s-calico--apiserver--c97ff5589--57f86-eth0" Oct 8 20:18:50.703318 containerd[1488]: 2024-10-08 20:18:50.656 [INFO][5009] k8s.go 386: Populated endpoint ContainerID="ec46b4cee12696adc41a6c5f54fc712f359ad4d4dc63ddb5133a18b16c6b6677" Namespace="calico-apiserver" Pod="calico-apiserver-c97ff5589-57f86" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-calico--apiserver--c97ff5589--57f86-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--f--c5c751ca26-k8s-calico--apiserver--c97ff5589--57f86-eth0", GenerateName:"calico-apiserver-c97ff5589-", Namespace:"calico-apiserver", SelfLink:"", UID:"21b87097-269a-4318-88da-0fe671dd1c76", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 18, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c97ff5589", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-f-c5c751ca26", ContainerID:"", Pod:"calico-apiserver-c97ff5589-57f86", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid85ab21ee51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:18:50.703318 containerd[1488]: 2024-10-08 20:18:50.657 [INFO][5009] k8s.go 387: Calico CNI using IPs: [192.168.19.5/32] ContainerID="ec46b4cee12696adc41a6c5f54fc712f359ad4d4dc63ddb5133a18b16c6b6677" Namespace="calico-apiserver" Pod="calico-apiserver-c97ff5589-57f86" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-calico--apiserver--c97ff5589--57f86-eth0" Oct 8 20:18:50.703318 containerd[1488]: 2024-10-08 20:18:50.657 [INFO][5009] dataplane_linux.go 68: Setting the host side veth name to calid85ab21ee51 ContainerID="ec46b4cee12696adc41a6c5f54fc712f359ad4d4dc63ddb5133a18b16c6b6677" Namespace="calico-apiserver" Pod="calico-apiserver-c97ff5589-57f86" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-calico--apiserver--c97ff5589--57f86-eth0" Oct 8 20:18:50.703318 containerd[1488]: 2024-10-08 20:18:50.662 [INFO][5009] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ec46b4cee12696adc41a6c5f54fc712f359ad4d4dc63ddb5133a18b16c6b6677" Namespace="calico-apiserver" Pod="calico-apiserver-c97ff5589-57f86" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-calico--apiserver--c97ff5589--57f86-eth0" Oct 8 20:18:50.703318 containerd[1488]: 2024-10-08 20:18:50.662 [INFO][5009] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ec46b4cee12696adc41a6c5f54fc712f359ad4d4dc63ddb5133a18b16c6b6677" Namespace="calico-apiserver" Pod="calico-apiserver-c97ff5589-57f86" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-calico--apiserver--c97ff5589--57f86-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--f--c5c751ca26-k8s-calico--apiserver--c97ff5589--57f86-eth0", GenerateName:"calico-apiserver-c97ff5589-", Namespace:"calico-apiserver", SelfLink:"", UID:"21b87097-269a-4318-88da-0fe671dd1c76", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 18, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c97ff5589", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-f-c5c751ca26", ContainerID:"ec46b4cee12696adc41a6c5f54fc712f359ad4d4dc63ddb5133a18b16c6b6677", Pod:"calico-apiserver-c97ff5589-57f86", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid85ab21ee51", MAC:"6e:bc:84:fe:9e:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:18:50.703318 containerd[1488]: 2024-10-08 20:18:50.674 [INFO][5009] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ec46b4cee12696adc41a6c5f54fc712f359ad4d4dc63ddb5133a18b16c6b6677" Namespace="calico-apiserver" Pod="calico-apiserver-c97ff5589-57f86" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-calico--apiserver--c97ff5589--57f86-eth0" Oct 8 20:18:50.730530 containerd[1488]: time="2024-10-08T20:18:50.730429838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:18:50.730530 containerd[1488]: time="2024-10-08T20:18:50.730480404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:18:50.730788 containerd[1488]: time="2024-10-08T20:18:50.730510630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:18:50.730788 containerd[1488]: time="2024-10-08T20:18:50.730628744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:18:50.756022 systemd[1]: Started cri-containerd-ec46b4cee12696adc41a6c5f54fc712f359ad4d4dc63ddb5133a18b16c6b6677.scope - libcontainer container ec46b4cee12696adc41a6c5f54fc712f359ad4d4dc63ddb5133a18b16c6b6677. 
Oct 8 20:18:50.799183 containerd[1488]: time="2024-10-08T20:18:50.799121339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c97ff5589-57f86,Uid:21b87097-269a-4318-88da-0fe671dd1c76,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ec46b4cee12696adc41a6c5f54fc712f359ad4d4dc63ddb5133a18b16c6b6677\"" Oct 8 20:18:50.800674 containerd[1488]: time="2024-10-08T20:18:50.800647427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 8 20:18:51.097971 containerd[1488]: time="2024-10-08T20:18:51.097796547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c97ff5589-8xhsk,Uid:ff2cddef-fa71-421a-b5f4-8890386ab918,Namespace:calico-apiserver,Attempt:0,}" Oct 8 20:18:51.209653 systemd-networkd[1392]: cali46856a86b77: Link UP Oct 8 20:18:51.210592 systemd-networkd[1392]: cali46856a86b77: Gained carrier Oct 8 20:18:51.226785 containerd[1488]: 2024-10-08 20:18:51.137 [INFO][5083] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--f--c5c751ca26-k8s-calico--apiserver--c97ff5589--8xhsk-eth0 calico-apiserver-c97ff5589- calico-apiserver ff2cddef-fa71-421a-b5f4-8890386ab918 956 0 2024-10-08 20:18:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c97ff5589 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-1-0-f-c5c751ca26 calico-apiserver-c97ff5589-8xhsk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali46856a86b77 [] []}} ContainerID="21b063d0be32e659a169296827ea9640398a9f37de5e6648f19d41a213fb725e" Namespace="calico-apiserver" Pod="calico-apiserver-c97ff5589-8xhsk" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-calico--apiserver--c97ff5589--8xhsk-" Oct 8 20:18:51.226785 containerd[1488]: 2024-10-08 20:18:51.137 [INFO][5083] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="21b063d0be32e659a169296827ea9640398a9f37de5e6648f19d41a213fb725e" Namespace="calico-apiserver" Pod="calico-apiserver-c97ff5589-8xhsk" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-calico--apiserver--c97ff5589--8xhsk-eth0" Oct 8 20:18:51.226785 containerd[1488]: 2024-10-08 20:18:51.174 [INFO][5095] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="21b063d0be32e659a169296827ea9640398a9f37de5e6648f19d41a213fb725e" HandleID="k8s-pod-network.21b063d0be32e659a169296827ea9640398a9f37de5e6648f19d41a213fb725e" Workload="ci--4081--1--0--f--c5c751ca26-k8s-calico--apiserver--c97ff5589--8xhsk-eth0" Oct 8 20:18:51.226785 containerd[1488]: 2024-10-08 20:18:51.181 [INFO][5095] ipam_plugin.go 270: Auto assigning IP ContainerID="21b063d0be32e659a169296827ea9640398a9f37de5e6648f19d41a213fb725e" HandleID="k8s-pod-network.21b063d0be32e659a169296827ea9640398a9f37de5e6648f19d41a213fb725e" Workload="ci--4081--1--0--f--c5c751ca26-k8s-calico--apiserver--c97ff5589--8xhsk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a040), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-1-0-f-c5c751ca26", "pod":"calico-apiserver-c97ff5589-8xhsk", "timestamp":"2024-10-08 20:18:51.174306943 +0000 UTC"}, Hostname:"ci-4081-1-0-f-c5c751ca26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Oct 8 20:18:51.226785 containerd[1488]: 2024-10-08 20:18:51.181 [INFO][5095] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:18:51.226785 containerd[1488]: 2024-10-08 20:18:51.181 [INFO][5095] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:18:51.226785 containerd[1488]: 2024-10-08 20:18:51.181 [INFO][5095] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-f-c5c751ca26' Oct 8 20:18:51.226785 containerd[1488]: 2024-10-08 20:18:51.182 [INFO][5095] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.21b063d0be32e659a169296827ea9640398a9f37de5e6648f19d41a213fb725e" host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:51.226785 containerd[1488]: 2024-10-08 20:18:51.186 [INFO][5095] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:51.226785 containerd[1488]: 2024-10-08 20:18:51.191 [INFO][5095] ipam.go 489: Trying affinity for 192.168.19.0/26 host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:51.226785 containerd[1488]: 2024-10-08 20:18:51.193 [INFO][5095] ipam.go 155: Attempting to load block cidr=192.168.19.0/26 host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:51.226785 containerd[1488]: 2024-10-08 20:18:51.195 [INFO][5095] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.0/26 host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:51.226785 containerd[1488]: 2024-10-08 20:18:51.195 [INFO][5095] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.0/26 handle="k8s-pod-network.21b063d0be32e659a169296827ea9640398a9f37de5e6648f19d41a213fb725e" host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:51.226785 containerd[1488]: 2024-10-08 20:18:51.196 [INFO][5095] ipam.go 1685: Creating new handle: k8s-pod-network.21b063d0be32e659a169296827ea9640398a9f37de5e6648f19d41a213fb725e Oct 8 20:18:51.226785 containerd[1488]: 2024-10-08 20:18:51.199 [INFO][5095] ipam.go 1203: Writing block in order to claim IPs block=192.168.19.0/26 handle="k8s-pod-network.21b063d0be32e659a169296827ea9640398a9f37de5e6648f19d41a213fb725e" host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:51.226785 containerd[1488]: 2024-10-08 20:18:51.203 [INFO][5095] ipam.go 1216: Successfully claimed IPs: [192.168.19.6/26] block=192.168.19.0/26 handle="k8s-pod-network.21b063d0be32e659a169296827ea9640398a9f37de5e6648f19d41a213fb725e" host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:51.226785 containerd[1488]: 2024-10-08 20:18:51.203 [INFO][5095] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.6/26] handle="k8s-pod-network.21b063d0be32e659a169296827ea9640398a9f37de5e6648f19d41a213fb725e" host="ci-4081-1-0-f-c5c751ca26" Oct 8 20:18:51.226785 containerd[1488]: 2024-10-08 20:18:51.203 [INFO][5095] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 20:18:51.226785 containerd[1488]: 2024-10-08 20:18:51.203 [INFO][5095] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.19.6/26] IPv6=[] ContainerID="21b063d0be32e659a169296827ea9640398a9f37de5e6648f19d41a213fb725e" HandleID="k8s-pod-network.21b063d0be32e659a169296827ea9640398a9f37de5e6648f19d41a213fb725e" Workload="ci--4081--1--0--f--c5c751ca26-k8s-calico--apiserver--c97ff5589--8xhsk-eth0" Oct 8 20:18:51.229074 containerd[1488]: 2024-10-08 20:18:51.206 [INFO][5083] k8s.go 386: Populated endpoint ContainerID="21b063d0be32e659a169296827ea9640398a9f37de5e6648f19d41a213fb725e" Namespace="calico-apiserver" Pod="calico-apiserver-c97ff5589-8xhsk" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-calico--apiserver--c97ff5589--8xhsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--f--c5c751ca26-k8s-calico--apiserver--c97ff5589--8xhsk-eth0", GenerateName:"calico-apiserver-c97ff5589-", Namespace:"calico-apiserver", SelfLink:"", UID:"ff2cddef-fa71-421a-b5f4-8890386ab918", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 18, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c97ff5589", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-f-c5c751ca26", ContainerID:"", Pod:"calico-apiserver-c97ff5589-8xhsk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali46856a86b77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:18:51.229074 containerd[1488]: 2024-10-08 20:18:51.207 [INFO][5083] k8s.go 387: Calico CNI using IPs: [192.168.19.6/32] ContainerID="21b063d0be32e659a169296827ea9640398a9f37de5e6648f19d41a213fb725e" Namespace="calico-apiserver" Pod="calico-apiserver-c97ff5589-8xhsk" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-calico--apiserver--c97ff5589--8xhsk-eth0" Oct 8 20:18:51.229074 containerd[1488]: 2024-10-08 20:18:51.207 [INFO][5083] dataplane_linux.go 68: Setting the host side veth name to cali46856a86b77 ContainerID="21b063d0be32e659a169296827ea9640398a9f37de5e6648f19d41a213fb725e" Namespace="calico-apiserver" Pod="calico-apiserver-c97ff5589-8xhsk" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-calico--apiserver--c97ff5589--8xhsk-eth0" Oct 8 20:18:51.229074 containerd[1488]: 2024-10-08 20:18:51.211 [INFO][5083] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="21b063d0be32e659a169296827ea9640398a9f37de5e6648f19d41a213fb725e" Namespace="calico-apiserver" Pod="calico-apiserver-c97ff5589-8xhsk" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-calico--apiserver--c97ff5589--8xhsk-eth0" Oct 8 20:18:51.229074 containerd[1488]: 2024-10-08 20:18:51.211 [INFO][5083] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="21b063d0be32e659a169296827ea9640398a9f37de5e6648f19d41a213fb725e" Namespace="calico-apiserver" Pod="calico-apiserver-c97ff5589-8xhsk" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-calico--apiserver--c97ff5589--8xhsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--f--c5c751ca26-k8s-calico--apiserver--c97ff5589--8xhsk-eth0", GenerateName:"calico-apiserver-c97ff5589-", Namespace:"calico-apiserver", SelfLink:"", UID:"ff2cddef-fa71-421a-b5f4-8890386ab918", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 18, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c97ff5589", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-f-c5c751ca26", ContainerID:"21b063d0be32e659a169296827ea9640398a9f37de5e6648f19d41a213fb725e", Pod:"calico-apiserver-c97ff5589-8xhsk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali46856a86b77", MAC:"fa:93:bf:c0:6f:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:18:51.229074 containerd[1488]: 2024-10-08 20:18:51.222 [INFO][5083] k8s.go 500: Wrote updated endpoint to datastore ContainerID="21b063d0be32e659a169296827ea9640398a9f37de5e6648f19d41a213fb725e" Namespace="calico-apiserver" Pod="calico-apiserver-c97ff5589-8xhsk" WorkloadEndpoint="ci--4081--1--0--f--c5c751ca26-k8s-calico--apiserver--c97ff5589--8xhsk-eth0" Oct 8 20:18:51.254228 containerd[1488]: time="2024-10-08T20:18:51.254064626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:18:51.254973 containerd[1488]: time="2024-10-08T20:18:51.254915145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:18:51.255042 containerd[1488]: time="2024-10-08T20:18:51.254978565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:18:51.255355 containerd[1488]: time="2024-10-08T20:18:51.255221004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:18:51.277995 systemd[1]: Started cri-containerd-21b063d0be32e659a169296827ea9640398a9f37de5e6648f19d41a213fb725e.scope - libcontainer container 21b063d0be32e659a169296827ea9640398a9f37de5e6648f19d41a213fb725e. 
Oct 8 20:18:51.322043 containerd[1488]: time="2024-10-08T20:18:51.322008661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c97ff5589-8xhsk,Uid:ff2cddef-fa71-421a-b5f4-8890386ab918,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"21b063d0be32e659a169296827ea9640398a9f37de5e6648f19d41a213fb725e\"" Oct 8 20:18:52.535044 systemd-networkd[1392]: calid85ab21ee51: Gained IPv6LL Oct 8 20:18:53.048913 systemd-networkd[1392]: cali46856a86b77: Gained IPv6LL Oct 8 20:18:53.573535 containerd[1488]: time="2024-10-08T20:18:53.573485084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:18:53.574588 containerd[1488]: time="2024-10-08T20:18:53.574548196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Oct 8 20:18:53.575373 containerd[1488]: time="2024-10-08T20:18:53.575323152Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:18:53.577283 containerd[1488]: time="2024-10-08T20:18:53.577243707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:18:53.578102 containerd[1488]: time="2024-10-08T20:18:53.577742570Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 2.777064426s" Oct 8 20:18:53.578102 containerd[1488]: time="2024-10-08T20:18:53.577771536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Oct 8 20:18:53.579014 containerd[1488]: time="2024-10-08T20:18:53.578699341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 8 20:18:53.580891 containerd[1488]: time="2024-10-08T20:18:53.580724104Z" level=info msg="CreateContainer within sandbox \"ec46b4cee12696adc41a6c5f54fc712f359ad4d4dc63ddb5133a18b16c6b6677\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 8 20:18:53.604779 containerd[1488]: time="2024-10-08T20:18:53.604738728Z" level=info msg="CreateContainer within sandbox \"ec46b4cee12696adc41a6c5f54fc712f359ad4d4dc63ddb5133a18b16c6b6677\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ab91844c32e0171aebaa1f6a6d7b00f8a8bd88f53b29c64b347b5c8406be1ba7\"" Oct 8 20:18:53.606075 containerd[1488]: time="2024-10-08T20:18:53.605177018Z" level=info msg="StartContainer for \"ab91844c32e0171aebaa1f6a6d7b00f8a8bd88f53b29c64b347b5c8406be1ba7\"" Oct 8 20:18:53.638987 systemd[1]: Started cri-containerd-ab91844c32e0171aebaa1f6a6d7b00f8a8bd88f53b29c64b347b5c8406be1ba7.scope - libcontainer container ab91844c32e0171aebaa1f6a6d7b00f8a8bd88f53b29c64b347b5c8406be1ba7. 
Oct 8 20:18:53.675381 containerd[1488]: time="2024-10-08T20:18:53.675296400Z" level=info msg="StartContainer for \"ab91844c32e0171aebaa1f6a6d7b00f8a8bd88f53b29c64b347b5c8406be1ba7\" returns successfully" Oct 8 20:18:53.984038 containerd[1488]: time="2024-10-08T20:18:53.983929932Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:18:53.985611 containerd[1488]: time="2024-10-08T20:18:53.985564335Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=77" Oct 8 20:18:53.987144 containerd[1488]: time="2024-10-08T20:18:53.987117503Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 408.393585ms" Oct 8 20:18:53.987222 containerd[1488]: time="2024-10-08T20:18:53.987147631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Oct 8 20:18:53.994043 containerd[1488]: time="2024-10-08T20:18:53.994014015Z" level=info msg="CreateContainer within sandbox \"21b063d0be32e659a169296827ea9640398a9f37de5e6648f19d41a213fb725e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 8 20:18:54.011094 containerd[1488]: time="2024-10-08T20:18:54.011062878Z" level=info msg="CreateContainer within sandbox \"21b063d0be32e659a169296827ea9640398a9f37de5e6648f19d41a213fb725e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"de23628bee7189023c74599dcd96eaad50f70638a8fe3fbd529bccbdda85bbcc\"" Oct 8 20:18:54.011589 containerd[1488]: time="2024-10-08T20:18:54.011562243Z" level=info msg="StartContainer for \"de23628bee7189023c74599dcd96eaad50f70638a8fe3fbd529bccbdda85bbcc\"" Oct 8 20:18:54.042215 systemd[1]: Started cri-containerd-de23628bee7189023c74599dcd96eaad50f70638a8fe3fbd529bccbdda85bbcc.scope - libcontainer container de23628bee7189023c74599dcd96eaad50f70638a8fe3fbd529bccbdda85bbcc. 
Oct 8 20:18:54.086536 containerd[1488]: time="2024-10-08T20:18:54.086472191Z" level=info msg="StartContainer for \"de23628bee7189023c74599dcd96eaad50f70638a8fe3fbd529bccbdda85bbcc\" returns successfully" Oct 8 20:18:54.323507 kubelet[3000]: I1008 20:18:54.323458 3000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c97ff5589-8xhsk" podStartSLOduration=1.655966817 podStartE2EDuration="4.323440125s" podCreationTimestamp="2024-10-08 20:18:50 +0000 UTC" firstStartedPulling="2024-10-08 20:18:51.323494352 +0000 UTC m=+106.495078731" lastFinishedPulling="2024-10-08 20:18:53.99096766 +0000 UTC m=+109.162552039" observedRunningTime="2024-10-08 20:18:54.307495791 +0000 UTC m=+109.479080169" watchObservedRunningTime="2024-10-08 20:18:54.323440125 +0000 UTC m=+109.495024502" Oct 8 20:18:54.347943 kubelet[3000]: I1008 20:18:54.347766 3000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c97ff5589-57f86" podStartSLOduration=1.569526156 podStartE2EDuration="4.347745588s" podCreationTimestamp="2024-10-08 20:18:50 +0000 UTC" firstStartedPulling="2024-10-08 20:18:50.800317452 +0000 UTC m=+105.971901830" lastFinishedPulling="2024-10-08 20:18:53.578536885 +0000 UTC m=+108.750121262" observedRunningTime="2024-10-08 20:18:54.324602363 +0000 UTC m=+109.496186741" watchObservedRunningTime="2024-10-08 20:18:54.347745588 +0000 UTC m=+109.519329967" Oct 8 20:19:04.947883 containerd[1488]: time="2024-10-08T20:19:04.947797997Z" level=info msg="StopPodSandbox for \"1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950\"" Oct 8 20:19:05.069392 containerd[1488]: 2024-10-08 20:19:05.020 [WARNING][5279] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--ws68t-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bef65186-c352-4e08-879d-0e3da7a23403", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-f-c5c751ca26", ContainerID:"cfcaf491fe92864a452f48f2abc52122bedcdb46a4dc67be15207df4cfd5f121", Pod:"coredns-7db6d8ff4d-ws68t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie6595407832", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:19:05.069392 containerd[1488]: 2024-10-08 20:19:05.021 [INFO][5279] k8s.go 608: Cleaning up netns ContainerID="1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" Oct 8 20:19:05.069392 containerd[1488]: 2024-10-08 20:19:05.021 [INFO][5279] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" iface="eth0" netns="" Oct 8 20:19:05.069392 containerd[1488]: 2024-10-08 20:19:05.021 [INFO][5279] k8s.go 615: Releasing IP address(es) ContainerID="1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" Oct 8 20:19:05.069392 containerd[1488]: 2024-10-08 20:19:05.021 [INFO][5279] utils.go 188: Calico CNI releasing IP address ContainerID="1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" Oct 8 20:19:05.069392 containerd[1488]: 2024-10-08 20:19:05.053 [INFO][5286] ipam_plugin.go 417: Releasing address using handleID ContainerID="1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" HandleID="k8s-pod-network.1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" Workload="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--ws68t-eth0" Oct 8 20:19:05.069392 containerd[1488]: 2024-10-08 20:19:05.054 [INFO][5286] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:19:05.069392 containerd[1488]: 2024-10-08 20:19:05.054 [INFO][5286] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:19:05.069392 containerd[1488]: 2024-10-08 20:19:05.059 [WARNING][5286] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" HandleID="k8s-pod-network.1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" Workload="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--ws68t-eth0" Oct 8 20:19:05.069392 containerd[1488]: 2024-10-08 20:19:05.060 [INFO][5286] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" HandleID="k8s-pod-network.1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" Workload="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--ws68t-eth0" Oct 8 20:19:05.069392 containerd[1488]: 2024-10-08 20:19:05.062 [INFO][5286] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:19:05.069392 containerd[1488]: 2024-10-08 20:19:05.066 [INFO][5279] k8s.go 621: Teardown processing complete. ContainerID="1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" Oct 8 20:19:05.071517 containerd[1488]: time="2024-10-08T20:19:05.069416614Z" level=info msg="TearDown network for sandbox \"1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950\" successfully" Oct 8 20:19:05.071517 containerd[1488]: time="2024-10-08T20:19:05.069439446Z" level=info msg="StopPodSandbox for \"1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950\" returns successfully" Oct 8 20:19:05.099564 containerd[1488]: time="2024-10-08T20:19:05.099345932Z" level=info msg="RemovePodSandbox for \"1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950\"" Oct 8 20:19:05.099564 containerd[1488]: time="2024-10-08T20:19:05.099389694Z" level=info msg="Forcibly stopping sandbox \"1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950\"" Oct 8 20:19:05.175613 containerd[1488]: 2024-10-08 20:19:05.140 [WARNING][5304] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--ws68t-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bef65186-c352-4e08-879d-0e3da7a23403", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-f-c5c751ca26", ContainerID:"cfcaf491fe92864a452f48f2abc52122bedcdb46a4dc67be15207df4cfd5f121", Pod:"coredns-7db6d8ff4d-ws68t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie6595407832", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:19:05.175613 containerd[1488]: 2024-10-08 20:19:05.141 [INFO][5304] k8s.go 608: Cleaning up netns ContainerID="1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" Oct 8 20:19:05.175613 containerd[1488]: 2024-10-08 20:19:05.141 [INFO][5304] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" iface="eth0" netns="" Oct 8 20:19:05.175613 containerd[1488]: 2024-10-08 20:19:05.141 [INFO][5304] k8s.go 615: Releasing IP address(es) ContainerID="1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" Oct 8 20:19:05.175613 containerd[1488]: 2024-10-08 20:19:05.141 [INFO][5304] utils.go 188: Calico CNI releasing IP address ContainerID="1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" Oct 8 20:19:05.175613 containerd[1488]: 2024-10-08 20:19:05.162 [INFO][5310] ipam_plugin.go 417: Releasing address using handleID ContainerID="1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" HandleID="k8s-pod-network.1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" Workload="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--ws68t-eth0" Oct 8 20:19:05.175613 containerd[1488]: 2024-10-08 20:19:05.162 [INFO][5310] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:19:05.175613 containerd[1488]: 2024-10-08 20:19:05.162 [INFO][5310] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:19:05.175613 containerd[1488]: 2024-10-08 20:19:05.167 [WARNING][5310] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" HandleID="k8s-pod-network.1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" Workload="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--ws68t-eth0" Oct 8 20:19:05.175613 containerd[1488]: 2024-10-08 20:19:05.167 [INFO][5310] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" HandleID="k8s-pod-network.1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" Workload="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--ws68t-eth0" Oct 8 20:19:05.175613 containerd[1488]: 2024-10-08 20:19:05.169 [INFO][5310] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:19:05.175613 containerd[1488]: 2024-10-08 20:19:05.172 [INFO][5304] k8s.go 621: Teardown processing complete. ContainerID="1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950" Oct 8 20:19:05.175613 containerd[1488]: time="2024-10-08T20:19:05.175545329Z" level=info msg="TearDown network for sandbox \"1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950\" successfully" Oct 8 20:19:05.196335 containerd[1488]: time="2024-10-08T20:19:05.196154679Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 20:19:05.204129 containerd[1488]: time="2024-10-08T20:19:05.203535646Z" level=info msg="RemovePodSandbox \"1c540c2a5ec23d378b408fe911500ab505ae7703d480e3626b1b36b359dd7950\" returns successfully" Oct 8 20:19:05.205262 containerd[1488]: time="2024-10-08T20:19:05.204976584Z" level=info msg="StopPodSandbox for \"d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905\"" Oct 8 20:19:05.282541 containerd[1488]: 2024-10-08 20:19:05.249 [WARNING][5329] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--zzw2r-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"95277e3b-aade-4af2-a7da-973db1d8a038", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-f-c5c751ca26", ContainerID:"f2fffa51f397c9ad9ec8eccf793f0671e7e5155d6a75d53725b549726ae2fd29", Pod:"coredns-7db6d8ff4d-zzw2r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif47cae9d9bf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:19:05.282541 containerd[1488]: 2024-10-08 20:19:05.249 [INFO][5329] k8s.go 608: Cleaning up netns ContainerID="d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" Oct 8 20:19:05.282541 containerd[1488]: 2024-10-08 20:19:05.249 [INFO][5329] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" iface="eth0" netns="" Oct 8 20:19:05.282541 containerd[1488]: 2024-10-08 20:19:05.249 [INFO][5329] k8s.go 615: Releasing IP address(es) ContainerID="d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" Oct 8 20:19:05.282541 containerd[1488]: 2024-10-08 20:19:05.249 [INFO][5329] utils.go 188: Calico CNI releasing IP address ContainerID="d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" Oct 8 20:19:05.282541 containerd[1488]: 2024-10-08 20:19:05.270 [INFO][5336] ipam_plugin.go 417: Releasing address using handleID ContainerID="d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" HandleID="k8s-pod-network.d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" Workload="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--zzw2r-eth0" Oct 8 20:19:05.282541 containerd[1488]: 2024-10-08 20:19:05.270 [INFO][5336] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:19:05.282541 containerd[1488]: 2024-10-08 20:19:05.270 [INFO][5336] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:19:05.282541 containerd[1488]: 2024-10-08 20:19:05.276 [WARNING][5336] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" HandleID="k8s-pod-network.d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" Workload="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--zzw2r-eth0" Oct 8 20:19:05.282541 containerd[1488]: 2024-10-08 20:19:05.276 [INFO][5336] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" HandleID="k8s-pod-network.d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" Workload="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--zzw2r-eth0" Oct 8 20:19:05.282541 containerd[1488]: 2024-10-08 20:19:05.278 [INFO][5336] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:19:05.282541 containerd[1488]: 2024-10-08 20:19:05.280 [INFO][5329] k8s.go 621: Teardown processing complete. ContainerID="d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" Oct 8 20:19:05.283771 containerd[1488]: time="2024-10-08T20:19:05.282720584Z" level=info msg="TearDown network for sandbox \"d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905\" successfully" Oct 8 20:19:05.283771 containerd[1488]: time="2024-10-08T20:19:05.282744990Z" level=info msg="StopPodSandbox for \"d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905\" returns successfully" Oct 8 20:19:05.283771 containerd[1488]: time="2024-10-08T20:19:05.283318345Z" level=info msg="RemovePodSandbox for \"d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905\"" Oct 8 20:19:05.283771 containerd[1488]: time="2024-10-08T20:19:05.283347571Z" level=info msg="Forcibly stopping sandbox \"d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905\"" Oct 8 20:19:05.373084 containerd[1488]: 2024-10-08 20:19:05.325 [WARNING][5354] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--zzw2r-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"95277e3b-aade-4af2-a7da-973db1d8a038", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-f-c5c751ca26", ContainerID:"f2fffa51f397c9ad9ec8eccf793f0671e7e5155d6a75d53725b549726ae2fd29", Pod:"coredns-7db6d8ff4d-zzw2r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif47cae9d9bf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:19:05.373084 containerd[1488]: 2024-10-08 20:19:05.325 [INFO][5354] k8s.go 608: Cleaning up netns ContainerID="d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" Oct 8 20:19:05.373084 containerd[1488]: 2024-10-08 20:19:05.325 [INFO][5354] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" iface="eth0" netns="" Oct 8 20:19:05.373084 containerd[1488]: 2024-10-08 20:19:05.325 [INFO][5354] k8s.go 615: Releasing IP address(es) ContainerID="d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" Oct 8 20:19:05.373084 containerd[1488]: 2024-10-08 20:19:05.325 [INFO][5354] utils.go 188: Calico CNI releasing IP address ContainerID="d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" Oct 8 20:19:05.373084 containerd[1488]: 2024-10-08 20:19:05.357 [INFO][5361] ipam_plugin.go 417: Releasing address using handleID ContainerID="d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" HandleID="k8s-pod-network.d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" Workload="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--zzw2r-eth0" Oct 8 20:19:05.373084 containerd[1488]: 2024-10-08 20:19:05.357 [INFO][5361] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:19:05.373084 containerd[1488]: 2024-10-08 20:19:05.358 [INFO][5361] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:19:05.373084 containerd[1488]: 2024-10-08 20:19:05.363 [WARNING][5361] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" HandleID="k8s-pod-network.d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" Workload="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--zzw2r-eth0" Oct 8 20:19:05.373084 containerd[1488]: 2024-10-08 20:19:05.363 [INFO][5361] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" HandleID="k8s-pod-network.d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" Workload="ci--4081--1--0--f--c5c751ca26-k8s-coredns--7db6d8ff4d--zzw2r-eth0" Oct 8 20:19:05.373084 containerd[1488]: 2024-10-08 20:19:05.367 [INFO][5361] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:19:05.373084 containerd[1488]: 2024-10-08 20:19:05.370 [INFO][5354] k8s.go 621: Teardown processing complete. ContainerID="d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905" Oct 8 20:19:05.374166 containerd[1488]: time="2024-10-08T20:19:05.373140291Z" level=info msg="TearDown network for sandbox \"d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905\" successfully" Oct 8 20:19:05.379221 containerd[1488]: time="2024-10-08T20:19:05.379162237Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 20:19:05.379440 containerd[1488]: time="2024-10-08T20:19:05.379233732Z" level=info msg="RemovePodSandbox \"d397f25cbe1b07289753412fb8c573d7b3fe8fa558c45afe0fe602c3742c2905\" returns successfully" Oct 8 20:19:05.380152 containerd[1488]: time="2024-10-08T20:19:05.380121783Z" level=info msg="StopPodSandbox for \"cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d\"" Oct 8 20:19:05.479121 containerd[1488]: 2024-10-08 20:19:05.428 [WARNING][5379] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--f--c5c751ca26-k8s-csi--node--driver--nggfm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4acbfc0b-c482-45b1-9dfd-be4ca5e86826", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 17, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-f-c5c751ca26", ContainerID:"9505a4d96d4ee1e13d9c88f77295d0446e17d143388b074ebbab70d017861722", Pod:"csi-node-driver-nggfm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8dacff08011", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:19:05.479121 containerd[1488]: 2024-10-08 20:19:05.429 [INFO][5379] k8s.go 608: Cleaning up netns ContainerID="cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" Oct 8 20:19:05.479121 containerd[1488]: 2024-10-08 20:19:05.429 [INFO][5379] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" iface="eth0" netns="" Oct 8 20:19:05.479121 containerd[1488]: 2024-10-08 20:19:05.429 [INFO][5379] k8s.go 615: Releasing IP address(es) ContainerID="cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" Oct 8 20:19:05.479121 containerd[1488]: 2024-10-08 20:19:05.429 [INFO][5379] utils.go 188: Calico CNI releasing IP address ContainerID="cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" Oct 8 20:19:05.479121 containerd[1488]: 2024-10-08 20:19:05.467 [INFO][5385] ipam_plugin.go 417: Releasing address using handleID ContainerID="cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" HandleID="k8s-pod-network.cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" Workload="ci--4081--1--0--f--c5c751ca26-k8s-csi--node--driver--nggfm-eth0" Oct 8 20:19:05.479121 containerd[1488]: 2024-10-08 20:19:05.467 [INFO][5385] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:19:05.479121 containerd[1488]: 2024-10-08 20:19:05.467 [INFO][5385] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:19:05.479121 containerd[1488]: 2024-10-08 20:19:05.472 [WARNING][5385] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" HandleID="k8s-pod-network.cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" Workload="ci--4081--1--0--f--c5c751ca26-k8s-csi--node--driver--nggfm-eth0" Oct 8 20:19:05.479121 containerd[1488]: 2024-10-08 20:19:05.472 [INFO][5385] ipam_plugin.go 445: Releasing address using workloadID ContainerID="cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" HandleID="k8s-pod-network.cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" Workload="ci--4081--1--0--f--c5c751ca26-k8s-csi--node--driver--nggfm-eth0" Oct 8 20:19:05.479121 containerd[1488]: 2024-10-08 20:19:05.473 [INFO][5385] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:19:05.479121 containerd[1488]: 2024-10-08 20:19:05.476 [INFO][5379] k8s.go 621: Teardown processing complete. ContainerID="cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" Oct 8 20:19:05.479121 containerd[1488]: time="2024-10-08T20:19:05.479057355Z" level=info msg="TearDown network for sandbox \"cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d\" successfully" Oct 8 20:19:05.479121 containerd[1488]: time="2024-10-08T20:19:05.479081171Z" level=info msg="StopPodSandbox for \"cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d\" returns successfully" Oct 8 20:19:05.480494 containerd[1488]: time="2024-10-08T20:19:05.480134925Z" level=info msg="RemovePodSandbox for \"cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d\"" Oct 8 20:19:05.480494 containerd[1488]: time="2024-10-08T20:19:05.480182204Z" level=info msg="Forcibly stopping sandbox \"cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d\"" Oct 8 20:19:05.552175 containerd[1488]: 2024-10-08 20:19:05.520 [WARNING][5404] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--f--c5c751ca26-k8s-csi--node--driver--nggfm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4acbfc0b-c482-45b1-9dfd-be4ca5e86826", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 17, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-f-c5c751ca26", ContainerID:"9505a4d96d4ee1e13d9c88f77295d0446e17d143388b074ebbab70d017861722", Pod:"csi-node-driver-nggfm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8dacff08011", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:19:05.552175 containerd[1488]: 2024-10-08 20:19:05.521 [INFO][5404] k8s.go 608: Cleaning up netns ContainerID="cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" Oct 8 20:19:05.552175 containerd[1488]: 2024-10-08 20:19:05.521 [INFO][5404] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" iface="eth0" netns="" Oct 8 20:19:05.552175 containerd[1488]: 2024-10-08 20:19:05.521 [INFO][5404] k8s.go 615: Releasing IP address(es) ContainerID="cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" Oct 8 20:19:05.552175 containerd[1488]: 2024-10-08 20:19:05.521 [INFO][5404] utils.go 188: Calico CNI releasing IP address ContainerID="cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" Oct 8 20:19:05.552175 containerd[1488]: 2024-10-08 20:19:05.540 [INFO][5410] ipam_plugin.go 417: Releasing address using handleID ContainerID="cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" HandleID="k8s-pod-network.cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" Workload="ci--4081--1--0--f--c5c751ca26-k8s-csi--node--driver--nggfm-eth0" Oct 8 20:19:05.552175 containerd[1488]: 2024-10-08 20:19:05.540 [INFO][5410] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:19:05.552175 containerd[1488]: 2024-10-08 20:19:05.540 [INFO][5410] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:19:05.552175 containerd[1488]: 2024-10-08 20:19:05.545 [WARNING][5410] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" HandleID="k8s-pod-network.cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" Workload="ci--4081--1--0--f--c5c751ca26-k8s-csi--node--driver--nggfm-eth0" Oct 8 20:19:05.552175 containerd[1488]: 2024-10-08 20:19:05.545 [INFO][5410] ipam_plugin.go 445: Releasing address using workloadID ContainerID="cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" HandleID="k8s-pod-network.cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" Workload="ci--4081--1--0--f--c5c751ca26-k8s-csi--node--driver--nggfm-eth0" Oct 8 20:19:05.552175 containerd[1488]: 2024-10-08 20:19:05.547 [INFO][5410] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:19:05.552175 containerd[1488]: 2024-10-08 20:19:05.549 [INFO][5404] k8s.go 621: Teardown processing complete. ContainerID="cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d" Oct 8 20:19:05.552719 containerd[1488]: time="2024-10-08T20:19:05.552227059Z" level=info msg="TearDown network for sandbox \"cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d\" successfully" Oct 8 20:19:05.555946 containerd[1488]: time="2024-10-08T20:19:05.555841870Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 20:19:05.555946 containerd[1488]: time="2024-10-08T20:19:05.555929757Z" level=info msg="RemovePodSandbox \"cd66540f61303249b0fdb4f05820a7da66a63f19b8d608aa9eb278c63624fc8d\" returns successfully" Oct 8 20:19:05.556552 containerd[1488]: time="2024-10-08T20:19:05.556506528Z" level=info msg="StopPodSandbox for \"ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43\"" Oct 8 20:19:05.682185 containerd[1488]: 2024-10-08 20:19:05.606 [WARNING][5429] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--f--c5c751ca26-k8s-calico--kube--controllers--644c5654d4--wjcv2-eth0", GenerateName:"calico-kube-controllers-644c5654d4-", Namespace:"calico-system", SelfLink:"", UID:"31814079-f47e-4140-b45f-cd80723bed1a", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 17, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"644c5654d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-f-c5c751ca26", ContainerID:"2fb727722f3ae60e493af205982b6d136c1d0e47efe1dab6cd6de7bc0213be20", Pod:"calico-kube-controllers-644c5654d4-wjcv2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2259f71b133", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:19:05.682185 containerd[1488]: 2024-10-08 20:19:05.606 [INFO][5429] k8s.go 608: Cleaning up netns ContainerID="ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" Oct 8 20:19:05.682185 containerd[1488]: 2024-10-08 20:19:05.606 [INFO][5429] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" iface="eth0" netns="" Oct 8 20:19:05.682185 containerd[1488]: 2024-10-08 20:19:05.606 [INFO][5429] k8s.go 615: Releasing IP address(es) ContainerID="ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" Oct 8 20:19:05.682185 containerd[1488]: 2024-10-08 20:19:05.606 [INFO][5429] utils.go 188: Calico CNI releasing IP address ContainerID="ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" Oct 8 20:19:05.682185 containerd[1488]: 2024-10-08 20:19:05.662 [INFO][5435] ipam_plugin.go 417: Releasing address using handleID ContainerID="ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" HandleID="k8s-pod-network.ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" Workload="ci--4081--1--0--f--c5c751ca26-k8s-calico--kube--controllers--644c5654d4--wjcv2-eth0" Oct 8 20:19:05.682185 containerd[1488]: 2024-10-08 20:19:05.663 [INFO][5435] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:19:05.682185 containerd[1488]: 2024-10-08 20:19:05.663 [INFO][5435] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:19:05.682185 containerd[1488]: 2024-10-08 20:19:05.670 [WARNING][5435] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" HandleID="k8s-pod-network.ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" Workload="ci--4081--1--0--f--c5c751ca26-k8s-calico--kube--controllers--644c5654d4--wjcv2-eth0" Oct 8 20:19:05.682185 containerd[1488]: 2024-10-08 20:19:05.670 [INFO][5435] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" HandleID="k8s-pod-network.ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" Workload="ci--4081--1--0--f--c5c751ca26-k8s-calico--kube--controllers--644c5654d4--wjcv2-eth0" Oct 8 20:19:05.682185 containerd[1488]: 2024-10-08 20:19:05.673 [INFO][5435] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:19:05.682185 containerd[1488]: 2024-10-08 20:19:05.676 [INFO][5429] k8s.go 621: Teardown processing complete. ContainerID="ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" Oct 8 20:19:05.682185 containerd[1488]: time="2024-10-08T20:19:05.682009957Z" level=info msg="TearDown network for sandbox \"ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43\" successfully" Oct 8 20:19:05.682185 containerd[1488]: time="2024-10-08T20:19:05.682044682Z" level=info msg="StopPodSandbox for \"ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43\" returns successfully" Oct 8 20:19:05.684669 containerd[1488]: time="2024-10-08T20:19:05.682658924Z" level=info msg="RemovePodSandbox for \"ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43\"" Oct 8 20:19:05.684669 containerd[1488]: time="2024-10-08T20:19:05.682690235Z" level=info msg="Forcibly stopping sandbox \"ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43\"" Oct 8 20:19:05.757424 containerd[1488]: 2024-10-08 20:19:05.722 [WARNING][5453] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--f--c5c751ca26-k8s-calico--kube--controllers--644c5654d4--wjcv2-eth0", GenerateName:"calico-kube-controllers-644c5654d4-", Namespace:"calico-system", SelfLink:"", UID:"31814079-f47e-4140-b45f-cd80723bed1a", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 17, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"644c5654d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-f-c5c751ca26", ContainerID:"2fb727722f3ae60e493af205982b6d136c1d0e47efe1dab6cd6de7bc0213be20", Pod:"calico-kube-controllers-644c5654d4-wjcv2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2259f71b133", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:19:05.757424 containerd[1488]: 2024-10-08 20:19:05.723 [INFO][5453] k8s.go 608: Cleaning up netns ContainerID="ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" Oct 8 20:19:05.757424 containerd[1488]: 2024-10-08 20:19:05.723 [INFO][5453] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" iface="eth0" netns="" Oct 8 20:19:05.757424 containerd[1488]: 2024-10-08 20:19:05.723 [INFO][5453] k8s.go 615: Releasing IP address(es) ContainerID="ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" Oct 8 20:19:05.757424 containerd[1488]: 2024-10-08 20:19:05.723 [INFO][5453] utils.go 188: Calico CNI releasing IP address ContainerID="ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" Oct 8 20:19:05.757424 containerd[1488]: 2024-10-08 20:19:05.743 [INFO][5459] ipam_plugin.go 417: Releasing address using handleID ContainerID="ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" HandleID="k8s-pod-network.ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" Workload="ci--4081--1--0--f--c5c751ca26-k8s-calico--kube--controllers--644c5654d4--wjcv2-eth0" Oct 8 20:19:05.757424 containerd[1488]: 2024-10-08 20:19:05.743 [INFO][5459] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:19:05.757424 containerd[1488]: 2024-10-08 20:19:05.744 [INFO][5459] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:19:05.757424 containerd[1488]: 2024-10-08 20:19:05.751 [WARNING][5459] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" HandleID="k8s-pod-network.ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" Workload="ci--4081--1--0--f--c5c751ca26-k8s-calico--kube--controllers--644c5654d4--wjcv2-eth0" Oct 8 20:19:05.757424 containerd[1488]: 2024-10-08 20:19:05.751 [INFO][5459] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" HandleID="k8s-pod-network.ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" Workload="ci--4081--1--0--f--c5c751ca26-k8s-calico--kube--controllers--644c5654d4--wjcv2-eth0" Oct 8 20:19:05.757424 containerd[1488]: 2024-10-08 20:19:05.752 [INFO][5459] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:19:05.757424 containerd[1488]: 2024-10-08 20:19:05.755 [INFO][5453] k8s.go 621: Teardown processing complete. ContainerID="ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43" Oct 8 20:19:05.757806 containerd[1488]: time="2024-10-08T20:19:05.757471148Z" level=info msg="TearDown network for sandbox \"ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43\" successfully" Oct 8 20:19:05.766432 containerd[1488]: time="2024-10-08T20:19:05.766386269Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 20:19:05.766637 containerd[1488]: time="2024-10-08T20:19:05.766462783Z" level=info msg="RemovePodSandbox \"ee32b2b18bc9d502c61edab82ea6e048ced96d31b03b2c6bb4838b3a8164eb43\" returns successfully" Oct 8 20:19:40.041707 systemd[1]: run-containerd-runc-k8s.io-f327de6a5a2aaa828c09484f4265d1706c877e602fd266c7e2dba21b647e65a9-runc.bP7T2I.mount: Deactivated successfully. Oct 8 20:19:48.764251 systemd[1]: run-containerd-runc-k8s.io-14c5b135b0bdc0a14b79d163fd4b418e9855ca4c902fb1ef5e73c168e8600b1a-runc.jI6LcQ.mount: Deactivated successfully. Oct 8 20:21:48.749557 systemd[1]: run-containerd-runc-k8s.io-14c5b135b0bdc0a14b79d163fd4b418e9855ca4c902fb1ef5e73c168e8600b1a-runc.QmwVMS.mount: Deactivated successfully. Oct 8 20:21:55.413115 systemd[1]: Started sshd@7-188.245.175.191:22-147.75.109.163:37092.service - OpenSSH per-connection server daemon (147.75.109.163:37092). Oct 8 20:21:56.438092 sshd[5918]: Accepted publickey for core from 147.75.109.163 port 37092 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:21:56.441005 sshd[5918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:21:56.446376 systemd-logind[1471]: New session 8 of user core. Oct 8 20:21:56.449996 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 8 20:21:57.484182 sshd[5918]: pam_unix(sshd:session): session closed for user core Oct 8 20:21:57.489394 systemd-logind[1471]: Session 8 logged out. Waiting for processes to exit. Oct 8 20:21:57.490225 systemd[1]: sshd@7-188.245.175.191:22-147.75.109.163:37092.service: Deactivated successfully. Oct 8 20:21:57.493831 systemd[1]: session-8.scope: Deactivated successfully. Oct 8 20:21:57.494983 systemd-logind[1471]: Removed session 8. Oct 8 20:22:02.654187 systemd[1]: Started sshd@8-188.245.175.191:22-147.75.109.163:57932.service - OpenSSH per-connection server daemon (147.75.109.163:57932). 
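The Calico trace above brackets every address release between "Acquired host-wide IPAM lock" and "Released host-wide IPAM lock" lines with millisecond timestamps, so the time the host-wide lock is held during each sandbox teardown can be read straight out of the journal. Below is a minimal Python sketch of that measurement; the file name journal.txt, one journal entry per line, and the exact timestamp layout are assumptions of this example, and the bracketed id from the log line is used only to pair an acquire with its matching release.

import re
from datetime import datetime

# Matches lines like:
# "2024-10-08 20:19:05.744 [INFO][5459] ipam_plugin.go 373: Acquired host-wide IPAM lock."
LOCK = re.compile(
    r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"\[INFO\]\[(\d+)\] ipam_plugin\.go \d+: (Acquired|Released) host-wide IPAM lock"
)

def lock_hold_times(path="journal.txt"):
    acquired = {}   # bracketed request id -> acquire timestamp
    holds = []
    with open(path) as fh:
        for line in fh:
            m = LOCK.search(line)
            if not m:
                continue
            when = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S.%f")
            rid, event = m.group(2), m.group(3)
            if event == "Acquired":
                acquired[rid] = when
            elif rid in acquired:
                holds.append((rid, (when - acquired.pop(rid)).total_seconds()))
    return holds

if __name__ == "__main__":
    for rid, seconds in lock_hold_times():
        print(f"[{rid}] host-wide IPAM lock held for {seconds:.3f}s")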
Oct 8 20:22:03.630938 sshd[5937]: Accepted publickey for core from 147.75.109.163 port 57932 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:22:03.632702 sshd[5937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:22:03.637730 systemd-logind[1471]: New session 9 of user core. Oct 8 20:22:03.642001 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 8 20:22:04.386777 sshd[5937]: pam_unix(sshd:session): session closed for user core Oct 8 20:22:04.390854 systemd[1]: sshd@8-188.245.175.191:22-147.75.109.163:57932.service: Deactivated successfully. Oct 8 20:22:04.393813 systemd[1]: session-9.scope: Deactivated successfully. Oct 8 20:22:04.396768 systemd-logind[1471]: Session 9 logged out. Waiting for processes to exit. Oct 8 20:22:04.398541 systemd-logind[1471]: Removed session 9. Oct 8 20:22:09.562222 systemd[1]: Started sshd@9-188.245.175.191:22-147.75.109.163:49562.service - OpenSSH per-connection server daemon (147.75.109.163:49562). Oct 8 20:22:10.540159 sshd[5953]: Accepted publickey for core from 147.75.109.163 port 49562 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:22:10.542853 sshd[5953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:22:10.548171 systemd-logind[1471]: New session 10 of user core. Oct 8 20:22:10.552006 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 8 20:22:11.278529 sshd[5953]: pam_unix(sshd:session): session closed for user core Oct 8 20:22:11.282428 systemd[1]: sshd@9-188.245.175.191:22-147.75.109.163:49562.service: Deactivated successfully. Oct 8 20:22:11.285731 systemd[1]: session-10.scope: Deactivated successfully. Oct 8 20:22:11.287469 systemd-logind[1471]: Session 10 logged out. Waiting for processes to exit. Oct 8 20:22:11.288831 systemd-logind[1471]: Removed session 10. Oct 8 20:22:16.458261 systemd[1]: Started sshd@10-188.245.175.191:22-147.75.109.163:49572.service - OpenSSH per-connection server daemon (147.75.109.163:49572). Oct 8 20:22:17.451769 sshd[6013]: Accepted publickey for core from 147.75.109.163 port 49572 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:22:17.453815 sshd[6013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:22:17.458931 systemd-logind[1471]: New session 11 of user core. Oct 8 20:22:17.465019 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 8 20:22:18.206365 sshd[6013]: pam_unix(sshd:session): session closed for user core Oct 8 20:22:18.211639 systemd[1]: sshd@10-188.245.175.191:22-147.75.109.163:49572.service: Deactivated successfully. Oct 8 20:22:18.214220 systemd[1]: session-11.scope: Deactivated successfully. Oct 8 20:22:18.215394 systemd-logind[1471]: Session 11 logged out. Waiting for processes to exit. Oct 8 20:22:18.216587 systemd-logind[1471]: Removed session 11. Oct 8 20:22:18.369242 systemd[1]: Started sshd@11-188.245.175.191:22-147.75.109.163:54062.service - OpenSSH per-connection server daemon (147.75.109.163:54062). Oct 8 20:22:19.322079 sshd[6029]: Accepted publickey for core from 147.75.109.163 port 54062 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:22:19.324108 sshd[6029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:22:19.331974 systemd-logind[1471]: New session 12 of user core. Oct 8 20:22:19.338020 systemd[1]: Started session-12.scope - Session 12 of User core. 
Oct 8 20:22:20.086517 sshd[6029]: pam_unix(sshd:session): session closed for user core Oct 8 20:22:20.090763 systemd-logind[1471]: Session 12 logged out. Waiting for processes to exit. Oct 8 20:22:20.091597 systemd[1]: sshd@11-188.245.175.191:22-147.75.109.163:54062.service: Deactivated successfully. Oct 8 20:22:20.094290 systemd[1]: session-12.scope: Deactivated successfully. Oct 8 20:22:20.095829 systemd-logind[1471]: Removed session 12. Oct 8 20:22:20.263135 systemd[1]: Started sshd@12-188.245.175.191:22-147.75.109.163:54074.service - OpenSSH per-connection server daemon (147.75.109.163:54074). Oct 8 20:22:21.252282 sshd[6066]: Accepted publickey for core from 147.75.109.163 port 54074 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:22:21.253998 sshd[6066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:22:21.258989 systemd-logind[1471]: New session 13 of user core. Oct 8 20:22:21.265013 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 8 20:22:22.009262 sshd[6066]: pam_unix(sshd:session): session closed for user core Oct 8 20:22:22.014283 systemd[1]: sshd@12-188.245.175.191:22-147.75.109.163:54074.service: Deactivated successfully. Oct 8 20:22:22.016569 systemd[1]: session-13.scope: Deactivated successfully. Oct 8 20:22:22.017412 systemd-logind[1471]: Session 13 logged out. Waiting for processes to exit. Oct 8 20:22:22.018464 systemd-logind[1471]: Removed session 13. Oct 8 20:22:27.179646 systemd[1]: Started sshd@13-188.245.175.191:22-147.75.109.163:54078.service - OpenSSH per-connection server daemon (147.75.109.163:54078). Oct 8 20:22:28.143221 sshd[6086]: Accepted publickey for core from 147.75.109.163 port 54078 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:22:28.144823 sshd[6086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:22:28.148997 systemd-logind[1471]: New session 14 of user core. Oct 8 20:22:28.156995 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 8 20:22:28.870468 sshd[6086]: pam_unix(sshd:session): session closed for user core Oct 8 20:22:28.873971 systemd[1]: sshd@13-188.245.175.191:22-147.75.109.163:54078.service: Deactivated successfully. Oct 8 20:22:28.877025 systemd[1]: session-14.scope: Deactivated successfully. Oct 8 20:22:28.878556 systemd-logind[1471]: Session 14 logged out. Waiting for processes to exit. Oct 8 20:22:28.879802 systemd-logind[1471]: Removed session 14. Oct 8 20:22:29.035154 systemd[1]: Started sshd@14-188.245.175.191:22-147.75.109.163:45828.service - OpenSSH per-connection server daemon (147.75.109.163:45828). Oct 8 20:22:29.999821 sshd[6098]: Accepted publickey for core from 147.75.109.163 port 45828 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:22:30.001608 sshd[6098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:22:30.006806 systemd-logind[1471]: New session 15 of user core. Oct 8 20:22:30.014888 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 8 20:22:30.893628 sshd[6098]: pam_unix(sshd:session): session closed for user core Oct 8 20:22:30.901317 systemd[1]: sshd@14-188.245.175.191:22-147.75.109.163:45828.service: Deactivated successfully. Oct 8 20:22:30.903397 systemd[1]: session-15.scope: Deactivated successfully. Oct 8 20:22:30.904120 systemd-logind[1471]: Session 15 logged out. Waiting for processes to exit. Oct 8 20:22:30.905141 systemd-logind[1471]: Removed session 15. 
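Each SSH connection in this stretch of the journal follows the same shape: sshd accepts the public key, systemd-logind opens "New session N of user core", and a "Removed session N" line closes it shortly afterwards. Session lifetimes can therefore be paired up mechanically. A small sketch follows, assuming the capture is saved as journal.txt with one entry per line and the year-less journald timestamp prefix seen above.

import re
from datetime import datetime

# Matches the systemd-logind open/close lines visible in the capture, e.g.
# "Oct 8 20:22:03.637730 systemd-logind[1471]: New session 9 of user core."
LINE = re.compile(
    r"(?P<ts>\w{3}\s+\d+\s+\d{2}:\d{2}:\d{2}\.\d+)\s+systemd-logind\[\d+\]:\s+"
    r"(?:New session (?P<new>\d+) of user (?P<user>\S+)\.|Removed session (?P<gone>\d+)\.)"
)

def parse_ts(raw):
    # The short journald prefix carries no year; strptime defaults to 1900,
    # which is harmless when only durations inside one capture are computed.
    return datetime.strptime(" ".join(raw.split()), "%b %d %H:%M:%S.%f")

def session_durations(path="journal.txt"):
    opened, durations = {}, {}
    with open(path) as fh:
        for line in fh:
            m = LINE.search(line)
            if not m:
                continue
            when = parse_ts(m.group("ts"))
            if m.group("new"):
                opened[m.group("new")] = (m.group("user"), when)
            elif m.group("gone") in opened:
                user, start = opened.pop(m.group("gone"))
                durations[m.group("gone")] = (user, (when - start).total_seconds())
    return durations

if __name__ == "__main__":
    for sid, (user, secs) in sorted(session_durations().items(), key=lambda kv: int(kv[0])):
        print(f"session {sid} ({user}): open for {secs:.1f}s")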
Oct 8 20:22:31.061163 systemd[1]: Started sshd@15-188.245.175.191:22-147.75.109.163:45836.service - OpenSSH per-connection server daemon (147.75.109.163:45836). Oct 8 20:22:32.047013 sshd[6114]: Accepted publickey for core from 147.75.109.163 port 45836 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:22:32.049718 sshd[6114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:22:32.054584 systemd-logind[1471]: New session 16 of user core. Oct 8 20:22:32.058041 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 8 20:22:34.530972 sshd[6114]: pam_unix(sshd:session): session closed for user core Oct 8 20:22:34.541588 systemd[1]: sshd@15-188.245.175.191:22-147.75.109.163:45836.service: Deactivated successfully. Oct 8 20:22:34.544585 systemd[1]: session-16.scope: Deactivated successfully. Oct 8 20:22:34.545622 systemd-logind[1471]: Session 16 logged out. Waiting for processes to exit. Oct 8 20:22:34.547420 systemd-logind[1471]: Removed session 16. Oct 8 20:22:34.701429 systemd[1]: Started sshd@16-188.245.175.191:22-147.75.109.163:45846.service - OpenSSH per-connection server daemon (147.75.109.163:45846). Oct 8 20:22:35.702176 sshd[6132]: Accepted publickey for core from 147.75.109.163 port 45846 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:22:35.703699 sshd[6132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:22:35.710718 systemd-logind[1471]: New session 17 of user core. Oct 8 20:22:35.716997 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 8 20:22:36.815536 sshd[6132]: pam_unix(sshd:session): session closed for user core Oct 8 20:22:36.820189 systemd[1]: sshd@16-188.245.175.191:22-147.75.109.163:45846.service: Deactivated successfully. Oct 8 20:22:36.822946 systemd[1]: session-17.scope: Deactivated successfully. Oct 8 20:22:36.823744 systemd-logind[1471]: Session 17 logged out. Waiting for processes to exit. Oct 8 20:22:36.825376 systemd-logind[1471]: Removed session 17. Oct 8 20:22:36.990172 systemd[1]: Started sshd@17-188.245.175.191:22-147.75.109.163:45856.service - OpenSSH per-connection server daemon (147.75.109.163:45856). Oct 8 20:22:37.996667 sshd[6146]: Accepted publickey for core from 147.75.109.163 port 45856 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:22:37.998622 sshd[6146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:22:38.004059 systemd-logind[1471]: New session 18 of user core. Oct 8 20:22:38.009019 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 8 20:22:38.739142 sshd[6146]: pam_unix(sshd:session): session closed for user core Oct 8 20:22:38.745830 systemd[1]: sshd@17-188.245.175.191:22-147.75.109.163:45856.service: Deactivated successfully. Oct 8 20:22:38.748249 systemd[1]: session-18.scope: Deactivated successfully. Oct 8 20:22:38.749047 systemd-logind[1471]: Session 18 logged out. Waiting for processes to exit. Oct 8 20:22:38.750677 systemd-logind[1471]: Removed session 18. Oct 8 20:22:43.911136 systemd[1]: Started sshd@18-188.245.175.191:22-147.75.109.163:47428.service - OpenSSH per-connection server daemon (147.75.109.163:47428). 
Oct 8 20:22:44.863021 sshd[6186]: Accepted publickey for core from 147.75.109.163 port 47428 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:22:44.864943 sshd[6186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:22:44.870215 systemd-logind[1471]: New session 19 of user core. Oct 8 20:22:44.876036 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 8 20:22:45.588622 sshd[6186]: pam_unix(sshd:session): session closed for user core Oct 8 20:22:45.593340 systemd-logind[1471]: Session 19 logged out. Waiting for processes to exit. Oct 8 20:22:45.594509 systemd[1]: sshd@18-188.245.175.191:22-147.75.109.163:47428.service: Deactivated successfully. Oct 8 20:22:45.596723 systemd[1]: session-19.scope: Deactivated successfully. Oct 8 20:22:45.598444 systemd-logind[1471]: Removed session 19. Oct 8 20:22:48.747555 systemd[1]: run-containerd-runc-k8s.io-14c5b135b0bdc0a14b79d163fd4b418e9855ca4c902fb1ef5e73c168e8600b1a-runc.2yUir4.mount: Deactivated successfully. Oct 8 20:22:50.761389 systemd[1]: Started sshd@19-188.245.175.191:22-147.75.109.163:42298.service - OpenSSH per-connection server daemon (147.75.109.163:42298). Oct 8 20:22:51.755641 sshd[6222]: Accepted publickey for core from 147.75.109.163 port 42298 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:22:51.759385 sshd[6222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:22:51.766238 systemd-logind[1471]: New session 20 of user core. Oct 8 20:22:51.772036 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 8 20:22:52.546081 sshd[6222]: pam_unix(sshd:session): session closed for user core Oct 8 20:22:52.550236 systemd-logind[1471]: Session 20 logged out. Waiting for processes to exit. Oct 8 20:22:52.551237 systemd[1]: sshd@19-188.245.175.191:22-147.75.109.163:42298.service: Deactivated successfully. Oct 8 20:22:52.553258 systemd[1]: session-20.scope: Deactivated successfully. Oct 8 20:22:52.554150 systemd-logind[1471]: Removed session 20. Oct 8 20:23:09.485095 systemd[1]: cri-containerd-2975760b3e1a9bf00e2b2ef2da290e39ce07ae02fd6f142f6e197dfb1353a20c.scope: Deactivated successfully. Oct 8 20:23:09.485750 systemd[1]: cri-containerd-2975760b3e1a9bf00e2b2ef2da290e39ce07ae02fd6f142f6e197dfb1353a20c.scope: Consumed 4.842s CPU time, 22.9M memory peak, 0B memory swap peak. Oct 8 20:23:09.623592 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2975760b3e1a9bf00e2b2ef2da290e39ce07ae02fd6f142f6e197dfb1353a20c-rootfs.mount: Deactivated successfully. Oct 8 20:23:09.635359 containerd[1488]: time="2024-10-08T20:23:09.617060438Z" level=info msg="shim disconnected" id=2975760b3e1a9bf00e2b2ef2da290e39ce07ae02fd6f142f6e197dfb1353a20c namespace=k8s.io Oct 8 20:23:09.641551 containerd[1488]: time="2024-10-08T20:23:09.641499744Z" level=warning msg="cleaning up after shim disconnected" id=2975760b3e1a9bf00e2b2ef2da290e39ce07ae02fd6f142f6e197dfb1353a20c namespace=k8s.io Oct 8 20:23:09.641551 containerd[1488]: time="2024-10-08T20:23:09.641530345Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:23:09.697987 systemd[1]: cri-containerd-19b5fb2331763f7fb587ef5f1e922f71c471edd73c291575f3db834812075cf2.scope: Deactivated successfully. Oct 8 20:23:09.698744 systemd[1]: cri-containerd-19b5fb2331763f7fb587ef5f1e922f71c471edd73c291575f3db834812075cf2.scope: Consumed 1.407s CPU time, 16.3M memory peak, 0B memory swap peak. 
Oct 8 20:23:09.703923 kubelet[3000]: E1008 20:23:09.703593 3000 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:44310->10.0.0.2:2379: read: connection timed out" Oct 8 20:23:09.727320 containerd[1488]: time="2024-10-08T20:23:09.725628434Z" level=info msg="shim disconnected" id=19b5fb2331763f7fb587ef5f1e922f71c471edd73c291575f3db834812075cf2 namespace=k8s.io Oct 8 20:23:09.726973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19b5fb2331763f7fb587ef5f1e922f71c471edd73c291575f3db834812075cf2-rootfs.mount: Deactivated successfully. Oct 8 20:23:09.728164 containerd[1488]: time="2024-10-08T20:23:09.728131838Z" level=warning msg="cleaning up after shim disconnected" id=19b5fb2331763f7fb587ef5f1e922f71c471edd73c291575f3db834812075cf2 namespace=k8s.io Oct 8 20:23:09.728164 containerd[1488]: time="2024-10-08T20:23:09.728156707Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:23:09.891497 systemd[1]: cri-containerd-072068d38e5e4f6f2df425462c1637ca6c3f2529e02456bdbc8712a7d3413853.scope: Deactivated successfully. Oct 8 20:23:09.893343 systemd[1]: cri-containerd-072068d38e5e4f6f2df425462c1637ca6c3f2529e02456bdbc8712a7d3413853.scope: Consumed 5.774s CPU time. Oct 8 20:23:09.918794 containerd[1488]: time="2024-10-08T20:23:09.917438949Z" level=info msg="shim disconnected" id=072068d38e5e4f6f2df425462c1637ca6c3f2529e02456bdbc8712a7d3413853 namespace=k8s.io Oct 8 20:23:09.918794 containerd[1488]: time="2024-10-08T20:23:09.917489768Z" level=warning msg="cleaning up after shim disconnected" id=072068d38e5e4f6f2df425462c1637ca6c3f2529e02456bdbc8712a7d3413853 namespace=k8s.io Oct 8 20:23:09.918794 containerd[1488]: time="2024-10-08T20:23:09.917499828Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:23:09.920141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-072068d38e5e4f6f2df425462c1637ca6c3f2529e02456bdbc8712a7d3413853-rootfs.mount: Deactivated successfully. Oct 8 20:23:09.952176 kubelet[3000]: I1008 20:23:09.952149 3000 scope.go:117] "RemoveContainer" containerID="2975760b3e1a9bf00e2b2ef2da290e39ce07ae02fd6f142f6e197dfb1353a20c" Oct 8 20:23:09.952614 kubelet[3000]: I1008 20:23:09.952588 3000 scope.go:117] "RemoveContainer" containerID="19b5fb2331763f7fb587ef5f1e922f71c471edd73c291575f3db834812075cf2" Oct 8 20:23:09.955085 kubelet[3000]: I1008 20:23:09.954820 3000 scope.go:117] "RemoveContainer" containerID="072068d38e5e4f6f2df425462c1637ca6c3f2529e02456bdbc8712a7d3413853" Oct 8 20:23:09.968937 containerd[1488]: time="2024-10-08T20:23:09.968904689Z" level=info msg="CreateContainer within sandbox \"2cc99bd0a10339e5782132bea6b4364399c79726e04c50da35cbe25411a7e795\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Oct 8 20:23:09.969980 containerd[1488]: time="2024-10-08T20:23:09.969951617Z" level=info msg="CreateContainer within sandbox \"7d6e4379a3550ca5aa2e52de65726ca3a87e309de90923fde3516b361eece4e0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Oct 8 20:23:10.002120 containerd[1488]: time="2024-10-08T20:23:10.002078253Z" level=info msg="CreateContainer within sandbox \"9143de4a143bffe3ec6e2b493bcf0a32ad08afdc38101517bac4b1c106fb9768\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Oct 8 20:23:10.025111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2999996884.mount: Deactivated successfully. 
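The burst above is three views of the same incident: systemd reports each cri-containerd-*.scope exiting with its accumulated CPU and memory, containerd logs "shim disconnected" for the same container ids, and kubelet issues "RemoveContainer" for them before recreating kube-scheduler, kube-controller-manager and tigera-operator. Joining the three by container id makes the incident easier to read. A sketch follows, assuming the capture is saved as journal.txt with one entry per line; the 64-hex-character container id is the join key.

import re
from collections import defaultdict

# Patterns taken from the lines above: systemd scope accounting, containerd
# shim exit, and kubelet container removal, all keyed by the 64-char id.
PATTERNS = {
    "scope_consumed": re.compile(r"cri-containerd-([0-9a-f]{64})\.scope: (Consumed .+)"),
    "shim_disconnected": re.compile(r'msg="shim disconnected" id=([0-9a-f]{64})'),
    "kubelet_removed": re.compile(r'"RemoveContainer" containerID="([0-9a-f]{64})"'),
}

def collect(path="journal.txt"):
    seen = defaultdict(dict)   # container id -> {event kind: detail}
    with open(path) as fh:
        for line in fh:
            for kind, pat in PATTERNS.items():
                m = pat.search(line)
                if not m:
                    continue
                # Keep the "Consumed ..." text where available, otherwise
                # just record that the event was observed.
                seen[m.group(1)][kind] = m.group(2) if m.re.groups > 1 else True
    return seen

if __name__ == "__main__":
    for cid, kinds in collect().items():
        print(cid[:12], ", ".join(sorted(kinds)))
        if isinstance(kinds.get("scope_consumed"), str):
            print("    ", kinds["scope_consumed"])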
Oct 8 20:23:10.048538 containerd[1488]: time="2024-10-08T20:23:10.048489402Z" level=info msg="CreateContainer within sandbox \"9143de4a143bffe3ec6e2b493bcf0a32ad08afdc38101517bac4b1c106fb9768\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"4dd36ea66d2a11f78fe760e7bdc18f18774607336ec67c1c6793573b6a35579f\"" Oct 8 20:23:10.052360 containerd[1488]: time="2024-10-08T20:23:10.052172864Z" level=info msg="StartContainer for \"4dd36ea66d2a11f78fe760e7bdc18f18774607336ec67c1c6793573b6a35579f\"" Oct 8 20:23:10.054233 containerd[1488]: time="2024-10-08T20:23:10.054183648Z" level=info msg="CreateContainer within sandbox \"7d6e4379a3550ca5aa2e52de65726ca3a87e309de90923fde3516b361eece4e0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"efc10e3821b3bd8f68ca2278e22878e05dea33d68ab6d19d6999e0e9e039c86e\"" Oct 8 20:23:10.055201 containerd[1488]: time="2024-10-08T20:23:10.055181190Z" level=info msg="StartContainer for \"efc10e3821b3bd8f68ca2278e22878e05dea33d68ab6d19d6999e0e9e039c86e\"" Oct 8 20:23:10.059227 containerd[1488]: time="2024-10-08T20:23:10.059195463Z" level=info msg="CreateContainer within sandbox \"2cc99bd0a10339e5782132bea6b4364399c79726e04c50da35cbe25411a7e795\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"3ab088cfd43ed07a9ee629e5505ef0cd3dc5bc8d8b49b2c8e76da4b6a2ce852e\"" Oct 8 20:23:10.059593 containerd[1488]: time="2024-10-08T20:23:10.059574328Z" level=info msg="StartContainer for \"3ab088cfd43ed07a9ee629e5505ef0cd3dc5bc8d8b49b2c8e76da4b6a2ce852e\"" Oct 8 20:23:10.103220 systemd[1]: Started cri-containerd-4dd36ea66d2a11f78fe760e7bdc18f18774607336ec67c1c6793573b6a35579f.scope - libcontainer container 4dd36ea66d2a11f78fe760e7bdc18f18774607336ec67c1c6793573b6a35579f. Oct 8 20:23:10.114881 systemd[1]: Started cri-containerd-3ab088cfd43ed07a9ee629e5505ef0cd3dc5bc8d8b49b2c8e76da4b6a2ce852e.scope - libcontainer container 3ab088cfd43ed07a9ee629e5505ef0cd3dc5bc8d8b49b2c8e76da4b6a2ce852e. Oct 8 20:23:10.123413 systemd[1]: Started cri-containerd-efc10e3821b3bd8f68ca2278e22878e05dea33d68ab6d19d6999e0e9e039c86e.scope - libcontainer container efc10e3821b3bd8f68ca2278e22878e05dea33d68ab6d19d6999e0e9e039c86e. 
Oct 8 20:23:10.193884 containerd[1488]: time="2024-10-08T20:23:10.193276520Z" level=info msg="StartContainer for \"efc10e3821b3bd8f68ca2278e22878e05dea33d68ab6d19d6999e0e9e039c86e\" returns successfully" Oct 8 20:23:10.199820 containerd[1488]: time="2024-10-08T20:23:10.199782513Z" level=info msg="StartContainer for \"4dd36ea66d2a11f78fe760e7bdc18f18774607336ec67c1c6793573b6a35579f\" returns successfully" Oct 8 20:23:10.203044 containerd[1488]: time="2024-10-08T20:23:10.202563782Z" level=info msg="StartContainer for \"3ab088cfd43ed07a9ee629e5505ef0cd3dc5bc8d8b49b2c8e76da4b6a2ce852e\" returns successfully" Oct 8 20:23:13.341666 kubelet[3000]: E1008 20:23:13.338431 3000 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:44090->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-1-0-f-c5c751ca26.17fc93e8474c40bb kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-1-0-f-c5c751ca26,UID:b629d9214fd6b7d3cb4395ef8ce3b55f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-1-0-f-c5c751ca26,},FirstTimestamp:2024-10-08 20:23:02.841303227 +0000 UTC m=+358.012887625,LastTimestamp:2024-10-08 20:23:02.841303227 +0000 UTC m=+358.012887625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-1-0-f-c5c751ca26,}"
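Both kubelet errors at the end of the capture, the failed Lease update and the rejected Event, bottom out in the same transport failure: a TCP read from 10.0.0.3 toward 10.0.0.2:2379 (the usual etcd client port) timing out. Counting those timeouts per destination shows at a glance which backend is unreachable; a short sketch, again assuming the capture is available as journal.txt.

import re
from collections import Counter

# Matches the transport error embedded in the kubelet messages above, e.g.
# "read tcp 10.0.0.3:44310->10.0.0.2:2379: read: connection timed out"
TIMEOUT = re.compile(
    r"read tcp (?P<src>[\d.]+:\d+)->(?P<dst>[\d.]+:\d+): read: connection timed out"
)

def timed_out_backends(path="journal.txt"):
    counts = Counter()
    with open(path) as fh:
        for line in fh:
            for m in TIMEOUT.finditer(line):
                # Count per destination; the source port changes per connection.
                counts[m.group("dst")] += 1
    return counts

if __name__ == "__main__":
    for dst, n in timed_out_backends().most_common():
        print(f"{dst}: {n} timed-out read(s)")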