Dec 16 09:36:53.041632 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 16 09:36:53.041668 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 16 09:36:53.041677 kernel: BIOS-provided physical RAM map:
Dec 16 09:36:53.041683 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 16 09:36:53.041689 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 16 09:36:53.041695 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 16 09:36:53.041702 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Dec 16 09:36:53.041708 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Dec 16 09:36:53.041717 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 16 09:36:53.041723 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 16 09:36:53.041741 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 16 09:36:53.041748 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 16 09:36:53.041754 kernel: NX (Execute Disable) protection: active
Dec 16 09:36:53.041760 kernel: APIC: Static calls initialized
Dec 16 09:36:53.041770 kernel: SMBIOS 2.8 present.
Dec 16 09:36:53.041777 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Dec 16 09:36:53.041784 kernel: Hypervisor detected: KVM
Dec 16 09:36:53.041791 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 16 09:36:53.041797 kernel: kvm-clock: using sched offset of 2957122197 cycles
Dec 16 09:36:53.041804 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 09:36:53.041812 kernel: tsc: Detected 2495.310 MHz processor
Dec 16 09:36:53.041819 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 09:36:53.041826 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 09:36:53.041835 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Dec 16 09:36:53.041842 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 16 09:36:53.041849 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 09:36:53.041856 kernel: Using GB pages for direct mapping
Dec 16 09:36:53.041862 kernel: ACPI: Early table checksum verification disabled
Dec 16 09:36:53.041869 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS )
Dec 16 09:36:53.041876 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 09:36:53.041883 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 09:36:53.041890 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 09:36:53.041899 kernel: ACPI: FACS 0x000000007CFE0000 000040
Dec 16 09:36:53.041906 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 09:36:53.041913 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 09:36:53.041919 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 09:36:53.041926 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 09:36:53.041933 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Dec 16 09:36:53.041940 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Dec 16 09:36:53.041949 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Dec 16 09:36:53.041964 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Dec 16 09:36:53.041973 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Dec 16 09:36:53.041982 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Dec 16 09:36:53.041992 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Dec 16 09:36:53.041999 kernel: No NUMA configuration found
Dec 16 09:36:53.042006 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Dec 16 09:36:53.042015 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Dec 16 09:36:53.042022 kernel: Zone ranges:
Dec 16 09:36:53.042029 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 09:36:53.042036 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Dec 16 09:36:53.042043 kernel: Normal empty
Dec 16 09:36:53.042050 kernel: Movable zone start for each node
Dec 16 09:36:53.042057 kernel: Early memory node ranges
Dec 16 09:36:53.042064 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 16 09:36:53.042071 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Dec 16 09:36:53.042078 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Dec 16 09:36:53.042088 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 09:36:53.042095 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 16 09:36:53.042102 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Dec 16 09:36:53.042109 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 16 09:36:53.042116 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 16 09:36:53.042123 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 16 09:36:53.042130 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 16 09:36:53.042137 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 16 09:36:53.042144 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 09:36:53.042153 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 16 09:36:53.042160 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 16 09:36:53.042167 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 09:36:53.042174 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 16 09:36:53.042182 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 16 09:36:53.042189 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 16 09:36:53.042196 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 16 09:36:53.042203 kernel: Booting paravirtualized kernel on KVM
Dec 16 09:36:53.042210 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 09:36:53.042220 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 16 09:36:53.042227 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 16 09:36:53.042234 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 16 09:36:53.042241 kernel: pcpu-alloc: [0] 0 1
Dec 16 09:36:53.042248 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 16 09:36:53.042257 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 16 09:36:53.042264 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 16 09:36:53.042271 kernel: random: crng init done
Dec 16 09:36:53.042281 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 09:36:53.042288 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 16 09:36:53.042295 kernel: Fallback order for Node 0: 0
Dec 16 09:36:53.042302 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Dec 16 09:36:53.042309 kernel: Policy zone: DMA32
Dec 16 09:36:53.042316 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 09:36:53.042324 kernel: Memory: 1922056K/2047464K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved)
Dec 16 09:36:53.042331 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 09:36:53.042338 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 16 09:36:53.042348 kernel: ftrace: allocated 149 pages with 4 groups
Dec 16 09:36:53.042355 kernel: Dynamic Preempt: voluntary
Dec 16 09:36:53.042362 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 09:36:53.042370 kernel: rcu: RCU event tracing is enabled.
Dec 16 09:36:53.042377 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 09:36:53.042385 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 09:36:53.042392 kernel: Rude variant of Tasks RCU enabled.
Dec 16 09:36:53.042399 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 09:36:53.042406 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 09:36:53.042416 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 09:36:53.042423 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 16 09:36:53.042430 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 09:36:53.042437 kernel: Console: colour VGA+ 80x25
Dec 16 09:36:53.042444 kernel: printk: console [tty0] enabled
Dec 16 09:36:53.042451 kernel: printk: console [ttyS0] enabled
Dec 16 09:36:53.042459 kernel: ACPI: Core revision 20230628
Dec 16 09:36:53.042466 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 16 09:36:53.042473 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 09:36:53.042480 kernel: x2apic enabled
Dec 16 09:36:53.042490 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 09:36:53.042497 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 16 09:36:53.042504 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 16 09:36:53.042511 kernel: Calibrating delay loop (skipped) preset value.. 4990.62 BogoMIPS (lpj=2495310)
Dec 16 09:36:53.042518 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 16 09:36:53.042525 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 16 09:36:53.042533 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 16 09:36:53.042540 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 09:36:53.042556 kernel: Spectre V2 : Mitigation: Retpolines
Dec 16 09:36:53.042564 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 16 09:36:53.042571 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 16 09:36:53.042581 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 16 09:36:53.042588 kernel: RETBleed: Mitigation: untrained return thunk
Dec 16 09:36:53.042596 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 16 09:36:53.042603 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 16 09:36:53.042611 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 16 09:36:53.042619 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 16 09:36:53.042627 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 16 09:36:53.042634 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 09:36:53.042655 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 09:36:53.042662 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 09:36:53.042670 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 09:36:53.042678 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 16 09:36:53.042685 kernel: Freeing SMP alternatives memory: 32K
Dec 16 09:36:53.042695 kernel: pid_max: default: 32768 minimum: 301
Dec 16 09:36:53.042702 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 16 09:36:53.042710 kernel: landlock: Up and running.
Dec 16 09:36:53.042717 kernel: SELinux: Initializing.
Dec 16 09:36:53.042736 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 16 09:36:53.042743 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 16 09:36:53.042762 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 16 09:36:53.042770 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 09:36:53.042778 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 09:36:53.042788 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 09:36:53.042796 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 16 09:36:53.042803 kernel: ... version: 0
Dec 16 09:36:53.042811 kernel: ... bit width: 48
Dec 16 09:36:53.042818 kernel: ... generic registers: 6
Dec 16 09:36:53.042826 kernel: ... value mask: 0000ffffffffffff
Dec 16 09:36:53.042833 kernel: ... max period: 00007fffffffffff
Dec 16 09:36:53.042841 kernel: ... fixed-purpose events: 0
Dec 16 09:36:53.042848 kernel: ... event mask: 000000000000003f
Dec 16 09:36:53.042858 kernel: signal: max sigframe size: 1776
Dec 16 09:36:53.042865 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 09:36:53.042873 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 09:36:53.042880 kernel: smp: Bringing up secondary CPUs ...
Dec 16 09:36:53.042888 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 09:36:53.042895 kernel: .... node #0, CPUs: #1
Dec 16 09:36:53.042903 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 09:36:53.042910 kernel: smpboot: Max logical packages: 1
Dec 16 09:36:53.042918 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS)
Dec 16 09:36:53.042928 kernel: devtmpfs: initialized
Dec 16 09:36:53.042935 kernel: x86/mm: Memory block size: 128MB
Dec 16 09:36:53.042943 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 09:36:53.042950 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 09:36:53.042958 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 09:36:53.042965 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 09:36:53.042973 kernel: audit: initializing netlink subsys (disabled)
Dec 16 09:36:53.042980 kernel: audit: type=2000 audit(1734341811.478:1): state=initialized audit_enabled=0 res=1
Dec 16 09:36:53.042988 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 09:36:53.042997 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 09:36:53.043005 kernel: cpuidle: using governor menu
Dec 16 09:36:53.043012 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 09:36:53.043020 kernel: dca service started, version 1.12.1
Dec 16 09:36:53.043027 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 16 09:36:53.043035 kernel: PCI: Using configuration type 1 for base access
Dec 16 09:36:53.043042 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 16 09:36:53.043050 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 09:36:53.043058 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 09:36:53.043068 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 09:36:53.043075 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 09:36:53.043082 kernel: ACPI: Added _OSI(Module Device)
Dec 16 09:36:53.043090 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 09:36:53.043097 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 16 09:36:53.043105 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 09:36:53.043112 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 09:36:53.043120 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 16 09:36:53.043127 kernel: ACPI: Interpreter enabled
Dec 16 09:36:53.043137 kernel: ACPI: PM: (supports S0 S5)
Dec 16 09:36:53.043145 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 09:36:53.043152 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 09:36:53.043160 kernel: PCI: Using E820 reservations for host bridge windows
Dec 16 09:36:53.043167 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 16 09:36:53.043175 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 09:36:53.043353 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 09:36:53.043480 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 16 09:36:53.043605 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 16 09:36:53.043616 kernel: PCI host bridge to bus 0000:00
Dec 16 09:36:53.043784 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 16 09:36:53.043910 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 16 09:36:53.044020 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 16 09:36:53.044129 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Dec 16 09:36:53.044237 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 16 09:36:53.044354 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Dec 16 09:36:53.044462 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 09:36:53.044598 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 16 09:36:53.046816 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Dec 16 09:36:53.046982 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Dec 16 09:36:53.047113 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Dec 16 09:36:53.047243 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Dec 16 09:36:53.047377 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Dec 16 09:36:53.047497 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 16 09:36:53.047632 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 16 09:36:53.047789 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Dec 16 09:36:53.047917 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 16 09:36:53.048037 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Dec 16 09:36:53.048171 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 16 09:36:53.048297 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Dec 16 09:36:53.048424 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 16 09:36:53.048545 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Dec 16 09:36:53.048687 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 16 09:36:53.049898 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Dec 16 09:36:53.050039 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 16 09:36:53.050159 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Dec 16 09:36:53.050285 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 16 09:36:53.050404 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Dec 16 09:36:53.050530 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 16 09:36:53.050660 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Dec 16 09:36:53.051832 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Dec 16 09:36:53.051970 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Dec 16 09:36:53.052104 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 16 09:36:53.052234 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 16 09:36:53.052367 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 16 09:36:53.052486 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Dec 16 09:36:53.052619 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Dec 16 09:36:53.054793 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 16 09:36:53.054928 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 16 09:36:53.055063 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Dec 16 09:36:53.055189 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Dec 16 09:36:53.055314 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 16 09:36:53.055452 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Dec 16 09:36:53.055579 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 16 09:36:53.055744 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 16 09:36:53.055867 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 16 09:36:53.056001 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 16 09:36:53.056127 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Dec 16 09:36:53.056248 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 16 09:36:53.056372 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 16 09:36:53.056492 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 16 09:36:53.056626 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Dec 16 09:36:53.056788 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Dec 16 09:36:53.056916 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Dec 16 09:36:53.057037 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 16 09:36:53.057185 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 16 09:36:53.057315 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 16 09:36:53.057455 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Dec 16 09:36:53.057582 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 16 09:36:53.057718 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 16 09:36:53.057902 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 16 09:36:53.058021 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 16 09:36:53.058153 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 16 09:36:53.058277 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Dec 16 09:36:53.058408 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 16 09:36:53.058526 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 16 09:36:53.058657 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 16 09:36:53.058920 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Dec 16 09:36:53.059047 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Dec 16 09:36:53.059170 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Dec 16 09:36:53.059294 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 16 09:36:53.059416 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Dec 16 09:36:53.059533 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 16 09:36:53.059543 kernel: acpiphp: Slot [0] registered
Dec 16 09:36:53.059688 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Dec 16 09:36:53.059846 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Dec 16 09:36:53.059973 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Dec 16 09:36:53.060097 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Dec 16 09:36:53.060217 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 16 09:36:53.060342 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 16 09:36:53.060461 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 16 09:36:53.060470 kernel: acpiphp: Slot [0-2] registered
Dec 16 09:36:53.060589 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 16 09:36:53.061035 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Dec 16 09:36:53.061169 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 16 09:36:53.061180 kernel: acpiphp: Slot [0-3] registered
Dec 16 09:36:53.061298 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 16 09:36:53.061421 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 16 09:36:53.061543 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 16 09:36:53.061557 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 16 09:36:53.061565 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 16 09:36:53.061573 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 16 09:36:53.061581 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 16 09:36:53.061589 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 16 09:36:53.061596 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 16 09:36:53.061604 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 16 09:36:53.061616 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 16 09:36:53.061624 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 16 09:36:53.061632 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 16 09:36:53.061653 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 16 09:36:53.061661 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 16 09:36:53.061668 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 16 09:36:53.061676 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 16 09:36:53.061684 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 16 09:36:53.061691 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 16 09:36:53.061702 kernel: iommu: Default domain type: Translated
Dec 16 09:36:53.061709 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 09:36:53.061717 kernel: PCI: Using ACPI for IRQ routing
Dec 16 09:36:53.061765 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 16 09:36:53.061774 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 16 09:36:53.061782 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Dec 16 09:36:53.061911 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 16 09:36:53.062029 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 16 09:36:53.062150 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 16 09:36:53.062160 kernel: vgaarb: loaded
Dec 16 09:36:53.062168 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 16 09:36:53.062176 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 16 09:36:53.062184 kernel: clocksource: Switched to clocksource kvm-clock
Dec 16 09:36:53.062192 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 09:36:53.062200 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 09:36:53.062207 kernel: pnp: PnP ACPI init
Dec 16 09:36:53.062334 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 16 09:36:53.062349 kernel: pnp: PnP ACPI: found 5 devices
Dec 16 09:36:53.062357 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 09:36:53.062365 kernel: NET: Registered PF_INET protocol family
Dec 16 09:36:53.063816 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 09:36:53.063826 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 16 09:36:53.063834 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 09:36:53.063842 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 16 09:36:53.063850 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 16 09:36:53.063861 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 16 09:36:53.063869 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 16 09:36:53.063877 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 16 09:36:53.063884 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 09:36:53.063892 kernel: NET: Registered PF_XDP protocol family
Dec 16 09:36:53.064027 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 16 09:36:53.064147 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 16 09:36:53.064265 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 16 09:36:53.064388 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Dec 16 09:36:53.064506 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Dec 16 09:36:53.064624 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Dec 16 09:36:53.065817 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 16 09:36:53.065953 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 16 09:36:53.066073 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 16 09:36:53.066193 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 16 09:36:53.066313 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 16 09:36:53.066436 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 16 09:36:53.066558 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 16 09:36:53.066699 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 16 09:36:53.066850 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 16 09:36:53.066973 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 16 09:36:53.067093 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 16 09:36:53.067212 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 16 09:36:53.067335 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 16 09:36:53.067473 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 16 09:36:53.067592 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 16 09:36:53.068161 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 16 09:36:53.068298 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Dec 16 09:36:53.068417 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 16 09:36:53.068536 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 16 09:36:53.068667 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Dec 16 09:36:53.068863 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 16 09:36:53.068983 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 16 09:36:53.069106 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 16 09:36:53.069223 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Dec 16 09:36:53.069340 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Dec 16 09:36:53.069457 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 16 09:36:53.069574 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 16 09:36:53.069705 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Dec 16 09:36:53.070876 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 16 09:36:53.071006 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 16 09:36:53.071126 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 16 09:36:53.071237 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 16 09:36:53.071350 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 16 09:36:53.071479 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Dec 16 09:36:53.071623 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 16 09:36:53.072227 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Dec 16 09:36:53.072374 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 16 09:36:53.072586 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 16 09:36:53.074802 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 16 09:36:53.074966 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 16 09:36:53.075114 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 16 09:36:53.075253 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 16 09:36:53.075438 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 16 09:36:53.075673 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 16 09:36:53.075879 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 16 09:36:53.076025 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 16 09:36:53.076183 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Dec 16 09:36:53.076321 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 16 09:36:53.076528 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Dec 16 09:36:53.076691 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Dec 16 09:36:53.078046 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 16 09:36:53.078225 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Dec 16 09:36:53.078404 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Dec 16 09:36:53.078572 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 16 09:36:53.079173 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Dec 16 09:36:53.079348 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 16 09:36:53.079517 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 16 09:36:53.079538 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 16 09:36:53.079552 kernel: PCI: CLS 0 bytes, default 64
Dec 16 09:36:53.079570 kernel: Initialise system trusted keyrings
Dec 16 09:36:53.079582 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 16 09:36:53.079594 kernel: Key type asymmetric registered
Dec 16 09:36:53.079606 kernel: Asymmetric key parser 'x509' registered
Dec 16 09:36:53.079617 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 16 09:36:53.079629 kernel: io scheduler mq-deadline registered
Dec 16 09:36:53.079657 kernel: io scheduler kyber registered
Dec 16 09:36:53.079669 kernel: io scheduler bfq registered
Dec 16 09:36:53.079874 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Dec 16 09:36:53.080057 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Dec 16 09:36:53.081956 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Dec 16 09:36:53.082154 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Dec 16 09:36:53.082335 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Dec 16 09:36:53.082513 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Dec 16 09:36:53.082710 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Dec 16 09:36:53.086006 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Dec 16 09:36:53.086181 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Dec 16 09:36:53.086361 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Dec 16 09:36:53.086538 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Dec 16 09:36:53.086746 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Dec 16 09:36:53.086928 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Dec 16 09:36:53.087114 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Dec 16 09:36:53.087297 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Dec 16 09:36:53.087475 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Dec 16 09:36:53.087496 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 16 09:36:53.087688 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Dec 16 09:36:53.093073 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Dec 16 09:36:53.093110 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 16 09:36:53.093124 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Dec 16 09:36:53.093137 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 09:36:53.093149 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 09:36:53.093162 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 16 09:36:53.093174 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 16 09:36:53.093186 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 16 09:36:53.093361 kernel: rtc_cmos 00:03: RTC can wake from S4
Dec 16 09:36:53.093381 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 16 09:36:53.093539 kernel: rtc_cmos 00:03: registered as rtc0
Dec 16 09:36:53.093720 kernel: rtc_cmos 00:03: setting system clock to 2024-12-16T09:36:52 UTC (1734341812)
Dec 16 09:36:53.093907 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 16 09:36:53.093926 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 16 09:36:53.093939 kernel: NET: Registered PF_INET6 protocol family
Dec 16 09:36:53.093950 kernel: Segment Routing with IPv6
Dec 16 09:36:53.093966 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 09:36:53.093978 kernel: NET: Registered PF_PACKET protocol family
Dec 16 09:36:53.093990 kernel: Key type dns_resolver registered
Dec 16 09:36:53.094001 kernel: IPI shorthand broadcast: enabled
Dec 16 09:36:53.094012 kernel: sched_clock: Marking stable (1399016013, 149332065)->(1566015870, -17667792)
Dec 16 09:36:53.094022 kernel: registered taskstats version 1
Dec 16 09:36:53.094033 kernel: Loading compiled-in X.509 certificates
Dec 16 09:36:53.094043 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 16 09:36:53.094055 kernel: Key type .fscrypt registered
Dec 16 09:36:53.094070 kernel: Key type fscrypt-provisioning registered
Dec 16 09:36:53.094082 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 09:36:53.094092 kernel: ima: Allocated hash algorithm: sha1
Dec 16 09:36:53.094104 kernel: ima: No architecture policies found
Dec 16 09:36:53.094116 kernel: clk: Disabling unused clocks
Dec 16 09:36:53.094128 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 16 09:36:53.094140 kernel: Write protecting the kernel read-only data: 36864k
Dec 16 09:36:53.094152 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 16 09:36:53.094161 kernel: Run /init as init process
Dec 16 09:36:53.094173 kernel: with arguments:
Dec 16 09:36:53.094181 kernel: /init
Dec 16 09:36:53.094189 kernel: with environment:
Dec 16 09:36:53.094197 kernel: HOME=/
Dec 16 09:36:53.094205 kernel: TERM=linux
Dec 16 09:36:53.094213 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 16 09:36:53.094223 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 16 09:36:53.094234 systemd[1]: Detected virtualization kvm.
Dec 16 09:36:53.094245 systemd[1]: Detected architecture x86-64.
Dec 16 09:36:53.094254 systemd[1]: Running in initrd.
Dec 16 09:36:53.094262 systemd[1]: No hostname configured, using default hostname.
Dec 16 09:36:53.094270 systemd[1]: Hostname set to .
Dec 16 09:36:53.094279 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 09:36:53.094287 systemd[1]: Queued start job for default target initrd.target.
Dec 16 09:36:53.094296 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 09:36:53.094305 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 09:36:53.094317 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 09:36:53.094325 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 09:36:53.094337 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 09:36:53.094349 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 09:36:53.094364 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 16 09:36:53.094377 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 16 09:36:53.094389 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 09:36:53.094402 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 09:36:53.094410 systemd[1]: Reached target paths.target - Path Units.
Dec 16 09:36:53.094419 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 09:36:53.094428 systemd[1]: Reached target swap.target - Swaps.
Dec 16 09:36:53.094439 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 09:36:53.094451 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 09:36:53.094463 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 09:36:53.094476 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 09:36:53.094492 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 16 09:36:53.094504 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 09:36:53.094516 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 09:36:53.094528 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 09:36:53.094541 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 09:36:53.094553 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 09:36:53.094566 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 09:36:53.094578 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 09:36:53.094591 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 09:36:53.094608 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 09:36:53.094621 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 09:36:53.094680 systemd-journald[188]: Collecting audit messages is disabled.
Dec 16 09:36:53.094705 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 09:36:53.094721 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 09:36:53.094783 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 09:36:53.094797 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 09:36:53.094811 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 09:36:53.094829 systemd-journald[188]: Journal started
Dec 16 09:36:53.094855 systemd-journald[188]: Runtime Journal (/run/log/journal/1eaedca57c5a441d88bf598d8c6210dd) is 4.8M, max 38.4M, 33.6M free.
Dec 16 09:36:53.074191 systemd-modules-load[189]: Inserted module 'overlay'
Dec 16 09:36:53.141444 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 09:36:53.141473 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 09:36:53.141487 kernel: Bridge firewalling registered
Dec 16 09:36:53.112120 systemd-modules-load[189]: Inserted module 'br_netfilter'
Dec 16 09:36:53.143205 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 09:36:53.144098 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 09:36:53.145187 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 09:36:53.154040 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 09:36:53.161027 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 09:36:53.172969 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 09:36:53.175911 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 09:36:53.178107 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 09:36:53.188911 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 09:36:53.195241 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 09:36:53.202998 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 09:36:53.204852 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 09:36:53.213999 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 09:36:53.216895 dracut-cmdline[220]: dracut-dracut-053
Dec 16 09:36:53.219071 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 16 09:36:53.251487 systemd-resolved[224]: Positive Trust Anchors:
Dec 16 09:36:53.251501 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 09:36:53.251533 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 09:36:53.258813 systemd-resolved[224]: Defaulting to hostname 'linux'.
Dec 16 09:36:53.259983 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 09:36:53.260821 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 09:36:53.301824 kernel: SCSI subsystem initialized
Dec 16 09:36:53.311858 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 09:36:53.323781 kernel: iscsi: registered transport (tcp)
Dec 16 09:36:53.346856 kernel: iscsi: registered transport (qla4xxx)
Dec 16 09:36:53.346974 kernel: QLogic iSCSI HBA Driver
Dec 16 09:36:53.407753 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 09:36:53.415015 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 09:36:53.455879 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 09:36:53.455978 kernel: device-mapper: uevent: version 1.0.3
Dec 16 09:36:53.458035 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 16 09:36:53.517772 kernel: raid6: avx2x4 gen() 23730 MB/s
Dec 16 09:36:53.534777 kernel: raid6: avx2x2 gen() 30684 MB/s
Dec 16 09:36:53.553047 kernel: raid6: avx2x1 gen() 23295 MB/s
Dec 16 09:36:53.553130 kernel: raid6: using algorithm avx2x2 gen() 30684 MB/s
Dec 16 09:36:53.571793 kernel: raid6: .... xor() 19271 MB/s, rmw enabled
Dec 16 09:36:53.571864 kernel: raid6: using avx2x2 recovery algorithm
Dec 16 09:36:53.593791 kernel: xor: automatically using best checksumming function avx
Dec 16 09:36:53.761821 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 09:36:53.780851 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 09:36:53.789049 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 09:36:53.801414 systemd-udevd[405]: Using default interface naming scheme 'v255'.
Dec 16 09:36:53.806401 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 09:36:53.816084 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 09:36:53.839694 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation
Dec 16 09:36:53.875594 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 09:36:53.880925 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 09:36:53.958481 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 09:36:53.971301 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 16 09:36:54.014807 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 16 09:36:54.017743 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 09:36:54.020165 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 09:36:54.021400 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 09:36:54.031402 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 16 09:36:54.051324 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 09:36:54.069762 kernel: cryptd: max_cpu_qlen set to 1000
Dec 16 09:36:54.074575 kernel: scsi host0: Virtio SCSI HBA
Dec 16 09:36:54.100794 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Dec 16 09:36:54.107453 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 16 09:36:54.107515 kernel: AES CTR mode by8 optimization enabled
Dec 16 09:36:54.130108 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 09:36:54.136157 kernel: ACPI: bus type USB registered
Dec 16 09:36:54.130269 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 09:36:54.132987 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 09:36:54.133466 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 09:36:54.133687 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 09:36:54.134389 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 09:36:54.143765 kernel: libata version 3.00 loaded.
Dec 16 09:36:54.146053 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 09:36:54.148302 kernel: usbcore: registered new interface driver usbfs
Dec 16 09:36:54.153755 kernel: usbcore: registered new interface driver hub
Dec 16 09:36:54.159119 kernel: usbcore: registered new device driver usb
Dec 16 09:36:54.217768 kernel: ahci 0000:00:1f.2: version 3.0
Dec 16 09:36:54.253282 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 16 09:36:54.253298 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 16 09:36:54.253526 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 16 09:36:54.253691 kernel: scsi host1: ahci
Dec 16 09:36:54.253855 kernel: sd 0:0:0:0: Power-on or device reset occurred
Dec 16 09:36:54.254038 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Dec 16 09:36:54.254234 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 16 09:36:54.254428 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Dec 16 09:36:54.254681 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 16 09:36:54.254934 kernel: scsi host2: ahci
Dec 16 09:36:54.255119 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Dec 16 09:36:54.280980 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Dec 16 09:36:54.281157 kernel: scsi host3: ahci
Dec 16 09:36:54.281312 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 16 09:36:54.281325 kernel: GPT:17805311 != 80003071
Dec 16 09:36:54.281335 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 16 09:36:54.281345 kernel: GPT:17805311 != 80003071
Dec 16 09:36:54.281355 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 16 09:36:54.281365 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 16 09:36:54.281375 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 16 09:36:54.281541 kernel: scsi host4: ahci
Dec 16 09:36:54.281705 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Dec 16 09:36:54.281872 kernel: scsi host5: ahci
Dec 16 09:36:54.282021 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Dec 16 09:36:54.282163 kernel: scsi host6: ahci
Dec 16 09:36:54.282304 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Dec 16 09:36:54.282445 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 46
Dec 16 09:36:54.282460 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Dec 16 09:36:54.282599 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 46
Dec 16 09:36:54.282623 kernel: hub 1-0:1.0: USB hub found
Dec 16 09:36:54.283201 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 46
Dec 16 09:36:54.283214 kernel: hub 1-0:1.0: 4 ports detected
Dec 16 09:36:54.283363 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 46
Dec 16 09:36:54.283375 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Dec 16 09:36:54.283537 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 46
Dec 16 09:36:54.283585 kernel: hub 2-0:1.0: USB hub found
Dec 16 09:36:54.283811 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 46
Dec 16 09:36:54.283824 kernel: hub 2-0:1.0: 4 ports detected
Dec 16 09:36:54.279154 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 09:36:54.290798 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 09:36:54.308552 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (451)
Dec 16 09:36:54.308653 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 09:36:54.316753 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (464)
Dec 16 09:36:54.321319 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Dec 16 09:36:54.327718 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Dec 16 09:36:54.337714 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Dec 16 09:36:54.338447 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Dec 16 09:36:54.346467 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 16 09:36:54.352938 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 16 09:36:54.365818 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 16 09:36:54.366429 disk-uuid[571]: Primary Header is updated.
Dec 16 09:36:54.366429 disk-uuid[571]: Secondary Entries is updated.
Dec 16 09:36:54.366429 disk-uuid[571]: Secondary Header is updated.
Dec 16 09:36:54.503842 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Dec 16 09:36:54.562758 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 16 09:36:54.562854 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 16 09:36:54.562872 kernel: ata1.00: applying bridge limits
Dec 16 09:36:54.562888 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Dec 16 09:36:54.565062 kernel: ata1.00: configured for UDMA/100
Dec 16 09:36:54.565747 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 16 09:36:54.567110 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 16 09:36:54.569873 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 16 09:36:54.570748 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 16 09:36:54.573757 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 16 09:36:54.612054 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 16 09:36:54.629225 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 16 09:36:54.629265 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Dec 16 09:36:54.645767 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 16 09:36:54.651208 kernel: usbcore: registered new interface driver usbhid
Dec 16 09:36:54.651238 kernel: usbhid: USB HID core driver
Dec 16 09:36:54.656894 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Dec 16 09:36:54.656954 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Dec 16 09:36:55.381843 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 16 09:36:55.384084 disk-uuid[572]: The operation has completed successfully.
Dec 16 09:36:55.440849 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 16 09:36:55.441042 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 16 09:36:55.475079 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 16 09:36:55.479140 sh[595]: Success
Dec 16 09:36:55.498767 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Dec 16 09:36:55.560389 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 09:36:55.568924 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 16 09:36:55.573975 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 16 09:36:55.606212 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 16 09:36:55.606302 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 16 09:36:55.607843 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 16 09:36:55.610685 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 09:36:55.610749 kernel: BTRFS info (device dm-0): using free space tree
Dec 16 09:36:55.620768 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 16 09:36:55.622719 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 16 09:36:55.624050 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 09:36:55.630004 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 16 09:36:55.632919 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 16 09:36:55.652085 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 16 09:36:55.652156 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 09:36:55.652172 kernel: BTRFS info (device sda6): using free space tree
Dec 16 09:36:55.657851 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 16 09:36:55.657934 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 16 09:36:55.670885 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 16 09:36:55.674881 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 16 09:36:55.680793 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 16 09:36:55.687065 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 16 09:36:55.789146 ignition[695]: Ignition 2.19.0
Dec 16 09:36:55.789162 ignition[695]: Stage: fetch-offline
Dec 16 09:36:55.789207 ignition[695]: no configs at "/usr/lib/ignition/base.d"
Dec 16 09:36:55.789224 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 16 09:36:55.789364 ignition[695]: parsed url from cmdline: ""
Dec 16 09:36:55.789370 ignition[695]: no config URL provided
Dec 16 09:36:55.789377 ignition[695]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 09:36:55.789389 ignition[695]: no config at "/usr/lib/ignition/user.ign"
Dec 16 09:36:55.789401 ignition[695]: failed to fetch config: resource requires networking
Dec 16 09:36:55.789708 ignition[695]: Ignition finished successfully
Dec 16 09:36:55.795232 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 09:36:55.800498 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 09:36:55.807008 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 09:36:55.842148 systemd-networkd[782]: lo: Link UP
Dec 16 09:36:55.842161 systemd-networkd[782]: lo: Gained carrier
Dec 16 09:36:55.845383 systemd-networkd[782]: Enumeration completed
Dec 16 09:36:55.845668 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 09:36:55.846501 systemd[1]: Reached target network.target - Network.
Dec 16 09:36:55.847079 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 09:36:55.847085 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 09:36:55.850762 systemd-networkd[782]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 09:36:55.850768 systemd-networkd[782]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 09:36:55.852915 systemd-networkd[782]: eth0: Link UP
Dec 16 09:36:55.852920 systemd-networkd[782]: eth0: Gained carrier
Dec 16 09:36:55.852929 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 09:36:55.853946 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 16 09:36:55.855254 systemd-networkd[782]: eth1: Link UP
Dec 16 09:36:55.855259 systemd-networkd[782]: eth1: Gained carrier
Dec 16 09:36:55.855270 systemd-networkd[782]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 09:36:55.870390 ignition[784]: Ignition 2.19.0
Dec 16 09:36:55.870403 ignition[784]: Stage: fetch
Dec 16 09:36:55.870627 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Dec 16 09:36:55.870641 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 16 09:36:55.871621 ignition[784]: parsed url from cmdline: ""
Dec 16 09:36:55.871626 ignition[784]: no config URL provided
Dec 16 09:36:55.871634 ignition[784]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 09:36:55.871646 ignition[784]: no config at "/usr/lib/ignition/user.ign"
Dec 16 09:36:55.871669 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Dec 16 09:36:55.871895 ignition[784]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 16 09:36:55.879799 systemd-networkd[782]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 16 09:36:55.919870 systemd-networkd[782]: eth0: DHCPv4 address 5.75.242.71/32, gateway 172.31.1.1 acquired from 172.31.1.1
Dec 16 09:36:56.072121 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Dec 16 09:36:56.075123 ignition[784]: GET result: OK
Dec 16 09:36:56.075206 ignition[784]: parsing config with SHA512: e339bbb4be9b5b4b38d0f0c2241cd2be15525f3db48746d6de037c546572914e1288ce3b939c0d74ac7391bc76b43e26fbe9ca9c5640d8213e391acc4b749352
Dec 16 09:36:56.079527 unknown[784]: fetched base config from "system"
Dec 16 09:36:56.079543 unknown[784]: fetched base config from "system"
Dec 16 09:36:56.079980 ignition[784]: fetch: fetch complete
Dec 16 09:36:56.079575 unknown[784]: fetched user config from "hetzner"
Dec 16 09:36:56.079987 ignition[784]: fetch: fetch passed
Dec 16 09:36:56.084049 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 16 09:36:56.080036 ignition[784]: Ignition finished successfully
Dec 16 09:36:56.091176 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 09:36:56.105412 ignition[792]: Ignition 2.19.0
Dec 16 09:36:56.105427 ignition[792]: Stage: kargs
Dec 16 09:36:56.105654 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Dec 16 09:36:56.105667 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 16 09:36:56.107691 ignition[792]: kargs: kargs passed
Dec 16 09:36:56.107820 ignition[792]: Ignition finished successfully
Dec 16 09:36:56.110612 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 09:36:56.118983 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 09:36:56.137354 ignition[798]: Ignition 2.19.0
Dec 16 09:36:56.138480 ignition[798]: Stage: disks
Dec 16 09:36:56.138886 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Dec 16 09:36:56.138907 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 16 09:36:56.140453 ignition[798]: disks: disks passed
Dec 16 09:36:56.140526 ignition[798]: Ignition finished successfully
Dec 16 09:36:56.143077 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 09:36:56.144191 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 09:36:56.145128 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 09:36:56.146787 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 09:36:56.148442 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 09:36:56.149757 systemd[1]: Reached target basic.target - Basic System.
Dec 16 09:36:56.155973 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 09:36:56.176401 systemd-fsck[807]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 16 09:36:56.180007 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 09:36:56.185885 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 09:36:56.300763 kernel: EXT4-fs (sda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 16 09:36:56.301489 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 09:36:56.303061 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 09:36:56.315912 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 09:36:56.319834 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 09:36:56.322923 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 16 09:36:56.326521 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 09:36:56.327996 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 09:36:56.337785 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (815)
Dec 16 09:36:56.346916 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 16 09:36:56.346991 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 09:36:56.347006 kernel: BTRFS info (device sda6): using free space tree
Dec 16 09:36:56.348525 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 09:36:56.359128 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 09:36:56.366599 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 16 09:36:56.366796 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 16 09:36:56.372912 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 09:36:56.418621 coreos-metadata[817]: Dec 16 09:36:56.418 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Dec 16 09:36:56.420501 coreos-metadata[817]: Dec 16 09:36:56.420 INFO Fetch successful
Dec 16 09:36:56.420501 coreos-metadata[817]: Dec 16 09:36:56.420 INFO wrote hostname ci-4081-2-1-4-1bd0c0376a to /sysroot/etc/hostname
Dec 16 09:36:56.423337 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Dec 16 09:36:56.427289 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 16 09:36:56.431530 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory
Dec 16 09:36:56.438045 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory
Dec 16 09:36:56.443990 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 16 09:36:56.566083 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 09:36:56.573051 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 09:36:56.577947 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 09:36:56.590758 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 16 09:36:56.603565 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 09:36:56.618997 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 09:36:56.626371 ignition[932]: INFO : Ignition 2.19.0
Dec 16 09:36:56.626371 ignition[932]: INFO : Stage: mount
Dec 16 09:36:56.626371 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 09:36:56.626371 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 16 09:36:56.626371 ignition[932]: INFO : mount: mount passed
Dec 16 09:36:56.626371 ignition[932]: INFO : Ignition finished successfully
Dec 16 09:36:56.628060 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 09:36:56.632996 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 09:36:56.652175 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 09:36:56.662749 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (943)
Dec 16 09:36:56.665126 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 16 09:36:56.665149 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 09:36:56.666874 kernel: BTRFS info (device sda6): using free space tree
Dec 16 09:36:56.672762 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 16 09:36:56.672802 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 16 09:36:56.676578 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 09:36:56.700515 ignition[960]: INFO : Ignition 2.19.0
Dec 16 09:36:56.700515 ignition[960]: INFO : Stage: files
Dec 16 09:36:56.702113 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 09:36:56.702113 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 16 09:36:56.702113 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 09:36:56.704633 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 09:36:56.704633 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 09:36:56.706962 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 09:36:56.707792 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 09:36:56.707792 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 09:36:56.707362 unknown[960]: wrote ssh authorized keys file for user: core
Dec 16 09:36:56.710108 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 16 09:36:56.710108 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 16 09:36:56.769303 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 09:36:57.159998 systemd-networkd[782]: eth0: Gained IPv6LL
Dec 16 09:36:57.167859 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 16 09:36:57.169430 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 09:36:57.169430 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 09:36:57.169430 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 09:36:57.169430 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 09:36:57.169430 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 09:36:57.169430 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 09:36:57.169430 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 09:36:57.177866 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 09:36:57.177866 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 09:36:57.177866 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 09:36:57.177866 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 16 09:36:57.177866 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 16 09:36:57.177866 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 16 09:36:57.177866 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Dec 16 09:36:57.736171 systemd-networkd[782]: eth1: Gained IPv6LL
Dec 16 09:36:57.748529 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 16 09:36:58.079090 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 16 09:36:58.079090 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 16 09:36:58.084213 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 09:36:58.084213 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 09:36:58.084213 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 16 09:36:58.084213 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Dec 16 09:36:58.084213 ignition[960]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 16 09:36:58.084213 ignition[960]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 16 09:36:58.084213 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 16 09:36:58.084213 ignition[960]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 09:36:58.084213 ignition[960]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 09:36:58.084213 ignition[960]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 09:36:58.084213 ignition[960]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 09:36:58.084213 ignition[960]: INFO : files: files passed
Dec 16 09:36:58.084213 ignition[960]: INFO : Ignition finished successfully
Dec 16 09:36:58.083254 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 09:36:58.092911 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 09:36:58.096888 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 09:36:58.103791 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 09:36:58.105900 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 09:36:58.119133 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 09:36:58.121480 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 09:36:58.122369 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 09:36:58.125549 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 09:36:58.126422 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 09:36:58.132992 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 09:36:58.179402 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 09:36:58.179652 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 09:36:58.182053 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 09:36:58.183053 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 09:36:58.184543 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 09:36:58.191106 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 09:36:58.209794 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 09:36:58.214882 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 09:36:58.241326 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 09:36:58.243943 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 09:36:58.245297 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 09:36:58.247409 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 09:36:58.247553 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 09:36:58.249656 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 09:36:58.250581 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 09:36:58.252347 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 09:36:58.253940 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 09:36:58.255583 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 09:36:58.257550 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 09:36:58.259313 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 09:36:58.261276 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 09:36:58.262981 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 09:36:58.264854 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 09:36:58.266547 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 09:36:58.266707 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 09:36:58.268718 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 09:36:58.269708 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 09:36:58.271257 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 09:36:58.271387 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 09:36:58.273142 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 09:36:58.273269 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 09:36:58.275964 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 09:36:58.276121 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 09:36:58.277107 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 09:36:58.277270 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 09:36:58.278489 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 16 09:36:58.278638 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 16 09:36:58.291313 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 09:36:58.291875 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 09:36:58.292037 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 09:36:58.295929 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 09:36:58.298042 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 09:36:58.298219 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 09:36:58.302546 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 09:36:58.302665 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 09:36:58.308929 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 09:36:58.309034 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 09:36:58.318413 ignition[1013]: INFO : Ignition 2.19.0
Dec 16 09:36:58.318413 ignition[1013]: INFO : Stage: umount
Dec 16 09:36:58.329115 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 09:36:58.329115 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 16 09:36:58.329115 ignition[1013]: INFO : umount: umount passed
Dec 16 09:36:58.329115 ignition[1013]: INFO : Ignition finished successfully
Dec 16 09:36:58.326297 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 09:36:58.326431 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 09:36:58.327607 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 09:36:58.327688 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 09:36:58.332006 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 09:36:58.332077 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 09:36:58.335140 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 16 09:36:58.335268 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 16 09:36:58.337405 systemd[1]: Stopped target network.target - Network.
Dec 16 09:36:58.343768 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 09:36:58.343832 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 09:36:58.346213 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 09:36:58.346842 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 09:36:58.350863 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 09:36:58.351722 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 09:36:58.356846 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 09:36:58.357461 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 09:36:58.357552 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 09:36:58.358152 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 09:36:58.358204 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 09:36:58.359388 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 09:36:58.359450 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 09:36:58.359945 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 09:36:58.359992 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 09:36:58.360633 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 09:36:58.366872 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 09:36:58.370778 systemd-networkd[782]: eth1: DHCPv6 lease lost
Dec 16 09:36:58.372606 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 09:36:58.375838 systemd-networkd[782]: eth0: DHCPv6 lease lost
Dec 16 09:36:58.379100 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 09:36:58.379264 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 09:36:58.383121 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 09:36:58.384316 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 09:36:58.385668 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 09:36:58.385891 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 09:36:58.388051 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 09:36:58.388115 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 09:36:58.389337 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 09:36:58.389388 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 09:36:58.394865 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 09:36:58.395378 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 09:36:58.395436 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 09:36:58.396625 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 09:36:58.396675 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 09:36:58.397990 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 09:36:58.398039 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 09:36:58.400554 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 09:36:58.400603 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 09:36:58.401770 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 09:36:58.414164 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 09:36:58.414992 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 09:36:58.423894 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 09:36:58.424150 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 09:36:58.425565 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 09:36:58.425625 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 09:36:58.426457 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 09:36:58.426515 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 09:36:58.427633 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 09:36:58.427695 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 09:36:58.429359 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 09:36:58.429415 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 09:36:58.430590 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 09:36:58.430647 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 09:36:58.437990 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 09:36:58.438671 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 09:36:58.438758 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 09:36:58.440216 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 09:36:58.440316 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 09:36:58.448518 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 09:36:58.448696 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 09:36:58.450768 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 09:36:58.457000 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 09:36:58.468006 systemd[1]: Switching root.
Dec 16 09:36:58.501137 systemd-journald[188]: Journal stopped
Dec 16 09:36:59.887070 systemd-journald[188]: Received SIGTERM from PID 1 (systemd).
Dec 16 09:36:59.887162 kernel: SELinux: policy capability network_peer_controls=1
Dec 16 09:36:59.887180 kernel: SELinux: policy capability open_perms=1
Dec 16 09:36:59.887194 kernel: SELinux: policy capability extended_socket_class=1
Dec 16 09:36:59.887215 kernel: SELinux: policy capability always_check_network=0
Dec 16 09:36:59.887229 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 16 09:36:59.887250 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 16 09:36:59.887263 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 16 09:36:59.887277 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 16 09:36:59.887301 kernel: audit: type=1403 audit(1734341818.728:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 16 09:36:59.887317 systemd[1]: Successfully loaded SELinux policy in 77.906ms.
Dec 16 09:36:59.887350 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.125ms.
Dec 16 09:36:59.887367 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 16 09:36:59.887382 systemd[1]: Detected virtualization kvm.
Dec 16 09:36:59.887397 systemd[1]: Detected architecture x86-64.
Dec 16 09:36:59.887412 systemd[1]: Detected first boot.
Dec 16 09:36:59.887427 systemd[1]: Hostname set to .
Dec 16 09:36:59.887446 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 09:36:59.887474 zram_generator::config[1055]: No configuration found.
Dec 16 09:36:59.887496 systemd[1]: Populated /etc with preset unit settings.
Dec 16 09:36:59.887512 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 16 09:36:59.887527 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 16 09:36:59.887543 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 16 09:36:59.887558 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 16 09:36:59.887574 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 16 09:36:59.887593 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 16 09:36:59.887608 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 16 09:36:59.887623 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 16 09:36:59.887640 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 16 09:36:59.887655 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 16 09:36:59.887671 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 16 09:36:59.887686 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 09:36:59.887701 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 09:36:59.887718 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 16 09:36:59.888761 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 16 09:36:59.888778 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 16 09:36:59.888791 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 09:36:59.888803 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 16 09:36:59.888814 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 09:36:59.888826 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 16 09:36:59.888838 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 16 09:36:59.888852 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 16 09:36:59.888868 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 16 09:36:59.888880 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 09:36:59.888891 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 09:36:59.888903 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 09:36:59.888915 systemd[1]: Reached target swap.target - Swaps.
Dec 16 09:36:59.888942 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 16 09:36:59.888955 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 16 09:36:59.888969 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 09:36:59.888981 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 09:36:59.888993 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 09:36:59.889005 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 16 09:36:59.889016 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 16 09:36:59.889028 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 16 09:36:59.889040 systemd[1]: Mounting media.mount - External Media Directory...
Dec 16 09:36:59.889052 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 09:36:59.889065 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 16 09:36:59.889080 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 16 09:36:59.889096 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 16 09:36:59.889110 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 16 09:36:59.889122 systemd[1]: Reached target machines.target - Containers.
Dec 16 09:36:59.889134 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 16 09:36:59.889146 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 09:36:59.889160 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 09:36:59.889172 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 16 09:36:59.889184 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 09:36:59.889195 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 09:36:59.889209 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 09:36:59.889221 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 16 09:36:59.889233 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 09:36:59.889245 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 16 09:36:59.889259 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 16 09:36:59.889271 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 16 09:36:59.889283 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 16 09:36:59.889295 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 16 09:36:59.889307 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 09:36:59.889318 kernel: loop: module loaded
Dec 16 09:36:59.889330 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 09:36:59.889342 kernel: fuse: init (API version 7.39)
Dec 16 09:36:59.889354 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 09:36:59.889368 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 16 09:36:59.889381 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 09:36:59.889393 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 16 09:36:59.889404 systemd[1]: Stopped verity-setup.service.
Dec 16 09:36:59.889417 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 09:36:59.889429 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 16 09:36:59.889441 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 16 09:36:59.889490 systemd-journald[1142]: Collecting audit messages is disabled.
Dec 16 09:36:59.889515 kernel: ACPI: bus type drm_connector registered
Dec 16 09:36:59.889528 systemd[1]: Mounted media.mount - External Media Directory.
Dec 16 09:36:59.889540 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 16 09:36:59.891772 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 16 09:36:59.891801 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 16 09:36:59.891815 systemd-journald[1142]: Journal started
Dec 16 09:36:59.891839 systemd-journald[1142]: Runtime Journal (/run/log/journal/1eaedca57c5a441d88bf598d8c6210dd) is 4.8M, max 38.4M, 33.6M free.
Dec 16 09:36:59.487358 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 09:36:59.512292 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 16 09:36:59.513000 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 09:36:59.895851 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 09:36:59.896444 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 09:36:59.897326 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 09:36:59.898332 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 16 09:36:59.898575 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 16 09:36:59.899433 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 09:36:59.899716 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 09:36:59.900577 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 09:36:59.900813 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 09:36:59.901682 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 09:36:59.902323 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 09:36:59.903332 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 16 09:36:59.903604 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 16 09:36:59.904416 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 09:36:59.904651 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 09:36:59.905492 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 09:36:59.906372 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 09:36:59.907587 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 16 09:36:59.925162 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 09:36:59.934085 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 16 09:36:59.941082 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 16 09:36:59.941761 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 16 09:36:59.941880 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 09:36:59.943550 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 16 09:36:59.962010 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 16 09:36:59.967869 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 09:36:59.968651 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 09:36:59.975965 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 16 09:36:59.979923 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 16 09:36:59.980953 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 09:36:59.987920 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 16 09:36:59.988642 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 09:36:59.997914 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 09:37:00.005003 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 16 09:37:00.024849 systemd-journald[1142]: Time spent on flushing to /var/log/journal/1eaedca57c5a441d88bf598d8c6210dd is 38.950ms for 1128 entries.
Dec 16 09:37:00.024849 systemd-journald[1142]: System Journal (/var/log/journal/1eaedca57c5a441d88bf598d8c6210dd) is 8.0M, max 584.8M, 576.8M free.
Dec 16 09:37:00.098990 systemd-journald[1142]: Received client request to flush runtime journal.
Dec 16 09:37:00.010831 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 16 09:37:00.104858 kernel: loop0: detected capacity change from 0 to 8
Dec 16 09:37:00.018438 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 16 09:37:00.019865 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 16 09:37:00.022842 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 16 09:37:00.071469 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 16 09:37:00.072192 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 16 09:37:00.080185 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 16 09:37:00.103682 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 16 09:37:00.114308 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 09:37:00.128843 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 09:37:00.135807 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 09:37:00.142934 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 16 09:37:00.154222 udevadm[1191]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 16 09:37:00.158614 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 16 09:37:00.160973 kernel: loop1: detected capacity change from 0 to 142488
Dec 16 09:37:00.160821 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 16 09:37:00.174434 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 16 09:37:00.183977 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 09:37:00.212034 kernel: loop2: detected capacity change from 0 to 205544
Dec 16 09:37:00.233071 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Dec 16 09:37:00.233524 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Dec 16 09:37:00.251868 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 09:37:00.276869 kernel: loop3: detected capacity change from 0 to 140768
Dec 16 09:37:00.326759 kernel: loop4: detected capacity change from 0 to 8
Dec 16 09:37:00.331758 kernel: loop5: detected capacity change from 0 to 142488
Dec 16 09:37:00.361402 kernel: loop6: detected capacity change from 0 to 205544
Dec 16 09:37:00.395830 kernel: loop7: detected capacity change from 0 to 140768
Dec 16 09:37:00.425603 (sd-merge)[1201]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Dec 16 09:37:00.426782 (sd-merge)[1201]: Merged extensions into '/usr'.
Dec 16 09:37:00.437027 systemd[1]: Reloading requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 16 09:37:00.437198 systemd[1]: Reloading...
Dec 16 09:37:00.570776 zram_generator::config[1227]: No configuration found.
Dec 16 09:37:00.623826 ldconfig[1170]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 09:37:00.720761 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 16 09:37:00.784030 systemd[1]: Reloading finished in 345 ms.
Dec 16 09:37:00.810552 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 09:37:00.811566 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 16 09:37:00.812577 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 09:37:00.823971 systemd[1]: Starting ensure-sysext.service...
Dec 16 09:37:00.826830 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 09:37:00.835997 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 09:37:00.839823 systemd[1]: Reloading requested from client PID 1271 ('systemctl') (unit ensure-sysext.service)...
Dec 16 09:37:00.839835 systemd[1]: Reloading...
Dec 16 09:37:00.862627 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 09:37:00.863266 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 16 09:37:00.864599 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 16 09:37:00.866593 systemd-tmpfiles[1272]: ACLs are not supported, ignoring.
Dec 16 09:37:00.866702 systemd-tmpfiles[1272]: ACLs are not supported, ignoring.
Dec 16 09:37:00.872873 systemd-tmpfiles[1272]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 09:37:00.872890 systemd-tmpfiles[1272]: Skipping /boot
Dec 16 09:37:00.875975 systemd-udevd[1273]: Using default interface naming scheme 'v255'.
Dec 16 09:37:00.894931 systemd-tmpfiles[1272]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 09:37:00.895063 systemd-tmpfiles[1272]: Skipping /boot
Dec 16 09:37:00.918800 zram_generator::config[1299]: No configuration found.
Dec 16 09:37:01.037808 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1303)
Dec 16 09:37:01.041785 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1303)
Dec 16 09:37:01.151059 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 16 09:37:01.158824 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1304)
Dec 16 09:37:01.195759 kernel: mousedev: PS/2 mouse device common for all mice
Dec 16 09:37:01.204758 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 16 09:37:01.223923 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 16 09:37:01.224426 systemd[1]: Reloading finished in 384 ms.
Dec 16 09:37:01.243953 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 09:37:01.244943 kernel: ACPI: button: Power Button [PWRF]
Dec 16 09:37:01.246143 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 09:37:01.295859 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Dec 16 09:37:01.296964 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 09:37:01.310268 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 16 09:37:01.316796 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 16 09:37:01.317866 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 16 09:37:01.318084 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 16 09:37:01.324816 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 09:37:01.325565 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 09:37:01.334766 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Dec 16 09:37:01.334835 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 09:37:01.340048 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 09:37:01.350027 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 09:37:01.350993 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 09:37:01.361069 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 09:37:01.367803 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 09:37:01.380076 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 09:37:01.385052 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 09:37:01.385831 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 09:37:01.388344 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 09:37:01.389802 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 09:37:01.399657 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 09:37:01.399928 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 09:37:01.405815 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 09:37:01.406070 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 09:37:01.412648 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 09:37:01.415745 kernel: EDAC MC: Ver: 3.0.0
Dec 16 09:37:01.413977 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 09:37:01.414146 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 09:37:01.414276 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 09:37:01.416300 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 09:37:01.417086 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 09:37:01.430142 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 09:37:01.430524 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 09:37:01.442345 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 09:37:01.449104 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 09:37:01.456051 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 09:37:01.456685 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 09:37:01.456949 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 09:37:01.458801 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 09:37:01.459917 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 09:37:01.462120 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 09:37:01.462611 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 09:37:01.471028 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Dec 16 09:37:01.471077 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Dec 16 09:37:01.476075 kernel: Console: switching to colour dummy device 80x25
Dec 16 09:37:01.477585 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 16 09:37:01.477624 kernel: [drm] features: -context_init
Dec 16 09:37:01.479755 kernel: [drm] number of scanouts: 1
Dec 16 09:37:01.479796 kernel: [drm] number of cap sets: 0
Dec 16 09:37:01.480880 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 16 09:37:01.484815 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Dec 16 09:37:01.493044 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 09:37:01.493173 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 09:37:01.500578 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 16 09:37:01.500634 kernel: Console: switching to colour frame buffer device 160x50
Dec 16 09:37:01.505748 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 16 09:37:01.507715 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 16 09:37:01.519808 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 09:37:01.524376 systemd[1]: Finished ensure-sysext.service.
Dec 16 09:37:01.526612 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 09:37:01.527422 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 09:37:01.530831 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 09:37:01.531076 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 09:37:01.537881 augenrules[1416]: No rules
Dec 16 09:37:01.542512 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 16 09:37:01.550831 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 09:37:01.556789 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 09:37:01.572400 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 09:37:01.581038 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 16 09:37:01.588963 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 09:37:01.590525 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 16 09:37:01.592011 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 09:37:01.610233 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 09:37:01.610516 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 09:37:01.613900 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 09:37:01.623391 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 09:37:01.624781 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 09:37:01.638238 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 16 09:37:01.651000 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 16 09:37:01.675065 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 09:37:01.684016 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 16 09:37:01.708548 systemd-networkd[1393]: lo: Link UP
Dec 16 09:37:01.708559 systemd-networkd[1393]: lo: Gained carrier
Dec 16 09:37:01.713717 systemd-networkd[1393]: Enumeration completed
Dec 16 09:37:01.713892 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 09:37:01.716160 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 09:37:01.716176 systemd-networkd[1393]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 09:37:01.717145 systemd-networkd[1393]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 09:37:01.717155 systemd-networkd[1393]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 09:37:01.718807 systemd-networkd[1393]: eth0: Link UP
Dec 16 09:37:01.718812 systemd-networkd[1393]: eth0: Gained carrier
Dec 16 09:37:01.718829 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 09:37:01.724999 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 16 09:37:01.725470 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 16 09:37:01.727065 systemd-networkd[1393]: eth1: Link UP
Dec 16 09:37:01.727073 systemd-networkd[1393]: eth1: Gained carrier
Dec 16 09:37:01.727095 systemd-networkd[1393]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 09:37:01.727484 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 09:37:01.737029 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 16 09:37:01.759233 lvm[1451]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 16 09:37:01.762802 systemd-networkd[1393]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 16 09:37:01.775589 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 16 09:37:01.777456 systemd-resolved[1394]: Positive Trust Anchors:
Dec 16 09:37:01.777471 systemd-resolved[1394]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 09:37:01.777503 systemd-resolved[1394]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 09:37:01.778547 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 09:37:01.785429 systemd-resolved[1394]: Using system hostname 'ci-4081-2-1-4-1bd0c0376a'.
Dec 16 09:37:01.787585 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 09:37:01.790083 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 16 09:37:01.792935 systemd[1]: Reached target network.target - Network.
Dec 16 09:37:01.795003 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 09:37:01.795185 systemd-networkd[1393]: eth0: DHCPv4 address 5.75.242.71/32, gateway 172.31.1.1 acquired from 172.31.1.1
Dec 16 09:37:01.795873 systemd-timesyncd[1434]: Network configuration changed, trying to establish connection.
Dec 16 09:37:01.797459 systemd-timesyncd[1434]: Network configuration changed, trying to establish connection.
Dec 16 09:37:01.807768 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 09:37:01.809386 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 09:37:01.811449 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 16 09:37:01.813095 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 16 09:37:01.814976 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 16 09:37:01.816861 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 16 09:37:01.818634 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 16 09:37:01.820154 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 16 09:37:01.820248 systemd[1]: Reached target paths.target - Path Units.
Dec 16 09:37:01.821890 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 09:37:01.824714 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 16 09:37:01.828693 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 16 09:37:01.837747 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 16 09:37:01.841313 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 16 09:37:01.843417 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 09:37:01.845071 systemd[1]: Reached target basic.target - Basic System.
Dec 16 09:37:01.846495 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 16 09:37:01.847032 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 16 09:37:01.855827 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 16 09:37:01.859364 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 16 09:37:01.863907 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 16 09:37:01.868121 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 16 09:37:01.875893 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 16 09:37:01.882346 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 16 09:37:01.889374 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 16 09:37:01.898763 jq[1462]: false
Dec 16 09:37:01.900897 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 16 09:37:01.908064 coreos-metadata[1460]: Dec 16 09:37:01.905 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Dec 16 09:37:01.908465 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Dec 16 09:37:01.912924 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 16 09:37:01.926537 coreos-metadata[1460]: Dec 16 09:37:01.912 INFO Fetch successful
Dec 16 09:37:01.926141 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 16 09:37:01.928511 coreos-metadata[1460]: Dec 16 09:37:01.928 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Dec 16 09:37:01.931387 coreos-metadata[1460]: Dec 16 09:37:01.929 INFO Fetch successful
Dec 16 09:37:01.943086 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 16 09:37:01.948366 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 16 09:37:01.950154 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 16 09:37:01.951933 dbus-daemon[1461]: [system] SELinux support is enabled
Dec 16 09:37:01.964083 extend-filesystems[1465]: Found loop4
Dec 16 09:37:01.985561 extend-filesystems[1465]: Found loop5
Dec 16 09:37:01.985561 extend-filesystems[1465]: Found loop6
Dec 16 09:37:01.985561 extend-filesystems[1465]: Found loop7
Dec 16 09:37:01.985561 extend-filesystems[1465]: Found sda
Dec 16 09:37:01.985561 extend-filesystems[1465]: Found sda1
Dec 16 09:37:01.985561 extend-filesystems[1465]: Found sda2
Dec 16 09:37:01.985561 extend-filesystems[1465]: Found sda3
Dec 16 09:37:01.985561 extend-filesystems[1465]: Found usr
Dec 16 09:37:01.985561 extend-filesystems[1465]: Found sda4
Dec 16 09:37:01.985561 extend-filesystems[1465]: Found sda6
Dec 16 09:37:01.985561 extend-filesystems[1465]: Found sda7
Dec 16 09:37:01.985561 extend-filesystems[1465]: Found sda9
Dec 16 09:37:01.985561 extend-filesystems[1465]: Checking size of /dev/sda9
Dec 16 09:37:02.085881 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Dec 16 09:37:01.964944 systemd[1]: Starting update-engine.service - Update Engine...
Dec 16 09:37:02.086073 extend-filesystems[1465]: Resized partition /dev/sda9
Dec 16 09:37:01.980208 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 16 09:37:02.090122 extend-filesystems[1493]: resize2fs 1.47.1 (20-May-2024)
Dec 16 09:37:02.102593 jq[1482]: true
Dec 16 09:37:01.994787 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 16 09:37:02.103795 update_engine[1476]: I20241216 09:37:02.017472 1476 main.cc:92] Flatcar Update Engine starting
Dec 16 09:37:02.103795 update_engine[1476]: I20241216 09:37:02.020035 1476 update_check_scheduler.cc:74] Next update check in 5m55s
Dec 16 09:37:02.014142 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 16 09:37:02.014427 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 16 09:37:02.014879 systemd[1]: motdgen.service: Deactivated successfully.
Dec 16 09:37:02.015068 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 16 09:37:02.040244 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 16 09:37:02.113834 jq[1494]: true
Dec 16 09:37:02.040827 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 16 09:37:02.079970 (ntainerd)[1495]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 16 09:37:02.096302 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 16 09:37:02.096341 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 16 09:37:02.098708 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 16 09:37:02.101594 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 16 09:37:02.137308 systemd[1]: Started update-engine.service - Update Engine.
Dec 16 09:37:02.149681 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 16 09:37:02.158415 systemd-logind[1475]: New seat seat0.
Dec 16 09:37:02.160963 systemd-logind[1475]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 16 09:37:02.168222 tar[1491]: linux-amd64/helm
Dec 16 09:37:02.161015 systemd-logind[1475]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 16 09:37:02.161422 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 16 09:37:02.210782 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Dec 16 09:37:02.234770 extend-filesystems[1493]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Dec 16 09:37:02.234770 extend-filesystems[1493]: old_desc_blocks = 1, new_desc_blocks = 5
Dec 16 09:37:02.234770 extend-filesystems[1493]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Dec 16 09:37:02.257627 extend-filesystems[1465]: Resized filesystem in /dev/sda9
Dec 16 09:37:02.257627 extend-filesystems[1465]: Found sr0
Dec 16 09:37:02.235514 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 16 09:37:02.240151 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 16 09:37:02.279378 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1323)
Dec 16 09:37:02.272981 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 16 09:37:02.277325 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 16 09:37:02.293924 bash[1530]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 09:37:02.300841 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 16 09:37:02.313107 systemd[1]: Starting sshkeys.service...
Dec 16 09:37:02.364263 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 16 09:37:02.376917 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 16 09:37:02.459993 coreos-metadata[1534]: Dec 16 09:37:02.459 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Dec 16 09:37:02.463137 coreos-metadata[1534]: Dec 16 09:37:02.462 INFO Fetch successful
Dec 16 09:37:02.465819 unknown[1534]: wrote ssh authorized keys file for user: core
Dec 16 09:37:02.511995 update-ssh-keys[1544]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 09:37:02.514410 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 16 09:37:02.527268 systemd[1]: Finished sshkeys.service.
Dec 16 09:37:02.599867 locksmithd[1512]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 16 09:37:02.670401 containerd[1495]: time="2024-12-16T09:37:02.670293822Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Dec 16 09:37:02.734937 containerd[1495]: time="2024-12-16T09:37:02.734777337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 16 09:37:02.738042 containerd[1495]: time="2024-12-16T09:37:02.737956419Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 16 09:37:02.738042 containerd[1495]: time="2024-12-16T09:37:02.738007034Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 16 09:37:02.738042 containerd[1495]: time="2024-12-16T09:37:02.738025639Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 16 09:37:02.738441 containerd[1495]: time="2024-12-16T09:37:02.738243508Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 16 09:37:02.738441 containerd[1495]: time="2024-12-16T09:37:02.738264247Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 16 09:37:02.738441 containerd[1495]: time="2024-12-16T09:37:02.738346691Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 16 09:37:02.738441 containerd[1495]: time="2024-12-16T09:37:02.738361379Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 16 09:37:02.740003 containerd[1495]: time="2024-12-16T09:37:02.739904684Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 16 09:37:02.740003 containerd[1495]: time="2024-12-16T09:37:02.739931724Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 16 09:37:02.740003 containerd[1495]: time="2024-12-16T09:37:02.739955429Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 16 09:37:02.740003 containerd[1495]: time="2024-12-16T09:37:02.739966540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 16 09:37:02.740106 containerd[1495]: time="2024-12-16T09:37:02.740081475Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 16 09:37:02.740669 containerd[1495]: time="2024-12-16T09:37:02.740351502Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 16 09:37:02.740669 containerd[1495]: time="2024-12-16T09:37:02.740513486Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 16 09:37:02.740669 containerd[1495]: time="2024-12-16T09:37:02.740531589Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 16 09:37:02.740669 containerd[1495]: time="2024-12-16T09:37:02.740644521Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 16 09:37:02.740779 containerd[1495]: time="2024-12-16T09:37:02.740707850Z" level=info msg="metadata content store policy set" policy=shared
Dec 16 09:37:02.749107 containerd[1495]: time="2024-12-16T09:37:02.749049676Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 16 09:37:02.749182 containerd[1495]: time="2024-12-16T09:37:02.749155515Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 16 09:37:02.749661 containerd[1495]: time="2024-12-16T09:37:02.749185080Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 16 09:37:02.749661 containerd[1495]: time="2024-12-16T09:37:02.749270811Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 16 09:37:02.749661 containerd[1495]: time="2024-12-16T09:37:02.749293634Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 16 09:37:02.749661 containerd[1495]: time="2024-12-16T09:37:02.749577867Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 16 09:37:02.751959 containerd[1495]: time="2024-12-16T09:37:02.751906174Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 16 09:37:02.752151 containerd[1495]: time="2024-12-16T09:37:02.752120826Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 16 09:37:02.752192 containerd[1495]: time="2024-12-16T09:37:02.752152626Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 16 09:37:02.752213 containerd[1495]: time="2024-12-16T09:37:02.752188453Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 16 09:37:02.752213 containerd[1495]: time="2024-12-16T09:37:02.752203592Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 16 09:37:02.752256 containerd[1495]: time="2024-12-16T09:37:02.752234089Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 16 09:37:02.752276 containerd[1495]: time="2024-12-16T09:37:02.752249127Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 16 09:37:02.752299 containerd[1495]: time="2024-12-16T09:37:02.752280085Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 16 09:37:02.752318 containerd[1495]: time="2024-12-16T09:37:02.752295774Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 16 09:37:02.752318 containerd[1495]: time="2024-12-16T09:37:02.752308258Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 16 09:37:02.752356 containerd[1495]: time="2024-12-16T09:37:02.752325019Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 16 09:37:02.752356 containerd[1495]: time="2024-12-16T09:37:02.752351459Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 16 09:37:02.752415 containerd[1495]: time="2024-12-16T09:37:02.752370925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 16 09:37:02.752415 containerd[1495]: time="2024-12-16T09:37:02.752395331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 16 09:37:02.752415 containerd[1495]: time="2024-12-16T09:37:02.752406582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 16 09:37:02.752478 containerd[1495]: time="2024-12-16T09:37:02.752436097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 16 09:37:02.752478 containerd[1495]: time="2024-12-16T09:37:02.752455273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 16 09:37:02.752478 containerd[1495]: time="2024-12-16T09:37:02.752468558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 16 09:37:02.752538 containerd[1495]: time="2024-12-16T09:37:02.752479559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 16 09:37:02.752538 containerd[1495]: time="2024-12-16T09:37:02.752517581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 16 09:37:02.752538 containerd[1495]: time="2024-12-16T09:37:02.752531937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 16 09:37:02.752593 containerd[1495]: time="2024-12-16T09:37:02.752546364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 16 09:37:02.752593 containerd[1495]: time="2024-12-16T09:37:02.752558597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 16 09:37:02.752593 containerd[1495]: time="2024-12-16T09:37:02.752570610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 16 09:37:02.752650 containerd[1495]: time="2024-12-16T09:37:02.752597210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 16 09:37:02.752650 containerd[1495]: time="2024-12-16T09:37:02.752614542Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 16 09:37:02.752650 containerd[1495]: time="2024-12-16T09:37:02.752633959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 16 09:37:02.752650 containerd[1495]: time="2024-12-16T09:37:02.752645751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 16 09:37:02.752745 containerd[1495]: time="2024-12-16T09:37:02.752676919Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 16 09:37:02.753639 containerd[1495]: time="2024-12-16T09:37:02.752804588Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 16 09:37:02.753639 containerd[1495]: time="2024-12-16T09:37:02.752828864Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 16 09:37:02.753639 containerd[1495]: time="2024-12-16T09:37:02.752839734Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 16 09:37:02.753639 containerd[1495]: time="2024-12-16T09:37:02.752850725Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 16 09:37:02.753639 containerd[1495]: time="2024-12-16T09:37:02.752859742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 16 09:37:02.753639 containerd[1495]: time="2024-12-16T09:37:02.752890329Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 16 09:37:02.753639 containerd[1495]: time="2024-12-16T09:37:02.752900508Z" level=info msg="NRI interface is disabled by configuration."
Dec 16 09:37:02.753639 containerd[1495]: time="2024-12-16T09:37:02.752910998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 16 09:37:02.753833 containerd[1495]: time="2024-12-16T09:37:02.753204418Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 16 09:37:02.753833 containerd[1495]: time="2024-12-16T09:37:02.753260434Z" level=info msg="Connect containerd service"
Dec 16 09:37:02.753833 containerd[1495]: time="2024-12-16T09:37:02.753306049Z" level=info msg="using legacy CRI server"
Dec 16 09:37:02.753833 containerd[1495]: time="2024-12-16T09:37:02.753313303Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 16 09:37:02.753833 containerd[1495]: time="2024-12-16T09:37:02.753428489Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 16 09:37:02.755318 containerd[1495]: time="2024-12-16T09:37:02.755155899Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 16 09:37:02.755644 containerd[1495]: time="2024-12-16T09:37:02.755447345Z" level=info msg="Start subscribing containerd event"
Dec 16 09:37:02.755644 containerd[1495]: time="2024-12-16T09:37:02.755502048Z" level=info msg="Start recovering state"
Dec 16 09:37:02.755644 containerd[1495]: time="2024-12-16T09:37:02.755556120Z" level=info msg="Start event monitor"
Dec 16 09:37:02.755644 containerd[1495]: time="2024-12-16T09:37:02.755570516Z" level=info msg="Start snapshots syncer"
Dec 16 09:37:02.755644 containerd[1495]: time="2024-12-16T09:37:02.755578611Z" level=info msg="Start cni network conf syncer for default"
Dec 16 09:37:02.755644 containerd[1495]: time="2024-12-16T09:37:02.755586176Z" level=info msg="Start streaming server"
Dec 16 09:37:02.756923 containerd[1495]: time="2024-12-16T09:37:02.756855577Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 16 09:37:02.756923 containerd[1495]: time="2024-12-16T09:37:02.756908125Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 16 09:37:02.757057 systemd[1]: Started containerd.service - containerd container runtime.
Dec 16 09:37:02.767008 containerd[1495]: time="2024-12-16T09:37:02.766950541Z" level=info msg="containerd successfully booted in 0.097842s"
Dec 16 09:37:02.857891 sshd_keygen[1488]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 16 09:37:02.884290 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 16 09:37:02.897836 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 16 09:37:02.909440 systemd[1]: issuegen.service: Deactivated successfully.
Dec 16 09:37:02.909704 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 16 09:37:02.922607 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 16 09:37:02.937202 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 16 09:37:02.939642 tar[1491]: linux-amd64/LICENSE
Dec 16 09:37:02.939817 tar[1491]: linux-amd64/README.md
Dec 16 09:37:02.952977 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 16 09:37:02.969269 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 16 09:37:02.971599 systemd[1]: Reached target getty.target - Login Prompts.
Dec 16 09:37:02.973364 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 16 09:37:02.983860 systemd-networkd[1393]: eth0: Gained IPv6LL
Dec 16 09:37:02.984460 systemd-timesyncd[1434]: Network configuration changed, trying to establish connection.
Dec 16 09:37:02.987226 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 16 09:37:02.989103 systemd[1]: Reached target network-online.target - Network is Online.
Dec 16 09:37:02.995933 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:37:02.999831 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 16 09:37:03.031267 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 16 09:37:03.624012 systemd-networkd[1393]: eth1: Gained IPv6LL
Dec 16 09:37:03.624548 systemd-timesyncd[1434]: Network configuration changed, trying to establish connection.
Dec 16 09:37:03.818914 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:37:03.820425 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 16 09:37:03.824563 (kubelet)[1590]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:37:03.826513 systemd[1]: Startup finished in 1.599s (kernel) + 5.900s (initrd) + 5.174s (userspace) = 12.674s.
Dec 16 09:37:04.417336 kubelet[1590]: E1216 09:37:04.417260 1590 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:37:04.421291 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:37:04.421534 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:37:04.422168 systemd[1]: kubelet.service: Consumed 1.020s CPU time.
Dec 16 09:37:14.672896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 16 09:37:14.680102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:37:14.905021 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:37:14.917069 (kubelet)[1610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:37:14.970988 kubelet[1610]: E1216 09:37:14.970807 1610 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:37:14.977319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:37:14.977527 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:37:25.059032 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 16 09:37:25.072329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:37:25.227074 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:37:25.241418 (kubelet)[1625]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:37:25.296647 kubelet[1625]: E1216 09:37:25.296597 1625 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:37:25.301799 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:37:25.302004 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:37:33.972691 systemd-timesyncd[1434]: Contacted time server 85.215.189.120:123 (2.flatcar.pool.ntp.org).
Dec 16 09:37:33.972883 systemd-timesyncd[1434]: Initial clock synchronization to Mon 2024-12-16 09:37:34.242438 UTC.
Dec 16 09:37:35.310191 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 16 09:37:35.317228 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:37:35.511640 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:37:35.526411 (kubelet)[1641]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:37:35.584047 kubelet[1641]: E1216 09:37:35.583833 1641 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:37:35.589644 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:37:35.590160 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:37:45.809550 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Dec 16 09:37:45.818009 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:37:46.008475 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:37:46.013862 (kubelet)[1656]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:37:46.060486 kubelet[1656]: E1216 09:37:46.060324 1656 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:37:46.068052 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:37:46.068336 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:37:47.197951 update_engine[1476]: I20241216 09:37:47.197811 1476 update_attempter.cc:509] Updating boot flags...
Dec 16 09:37:47.271832 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1673)
Dec 16 09:37:47.346063 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1669)
Dec 16 09:37:47.398856 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1669)
Dec 16 09:37:56.309505 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Dec 16 09:37:56.319207 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:37:56.558721 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:37:56.563576 (kubelet)[1693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:37:56.630906 kubelet[1693]: E1216 09:37:56.630802 1693 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:37:56.639161 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:37:56.639581 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:38:06.809554 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Dec 16 09:38:06.816203 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:38:07.005003 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:38:07.007547 (kubelet)[1708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:38:07.043993 kubelet[1708]: E1216 09:38:07.043861 1708 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:38:07.046850 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:38:07.047065 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:38:17.059195 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Dec 16 09:38:17.069095 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:38:17.242081 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:38:17.256167 (kubelet)[1723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:38:17.305111 kubelet[1723]: E1216 09:38:17.305051 1723 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:38:17.308423 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:38:17.308691 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:38:27.559280 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Dec 16 09:38:27.571176 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:38:27.747404 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:38:27.753084 (kubelet)[1738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:38:27.800686 kubelet[1738]: E1216 09:38:27.800552 1738 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:38:27.805285 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:38:27.805515 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:38:37.808982 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Dec 16 09:38:37.822175 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:38:38.015069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:38:38.017300 (kubelet)[1753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:38:38.062159 kubelet[1753]: E1216 09:38:38.062020 1753 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:38:38.067488 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:38:38.067721 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:38:48.309511 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Dec 16 09:38:48.317655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:38:48.531289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:38:48.547340 (kubelet)[1769]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:38:48.597586 kubelet[1769]: E1216 09:38:48.597438 1769 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:38:48.601213 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:38:48.601426 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:38:58.809312 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Dec 16 09:38:58.816081 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:38:59.025522 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:38:59.030409 (kubelet)[1784]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:38:59.064014 kubelet[1784]: E1216 09:38:59.063842 1784 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:38:59.066907 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:38:59.067281 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:39:00.483482 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 16 09:39:00.491325 systemd[1]: Started sshd@0-5.75.242.71:22-147.75.109.163:45152.service - OpenSSH per-connection server daemon (147.75.109.163:45152).
Dec 16 09:39:01.503406 sshd[1793]: Accepted publickey for core from 147.75.109.163 port 45152 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:39:01.509688 sshd[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:39:01.530595 systemd-logind[1475]: New session 1 of user core.
Dec 16 09:39:01.534367 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 16 09:39:01.542539 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 16 09:39:01.560371 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 16 09:39:01.571265 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 16 09:39:01.576072 (systemd)[1797]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 16 09:39:01.740829 systemd[1797]: Queued start job for default target default.target.
Dec 16 09:39:01.752448 systemd[1797]: Created slice app.slice - User Application Slice.
Dec 16 09:39:01.752489 systemd[1797]: Reached target paths.target - Paths.
Dec 16 09:39:01.752509 systemd[1797]: Reached target timers.target - Timers.
Dec 16 09:39:01.754305 systemd[1797]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 16 09:39:01.777074 systemd[1797]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 16 09:39:01.777264 systemd[1797]: Reached target sockets.target - Sockets.
Dec 16 09:39:01.777286 systemd[1797]: Reached target basic.target - Basic System.
Dec 16 09:39:01.777343 systemd[1797]: Reached target default.target - Main User Target.
Dec 16 09:39:01.777389 systemd[1797]: Startup finished in 188ms.
Dec 16 09:39:01.777560 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 16 09:39:01.788048 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 16 09:39:02.492234 systemd[1]: Started sshd@1-5.75.242.71:22-147.75.109.163:45162.service - OpenSSH per-connection server daemon (147.75.109.163:45162).
Dec 16 09:39:03.498337 sshd[1808]: Accepted publickey for core from 147.75.109.163 port 45162 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:39:03.500773 sshd[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:39:03.506345 systemd-logind[1475]: New session 2 of user core.
Dec 16 09:39:03.512876 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 16 09:39:04.192634 sshd[1808]: pam_unix(sshd:session): session closed for user core
Dec 16 09:39:04.200459 systemd[1]: sshd@1-5.75.242.71:22-147.75.109.163:45162.service: Deactivated successfully.
Dec 16 09:39:04.205654 systemd[1]: session-2.scope: Deactivated successfully.
Dec 16 09:39:04.209886 systemd-logind[1475]: Session 2 logged out. Waiting for processes to exit.
Dec 16 09:39:04.212785 systemd-logind[1475]: Removed session 2.
Dec 16 09:39:04.374182 systemd[1]: Started sshd@2-5.75.242.71:22-147.75.109.163:45170.service - OpenSSH per-connection server daemon (147.75.109.163:45170).
Dec 16 09:39:05.390045 sshd[1815]: Accepted publickey for core from 147.75.109.163 port 45170 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:39:05.393222 sshd[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:39:05.403068 systemd-logind[1475]: New session 3 of user core.
Dec 16 09:39:05.411087 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 16 09:39:06.073879 sshd[1815]: pam_unix(sshd:session): session closed for user core
Dec 16 09:39:06.078197 systemd[1]: sshd@2-5.75.242.71:22-147.75.109.163:45170.service: Deactivated successfully.
Dec 16 09:39:06.080814 systemd[1]: session-3.scope: Deactivated successfully.
Dec 16 09:39:06.081605 systemd-logind[1475]: Session 3 logged out. Waiting for processes to exit.
Dec 16 09:39:06.083210 systemd-logind[1475]: Removed session 3.
Dec 16 09:39:06.255285 systemd[1]: Started sshd@3-5.75.242.71:22-147.75.109.163:54270.service - OpenSSH per-connection server daemon (147.75.109.163:54270).
Dec 16 09:39:07.254495 sshd[1822]: Accepted publickey for core from 147.75.109.163 port 54270 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:39:07.257953 sshd[1822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:39:07.266314 systemd-logind[1475]: New session 4 of user core.
Dec 16 09:39:07.280114 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 16 09:39:07.947427 sshd[1822]: pam_unix(sshd:session): session closed for user core
Dec 16 09:39:07.950481 systemd[1]: sshd@3-5.75.242.71:22-147.75.109.163:54270.service: Deactivated successfully.
Dec 16 09:39:07.952917 systemd[1]: session-4.scope: Deactivated successfully.
Dec 16 09:39:07.954617 systemd-logind[1475]: Session 4 logged out. Waiting for processes to exit.
Dec 16 09:39:07.956501 systemd-logind[1475]: Removed session 4.
Dec 16 09:39:08.126202 systemd[1]: Started sshd@4-5.75.242.71:22-147.75.109.163:54272.service - OpenSSH per-connection server daemon (147.75.109.163:54272).
Dec 16 09:39:09.138499 sshd[1829]: Accepted publickey for core from 147.75.109.163 port 54272 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:39:09.141917 sshd[1829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:39:09.143588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Dec 16 09:39:09.153133 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:39:09.160301 systemd-logind[1475]: New session 5 of user core.
Dec 16 09:39:09.166626 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 16 09:39:09.368069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:39:09.370630 (kubelet)[1840]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:39:09.422594 kubelet[1840]: E1216 09:39:09.422354 1840 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:39:09.426387 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:39:09.426809 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:39:09.684415 sudo[1848]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 16 09:39:09.684985 sudo[1848]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 09:39:09.705286 sudo[1848]: pam_unix(sudo:session): session closed for user root
Dec 16 09:39:09.867795 sshd[1829]: pam_unix(sshd:session): session closed for user core
Dec 16 09:39:09.873667 systemd[1]: sshd@4-5.75.242.71:22-147.75.109.163:54272.service: Deactivated successfully.
Dec 16 09:39:09.878014 systemd[1]: session-5.scope: Deactivated successfully.
Dec 16 09:39:09.880836 systemd-logind[1475]: Session 5 logged out. Waiting for processes to exit.
Dec 16 09:39:09.883186 systemd-logind[1475]: Removed session 5.
Dec 16 09:39:10.046693 systemd[1]: Started sshd@5-5.75.242.71:22-147.75.109.163:54284.service - OpenSSH per-connection server daemon (147.75.109.163:54284).
Dec 16 09:39:11.044110 sshd[1853]: Accepted publickey for core from 147.75.109.163 port 54284 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk Dec 16 09:39:11.048202 sshd[1853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 09:39:11.058028 systemd-logind[1475]: New session 6 of user core. Dec 16 09:39:11.070026 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 09:39:11.573843 sudo[1857]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 09:39:11.574863 sudo[1857]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 09:39:11.582398 sudo[1857]: pam_unix(sudo:session): session closed for user root Dec 16 09:39:11.595439 sudo[1856]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 16 09:39:11.596174 sudo[1856]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 09:39:11.623358 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 16 09:39:11.638828 auditctl[1860]: No rules Dec 16 09:39:11.641148 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 09:39:11.641843 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 16 09:39:11.652501 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 16 09:39:11.721321 augenrules[1878]: No rules Dec 16 09:39:11.723433 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 16 09:39:11.726565 sudo[1856]: pam_unix(sudo:session): session closed for user root Dec 16 09:39:11.887063 sshd[1853]: pam_unix(sshd:session): session closed for user core Dec 16 09:39:11.896229 systemd[1]: sshd@5-5.75.242.71:22-147.75.109.163:54284.service: Deactivated successfully. Dec 16 09:39:11.899546 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 09:39:11.902119 systemd-logind[1475]: Session 6 logged out. Waiting for processes to exit. Dec 16 09:39:11.904559 systemd-logind[1475]: Removed session 6. Dec 16 09:39:12.063342 systemd[1]: Started sshd@6-5.75.242.71:22-147.75.109.163:54292.service - OpenSSH per-connection server daemon (147.75.109.163:54292). Dec 16 09:39:13.037782 sshd[1886]: Accepted publickey for core from 147.75.109.163 port 54292 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk Dec 16 09:39:13.040446 sshd[1886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 09:39:13.047818 systemd-logind[1475]: New session 7 of user core. Dec 16 09:39:13.054985 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 09:39:13.563096 sudo[1889]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 09:39:13.564016 sudo[1889]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 09:39:14.006139 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 09:39:14.020573 (dockerd)[1905]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 09:39:14.427496 dockerd[1905]: time="2024-12-16T09:39:14.427407257Z" level=info msg="Starting up" Dec 16 09:39:14.574243 dockerd[1905]: time="2024-12-16T09:39:14.574156285Z" level=info msg="Loading containers: start." 
Dec 16 09:39:14.743798 kernel: Initializing XFRM netlink socket Dec 16 09:39:14.881785 systemd-networkd[1393]: docker0: Link UP Dec 16 09:39:14.917564 dockerd[1905]: time="2024-12-16T09:39:14.917489347Z" level=info msg="Loading containers: done." Dec 16 09:39:14.941927 dockerd[1905]: time="2024-12-16T09:39:14.941845639Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 09:39:14.942215 dockerd[1905]: time="2024-12-16T09:39:14.941976138Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 16 09:39:14.942215 dockerd[1905]: time="2024-12-16T09:39:14.942099817Z" level=info msg="Daemon has completed initialization" Dec 16 09:39:15.005201 dockerd[1905]: time="2024-12-16T09:39:15.004563445Z" level=info msg="API listen on /run/docker.sock" Dec 16 09:39:15.005909 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 09:39:16.291926 containerd[1495]: time="2024-12-16T09:39:16.291849166Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Dec 16 09:39:16.962244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount558795701.mount: Deactivated successfully. Dec 16 09:39:17.874075 containerd[1495]: time="2024-12-16T09:39:17.873993652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:39:17.875403 containerd[1495]: time="2024-12-16T09:39:17.875351640Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=27975575" Dec 16 09:39:17.876424 containerd[1495]: time="2024-12-16T09:39:17.876354584Z" level=info msg="ImageCreate event name:\"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:39:17.884067 containerd[1495]: time="2024-12-16T09:39:17.883392273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:39:17.884067 containerd[1495]: time="2024-12-16T09:39:17.883546909Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"27972283\" in 1.591635969s" Dec 16 09:39:17.884067 containerd[1495]: time="2024-12-16T09:39:17.883590899Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\"" Dec 16 09:39:17.885535 containerd[1495]: time="2024-12-16T09:39:17.885494295Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Dec 16 09:39:19.132215 containerd[1495]: time="2024-12-16T09:39:19.132153862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:39:19.133353 containerd[1495]: time="2024-12-16T09:39:19.133309243Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=24702177" Dec 16 09:39:19.134482 containerd[1495]: time="2024-12-16T09:39:19.134406045Z" level=info msg="ImageCreate event name:\"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:39:19.137318 containerd[1495]: time="2024-12-16T09:39:19.137276825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:39:19.138371 containerd[1495]: time="2024-12-16T09:39:19.138239549Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"26147269\" in 1.252706311s" Dec 16 09:39:19.138371 containerd[1495]: time="2024-12-16T09:39:19.138276266Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\"" Dec 16 09:39:19.139141 containerd[1495]: time="2024-12-16T09:39:19.139100944Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Dec 16 09:39:19.559196 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Dec 16 09:39:19.569118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 09:39:19.784183 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 09:39:19.794009 (kubelet)[2110]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 09:39:19.845106 kubelet[2110]: E1216 09:39:19.844804 2110 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 09:39:19.849870 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 09:39:19.850079 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 16 09:39:20.108793 containerd[1495]: time="2024-12-16T09:39:20.108644588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:39:20.109871 containerd[1495]: time="2024-12-16T09:39:20.109827333Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=18652087" Dec 16 09:39:20.111222 containerd[1495]: time="2024-12-16T09:39:20.111177238Z" level=info msg="ImageCreate event name:\"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:39:20.114771 containerd[1495]: time="2024-12-16T09:39:20.114666312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:39:20.116088 containerd[1495]: time="2024-12-16T09:39:20.115931720Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"20097197\" in 976.788497ms" Dec 16 09:39:20.116088 containerd[1495]: time="2024-12-16T09:39:20.115972485Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\"" Dec 16 09:39:20.116694 containerd[1495]: time="2024-12-16T09:39:20.116661484Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 16 09:39:21.145234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4046719674.mount: Deactivated successfully. 
Dec 16 09:39:21.518054 containerd[1495]: time="2024-12-16T09:39:21.517676484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:39:21.519094 containerd[1495]: time="2024-12-16T09:39:21.519044166Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230269" Dec 16 09:39:21.520234 containerd[1495]: time="2024-12-16T09:39:21.520189604Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:39:21.522935 containerd[1495]: time="2024-12-16T09:39:21.522842675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:39:21.523598 containerd[1495]: time="2024-12-16T09:39:21.523441428Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 1.406746081s" Dec 16 09:39:21.523598 containerd[1495]: time="2024-12-16T09:39:21.523479868Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Dec 16 09:39:21.524205 containerd[1495]: time="2024-12-16T09:39:21.524001558Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 16 09:39:22.077511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1413411789.mount: Deactivated successfully. 
Dec 16 09:39:22.881118 containerd[1495]: time="2024-12-16T09:39:22.881031626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:39:22.882277 containerd[1495]: time="2024-12-16T09:39:22.882219978Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185841" Dec 16 09:39:22.883876 containerd[1495]: time="2024-12-16T09:39:22.883821237Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:39:22.887150 containerd[1495]: time="2024-12-16T09:39:22.887074301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:39:22.888255 containerd[1495]: time="2024-12-16T09:39:22.888122782Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.364094213s" Dec 16 09:39:22.888255 containerd[1495]: time="2024-12-16T09:39:22.888158208Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 16 09:39:22.888987 containerd[1495]: time="2024-12-16T09:39:22.888966803Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 09:39:23.372696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3083781551.mount: Deactivated successfully. 
Dec 16 09:39:23.379852 containerd[1495]: time="2024-12-16T09:39:23.379715271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:39:23.381323 containerd[1495]: time="2024-12-16T09:39:23.381239551Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321158" Dec 16 09:39:23.382864 containerd[1495]: time="2024-12-16T09:39:23.382722835Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:39:23.386985 containerd[1495]: time="2024-12-16T09:39:23.386876142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:39:23.389226 containerd[1495]: time="2024-12-16T09:39:23.388363214Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 499.297006ms" Dec 16 09:39:23.389226 containerd[1495]: time="2024-12-16T09:39:23.388412366Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 16 09:39:23.389595 containerd[1495]: time="2024-12-16T09:39:23.389536190Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Dec 16 09:39:23.986641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1951283605.mount: Deactivated successfully. Dec 16 09:39:25.465762 containerd[1495]: time="2024-12-16T09:39:25.465656235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:39:25.471547 containerd[1495]: time="2024-12-16T09:39:25.470036017Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:39:25.471547 containerd[1495]: time="2024-12-16T09:39:25.470136564Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780035" Dec 16 09:39:25.475622 containerd[1495]: time="2024-12-16T09:39:25.475562280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:39:25.477027 containerd[1495]: time="2024-12-16T09:39:25.476986020Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.087404616s" Dec 16 09:39:25.477117 containerd[1495]: time="2024-12-16T09:39:25.477100494Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Dec 16 09:39:28.613663 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 09:39:28.632115 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 09:39:28.667950 systemd[1]: Reloading requested from client PID 2258 ('systemctl') (unit session-7.scope)... Dec 16 09:39:28.667976 systemd[1]: Reloading... Dec 16 09:39:28.832287 zram_generator::config[2301]: No configuration found. Dec 16 09:39:28.967995 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 16 09:39:29.058189 systemd[1]: Reloading finished in 389 ms. Dec 16 09:39:29.120463 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 09:39:29.120904 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 09:39:29.121639 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 09:39:29.135473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 09:39:29.305100 (kubelet)[2353]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 09:39:29.305979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 09:39:29.356400 kubelet[2353]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 09:39:29.356400 kubelet[2353]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 16 09:39:29.356400 kubelet[2353]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 16 09:39:29.356400 kubelet[2353]: I1216 09:39:29.355986 2353 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 09:39:29.719627 kubelet[2353]: I1216 09:39:29.719559 2353 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 16 09:39:29.719627 kubelet[2353]: I1216 09:39:29.719608 2353 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 09:39:29.719978 kubelet[2353]: I1216 09:39:29.719948 2353 server.go:929] "Client rotation is on, will bootstrap in background" Dec 16 09:39:29.749869 kubelet[2353]: I1216 09:39:29.749507 2353 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 09:39:29.749869 kubelet[2353]: E1216 09:39:29.749825 2353 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://5.75.242.71:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 5.75.242.71:6443: connect: connection refused" logger="UnhandledError" Dec 16 09:39:29.763971 kubelet[2353]: E1216 09:39:29.763926 2353 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 16 09:39:29.763971 kubelet[2353]: I1216 09:39:29.763960 2353 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 16 09:39:29.770283 kubelet[2353]: I1216 09:39:29.770228 2353 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 09:39:29.771462 kubelet[2353]: I1216 09:39:29.771431 2353 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 16 09:39:29.771670 kubelet[2353]: I1216 09:39:29.771625 2353 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 09:39:29.771846 kubelet[2353]: I1216 09:39:29.771658 2353 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-4-1bd0c0376a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 09:39:29.771846 kubelet[2353]: I1216 09:39:29.771842 2353 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 09:39:29.771952 kubelet[2353]: I1216 09:39:29.771851 2353 container_manager_linux.go:300] "Creating device plugin manager" Dec 16 09:39:29.771990 kubelet[2353]: I1216 09:39:29.771973 2353 state_mem.go:36] "Initialized new in-memory state store" Dec 16 09:39:29.774039 kubelet[2353]: I1216 09:39:29.773816 2353 kubelet.go:408] "Attempting to sync node with API server" Dec 16 09:39:29.774039 kubelet[2353]: I1216 09:39:29.773842 2353 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 09:39:29.774039 kubelet[2353]: I1216 09:39:29.773877 2353 kubelet.go:314] "Adding apiserver pod source" Dec 16 09:39:29.774039 kubelet[2353]: I1216 09:39:29.773893 2353 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 09:39:29.776397 kubelet[2353]: W1216 09:39:29.776294 2353 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://5.75.242.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-4-1bd0c0376a&limit=500&resourceVersion=0": dial tcp 5.75.242.71:6443: connect: connection refused Dec 16 09:39:29.776397 kubelet[2353]: E1216 09:39:29.776357 2353 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://5.75.242.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-4-1bd0c0376a&limit=500&resourceVersion=0\": dial tcp 5.75.242.71:6443: connect: connection refused" logger="UnhandledError" Dec 16 09:39:29.781263 kubelet[2353]: W1216 09:39:29.781147 2353 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://5.75.242.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 5.75.242.71:6443: connect: connection refused Dec 16 09:39:29.781263 kubelet[2353]: E1216 09:39:29.781194 2353 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://5.75.242.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 5.75.242.71:6443: connect: connection refused" logger="UnhandledError" Dec 16 09:39:29.783248 kubelet[2353]: I1216 09:39:29.783132 2353 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 16 09:39:29.785000 kubelet[2353]: I1216 09:39:29.784938 2353 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 09:39:29.786282 kubelet[2353]: W1216 09:39:29.786228 2353 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 09:39:29.787459 kubelet[2353]: I1216 09:39:29.787352 2353 server.go:1269] "Started kubelet" Dec 16 09:39:29.788762 kubelet[2353]: I1216 09:39:29.788348 2353 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 09:39:29.789854 kubelet[2353]: I1216 09:39:29.789338 2353 server.go:460] "Adding debug handlers to kubelet server" Dec 16 09:39:29.791895 kubelet[2353]: I1216 09:39:29.791652 2353 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 09:39:29.792288 kubelet[2353]: I1216 09:39:29.792226 2353 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 09:39:29.792663 kubelet[2353]: I1216 09:39:29.792647 2353 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 09:39:29.797450 kubelet[2353]: E1216 09:39:29.793683 2353 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://5.75.242.71:6443/api/v1/namespaces/default/events\": dial tcp 5.75.242.71:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-4-1bd0c0376a.18119ed5243993d3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-4-1bd0c0376a,UID:ci-4081-2-1-4-1bd0c0376a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-4-1bd0c0376a,},FirstTimestamp:2024-12-16 09:39:29.787327443 +0000 UTC m=+0.475977076,LastTimestamp:2024-12-16 09:39:29.787327443 +0000 UTC m=+0.475977076,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-4-1bd0c0376a,}" Dec 16 09:39:29.798754 kubelet[2353]: I1216 09:39:29.798125 2353 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 09:39:29.801762 kubelet[2353]: I1216 09:39:29.801375 2353 
volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 16 09:39:29.801962 kubelet[2353]: E1216 09:39:29.801936 2353 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-4-1bd0c0376a\" not found" Dec 16 09:39:29.802704 kubelet[2353]: I1216 09:39:29.802687 2353 factory.go:221] Registration of the systemd container factory successfully Dec 16 09:39:29.802898 kubelet[2353]: I1216 09:39:29.802883 2353 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 09:39:29.804579 kubelet[2353]: I1216 09:39:29.804560 2353 factory.go:221] Registration of the containerd container factory successfully Dec 16 09:39:29.806451 kubelet[2353]: I1216 09:39:29.806420 2353 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 09:39:29.806522 kubelet[2353]: I1216 09:39:29.806498 2353 reconciler.go:26] "Reconciler: start to sync state" Dec 16 09:39:29.820409 kubelet[2353]: E1216 09:39:29.820365 2353 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 09:39:29.820596 kubelet[2353]: E1216 09:39:29.820519 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://5.75.242.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-4-1bd0c0376a?timeout=10s\": dial tcp 5.75.242.71:6443: connect: connection refused" interval="200ms" Dec 16 09:39:29.822034 kubelet[2353]: W1216 09:39:29.821963 2353 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://5.75.242.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 5.75.242.71:6443: connect: connection refused Dec 16 09:39:29.822192 kubelet[2353]: E1216 09:39:29.822167 2353 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://5.75.242.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 5.75.242.71:6443: connect: connection refused" logger="UnhandledError" Dec 16 09:39:29.824996 kubelet[2353]: I1216 09:39:29.824942 2353 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 09:39:29.831625 kubelet[2353]: I1216 09:39:29.831593 2353 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 16 09:39:29.831805 kubelet[2353]: I1216 09:39:29.831795 2353 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 16 09:39:29.831869 kubelet[2353]: I1216 09:39:29.831861 2353 kubelet.go:2321] "Starting kubelet main sync loop" Dec 16 09:39:29.831980 kubelet[2353]: E1216 09:39:29.831943 2353 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 09:39:29.839169 kubelet[2353]: W1216 09:39:29.838471 2353 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://5.75.242.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 5.75.242.71:6443: connect: connection refused Dec 16 09:39:29.839169 kubelet[2353]: E1216 09:39:29.838564 2353 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://5.75.242.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 5.75.242.71:6443: connect: connection refused" logger="UnhandledError" Dec 16 09:39:29.840942 kubelet[2353]: I1216 09:39:29.840925 2353 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 16 09:39:29.841049 kubelet[2353]: I1216 09:39:29.841035 2353 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 16 09:39:29.841114 kubelet[2353]: I1216 09:39:29.841105 2353 state_mem.go:36] "Initialized new in-memory state store" Dec 16 09:39:29.844184 kubelet[2353]: I1216 09:39:29.844166 2353 policy_none.go:49] "None policy: Start" Dec 16 09:39:29.844965 kubelet[2353]: I1216 09:39:29.844950 2353 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 16 09:39:29.845109 kubelet[2353]: I1216 09:39:29.845099 2353 state_mem.go:35] "Initializing new in-memory state store" Dec 16 09:39:29.853592 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 09:39:29.867111 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 09:39:29.870663 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 09:39:29.882498 kubelet[2353]: I1216 09:39:29.882209 2353 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 09:39:29.882498 kubelet[2353]: I1216 09:39:29.882495 2353 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 09:39:29.882671 kubelet[2353]: I1216 09:39:29.882508 2353 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 09:39:29.883628 kubelet[2353]: I1216 09:39:29.883104 2353 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 09:39:29.886259 kubelet[2353]: E1216 09:39:29.886173 2353 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-2-1-4-1bd0c0376a\" not found" Dec 16 09:39:29.947405 systemd[1]: Created slice kubepods-burstable-pod16b495e66112052dd810ce4d0f927eb1.slice - libcontainer container kubepods-burstable-pod16b495e66112052dd810ce4d0f927eb1.slice. Dec 16 09:39:29.963178 systemd[1]: Created slice kubepods-burstable-pod20180db9af20cd9bb9cbea26130f55c7.slice - libcontainer container kubepods-burstable-pod20180db9af20cd9bb9cbea26130f55c7.slice. 
Dec 16 09:39:29.971469 systemd[1]: Created slice kubepods-burstable-pod6a064ac7e6c0462d763cef4421ab86d8.slice - libcontainer container kubepods-burstable-pod6a064ac7e6c0462d763cef4421ab86d8.slice. Dec 16 09:39:29.986190 kubelet[2353]: I1216 09:39:29.986068 2353 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:29.986858 kubelet[2353]: E1216 09:39:29.986633 2353 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://5.75.242.71:6443/api/v1/nodes\": dial tcp 5.75.242.71:6443: connect: connection refused" node="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:30.021684 kubelet[2353]: E1216 09:39:30.021618 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://5.75.242.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-4-1bd0c0376a?timeout=10s\": dial tcp 5.75.242.71:6443: connect: connection refused" interval="400ms" Dec 16 09:39:30.107267 kubelet[2353]: I1216 09:39:30.107213 2353 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/16b495e66112052dd810ce4d0f927eb1-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-4-1bd0c0376a\" (UID: \"16b495e66112052dd810ce4d0f927eb1\") " pod="kube-system/kube-scheduler-ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:30.107267 kubelet[2353]: I1216 09:39:30.107255 2353 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6a064ac7e6c0462d763cef4421ab86d8-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-4-1bd0c0376a\" (UID: \"6a064ac7e6c0462d763cef4421ab86d8\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:30.107267 kubelet[2353]: I1216 09:39:30.107280 2353 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6a064ac7e6c0462d763cef4421ab86d8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-4-1bd0c0376a\" (UID: \"6a064ac7e6c0462d763cef4421ab86d8\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:30.107267 kubelet[2353]: I1216 09:39:30.107298 2353 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6a064ac7e6c0462d763cef4421ab86d8-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-4-1bd0c0376a\" (UID: \"6a064ac7e6c0462d763cef4421ab86d8\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:30.107267 kubelet[2353]: I1216 09:39:30.107312 2353 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6a064ac7e6c0462d763cef4421ab86d8-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-4-1bd0c0376a\" (UID: \"6a064ac7e6c0462d763cef4421ab86d8\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:30.107869 kubelet[2353]: I1216 09:39:30.107329 2353 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6a064ac7e6c0462d763cef4421ab86d8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-4-1bd0c0376a\" (UID: \"6a064ac7e6c0462d763cef4421ab86d8\") " 
pod="kube-system/kube-controller-manager-ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:30.107869 kubelet[2353]: I1216 09:39:30.107344 2353 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20180db9af20cd9bb9cbea26130f55c7-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-4-1bd0c0376a\" (UID: \"20180db9af20cd9bb9cbea26130f55c7\") " pod="kube-system/kube-apiserver-ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:30.107869 kubelet[2353]: I1216 09:39:30.107360 2353 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20180db9af20cd9bb9cbea26130f55c7-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-4-1bd0c0376a\" (UID: \"20180db9af20cd9bb9cbea26130f55c7\") " pod="kube-system/kube-apiserver-ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:30.107869 kubelet[2353]: I1216 09:39:30.107379 2353 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20180db9af20cd9bb9cbea26130f55c7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-4-1bd0c0376a\" (UID: \"20180db9af20cd9bb9cbea26130f55c7\") " pod="kube-system/kube-apiserver-ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:30.190383 kubelet[2353]: I1216 09:39:30.190146 2353 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:30.190890 kubelet[2353]: E1216 09:39:30.190806 2353 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://5.75.242.71:6443/api/v1/nodes\": dial tcp 5.75.242.71:6443: connect: connection refused" node="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:30.258272 containerd[1495]: time="2024-12-16T09:39:30.257711013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-4-1bd0c0376a,Uid:16b495e66112052dd810ce4d0f927eb1,Namespace:kube-system,Attempt:0,}" Dec 16 09:39:30.268667 containerd[1495]: time="2024-12-16T09:39:30.268608606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-4-1bd0c0376a,Uid:20180db9af20cd9bb9cbea26130f55c7,Namespace:kube-system,Attempt:0,}" Dec 16 09:39:30.277279 containerd[1495]: time="2024-12-16T09:39:30.277175210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-4-1bd0c0376a,Uid:6a064ac7e6c0462d763cef4421ab86d8,Namespace:kube-system,Attempt:0,}" Dec 16 09:39:30.422371 kubelet[2353]: E1216 09:39:30.422218 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://5.75.242.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-4-1bd0c0376a?timeout=10s\": dial tcp 5.75.242.71:6443: connect: connection refused" interval="800ms" Dec 16 09:39:30.593352 kubelet[2353]: I1216 09:39:30.593305 2353 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:30.593699 kubelet[2353]: E1216 09:39:30.593647 2353 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://5.75.242.71:6443/api/v1/nodes\": dial tcp 5.75.242.71:6443: connect: connection refused" node="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:30.642673 kubelet[2353]: W1216 09:39:30.642557 2353 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://5.75.242.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-4-1bd0c0376a&limit=500&resourceVersion=0": dial tcp 5.75.242.71:6443: connect: connection refused Dec 16 09:39:30.642673 kubelet[2353]: E1216 09:39:30.642679 2353 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://5.75.242.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-4-1bd0c0376a&limit=500&resourceVersion=0\": dial tcp 5.75.242.71:6443: connect: connection refused" logger="UnhandledError" Dec 16 09:39:30.699075 kubelet[2353]: W1216 09:39:30.699000 2353 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://5.75.242.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 5.75.242.71:6443: connect: connection refused Dec 16 09:39:30.699248 kubelet[2353]: E1216 09:39:30.699084 2353 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://5.75.242.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 5.75.242.71:6443: connect: connection refused" logger="UnhandledError" Dec 16 09:39:30.758580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4132852640.mount: Deactivated successfully. Dec 16 09:39:30.764016 kubelet[2353]: W1216 09:39:30.763850 2353 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://5.75.242.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 5.75.242.71:6443: connect: connection refused Dec 16 09:39:30.764248 kubelet[2353]: E1216 09:39:30.764055 2353 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://5.75.242.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 5.75.242.71:6443: connect: connection refused" logger="UnhandledError" Dec 16 09:39:30.769433 containerd[1495]: time="2024-12-16T09:39:30.769332962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 09:39:30.771152 containerd[1495]: time="2024-12-16T09:39:30.771055528Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 09:39:30.772725 containerd[1495]: time="2024-12-16T09:39:30.772639334Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 16 09:39:30.773827 containerd[1495]: time="2024-12-16T09:39:30.773658448Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 16 09:39:30.774673 containerd[1495]: time="2024-12-16T09:39:30.774612550Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 09:39:30.776487 containerd[1495]: time="2024-12-16T09:39:30.776383026Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076" Dec 16 09:39:30.776706 containerd[1495]: time="2024-12-16T09:39:30.776647303Z" 
level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 09:39:30.782931 containerd[1495]: time="2024-12-16T09:39:30.782889249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 09:39:30.785117 containerd[1495]: time="2024-12-16T09:39:30.784907230Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 507.645739ms" Dec 16 09:39:30.792076 containerd[1495]: time="2024-12-16T09:39:30.791869020Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 534.052369ms" Dec 16 09:39:30.794633 containerd[1495]: time="2024-12-16T09:39:30.794588338Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 525.896986ms" Dec 16 09:39:30.967702 containerd[1495]: time="2024-12-16T09:39:30.967113646Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 16 09:39:30.967702 containerd[1495]: time="2024-12-16T09:39:30.967172266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 16 09:39:30.967702 containerd[1495]: time="2024-12-16T09:39:30.967198425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:39:30.967702 containerd[1495]: time="2024-12-16T09:39:30.967327808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:39:30.973021 containerd[1495]: time="2024-12-16T09:39:30.972806862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 16 09:39:30.973021 containerd[1495]: time="2024-12-16T09:39:30.972913272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 16 09:39:30.973307 containerd[1495]: time="2024-12-16T09:39:30.972969548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:39:30.973561 containerd[1495]: time="2024-12-16T09:39:30.973284469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:39:30.977398 containerd[1495]: time="2024-12-16T09:39:30.976967118Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 16 09:39:30.977398 containerd[1495]: time="2024-12-16T09:39:30.977039613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 16 09:39:30.977398 containerd[1495]: time="2024-12-16T09:39:30.977068548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:39:30.977398 containerd[1495]: time="2024-12-16T09:39:30.977287660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:39:31.011007 systemd[1]: Started cri-containerd-e71fee4f8201097f6e5c3abac9debffcc971de051b96a76bf8bbab6bbffa83a9.scope - libcontainer container e71fee4f8201097f6e5c3abac9debffcc971de051b96a76bf8bbab6bbffa83a9. Dec 16 09:39:31.016619 systemd[1]: Started cri-containerd-3bcd7816072843d36764698bbd8deb6385e6c8ef420d6bdc67c034a0158de0dc.scope - libcontainer container 3bcd7816072843d36764698bbd8deb6385e6c8ef420d6bdc67c034a0158de0dc. Dec 16 09:39:31.019105 systemd[1]: Started cri-containerd-d47d8e512bf7936b8f03862f4bad0fa80e63c60cff9059c0cedc13c3d0a5f1cb.scope - libcontainer container d47d8e512bf7936b8f03862f4bad0fa80e63c60cff9059c0cedc13c3d0a5f1cb. Dec 16 09:39:31.080179 containerd[1495]: time="2024-12-16T09:39:31.080070177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-4-1bd0c0376a,Uid:20180db9af20cd9bb9cbea26130f55c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"e71fee4f8201097f6e5c3abac9debffcc971de051b96a76bf8bbab6bbffa83a9\"" Dec 16 09:39:31.087474 containerd[1495]: time="2024-12-16T09:39:31.087340790Z" level=info msg="CreateContainer within sandbox \"e71fee4f8201097f6e5c3abac9debffcc971de051b96a76bf8bbab6bbffa83a9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 09:39:31.101199 containerd[1495]: time="2024-12-16T09:39:31.101151412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-4-1bd0c0376a,Uid:16b495e66112052dd810ce4d0f927eb1,Namespace:kube-system,Attempt:0,} returns sandbox id \"d47d8e512bf7936b8f03862f4bad0fa80e63c60cff9059c0cedc13c3d0a5f1cb\"" Dec 16 09:39:31.104671 containerd[1495]: time="2024-12-16T09:39:31.104456799Z" level=info msg="CreateContainer within sandbox \"d47d8e512bf7936b8f03862f4bad0fa80e63c60cff9059c0cedc13c3d0a5f1cb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 09:39:31.109857 containerd[1495]: time="2024-12-16T09:39:31.109812822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-4-1bd0c0376a,Uid:6a064ac7e6c0462d763cef4421ab86d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bcd7816072843d36764698bbd8deb6385e6c8ef420d6bdc67c034a0158de0dc\"" Dec 16 09:39:31.112954 containerd[1495]: time="2024-12-16T09:39:31.112820979Z" level=info msg="CreateContainer within sandbox \"3bcd7816072843d36764698bbd8deb6385e6c8ef420d6bdc67c034a0158de0dc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 09:39:31.122616 containerd[1495]: time="2024-12-16T09:39:31.122543723Z" level=info msg="CreateContainer within sandbox 
\"e71fee4f8201097f6e5c3abac9debffcc971de051b96a76bf8bbab6bbffa83a9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b4ed6248185c3a518554a4c903a597dfc44efe6568bc4e0cd1ad645270a8301d\"" Dec 16 09:39:31.124694 containerd[1495]: time="2024-12-16T09:39:31.123413940Z" level=info msg="StartContainer for \"b4ed6248185c3a518554a4c903a597dfc44efe6568bc4e0cd1ad645270a8301d\"" Dec 16 09:39:31.133652 containerd[1495]: time="2024-12-16T09:39:31.133483786Z" level=info msg="CreateContainer within sandbox \"3bcd7816072843d36764698bbd8deb6385e6c8ef420d6bdc67c034a0158de0dc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f5499dc3a217a3de29dfbf87f1ee00b1b5412c86385543d45adb35c9c8c875ca\"" Dec 16 09:39:31.134834 containerd[1495]: time="2024-12-16T09:39:31.134792278Z" level=info msg="StartContainer for \"f5499dc3a217a3de29dfbf87f1ee00b1b5412c86385543d45adb35c9c8c875ca\"" Dec 16 09:39:31.140209 containerd[1495]: time="2024-12-16T09:39:31.140085201Z" level=info msg="CreateContainer within sandbox \"d47d8e512bf7936b8f03862f4bad0fa80e63c60cff9059c0cedc13c3d0a5f1cb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"87e02cfc0f5abdff6cfe82755e675e53eaea7049e1a62975889cc1d64863eeeb\"" Dec 16 09:39:31.141750 containerd[1495]: time="2024-12-16T09:39:31.141595111Z" level=info msg="StartContainer for \"87e02cfc0f5abdff6cfe82755e675e53eaea7049e1a62975889cc1d64863eeeb\"" Dec 16 09:39:31.155040 systemd[1]: Started cri-containerd-b4ed6248185c3a518554a4c903a597dfc44efe6568bc4e0cd1ad645270a8301d.scope - libcontainer container b4ed6248185c3a518554a4c903a597dfc44efe6568bc4e0cd1ad645270a8301d. Dec 16 09:39:31.194328 systemd[1]: Started cri-containerd-87e02cfc0f5abdff6cfe82755e675e53eaea7049e1a62975889cc1d64863eeeb.scope - libcontainer container 87e02cfc0f5abdff6cfe82755e675e53eaea7049e1a62975889cc1d64863eeeb. Dec 16 09:39:31.197195 systemd[1]: Started cri-containerd-f5499dc3a217a3de29dfbf87f1ee00b1b5412c86385543d45adb35c9c8c875ca.scope - libcontainer container f5499dc3a217a3de29dfbf87f1ee00b1b5412c86385543d45adb35c9c8c875ca. 
Dec 16 09:39:31.216547 containerd[1495]: time="2024-12-16T09:39:31.216473858Z" level=info msg="StartContainer for \"b4ed6248185c3a518554a4c903a597dfc44efe6568bc4e0cd1ad645270a8301d\" returns successfully" Dec 16 09:39:31.224556 kubelet[2353]: E1216 09:39:31.223817 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://5.75.242.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-4-1bd0c0376a?timeout=10s\": dial tcp 5.75.242.71:6443: connect: connection refused" interval="1.6s" Dec 16 09:39:31.275617 containerd[1495]: time="2024-12-16T09:39:31.273973532Z" level=info msg="StartContainer for \"f5499dc3a217a3de29dfbf87f1ee00b1b5412c86385543d45adb35c9c8c875ca\" returns successfully" Dec 16 09:39:31.288775 containerd[1495]: time="2024-12-16T09:39:31.287481845Z" level=info msg="StartContainer for \"87e02cfc0f5abdff6cfe82755e675e53eaea7049e1a62975889cc1d64863eeeb\" returns successfully" Dec 16 09:39:31.369935 kubelet[2353]: W1216 09:39:31.369860 2353 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://5.75.242.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 5.75.242.71:6443: connect: connection refused Dec 16 09:39:31.370199 kubelet[2353]: E1216 09:39:31.370157 2353 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://5.75.242.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 5.75.242.71:6443: connect: connection refused" logger="UnhandledError" Dec 16 09:39:31.397501 kubelet[2353]: I1216 09:39:31.397145 2353 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:31.397501 kubelet[2353]: E1216 09:39:31.397425 2353 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://5.75.242.71:6443/api/v1/nodes\": dial tcp 5.75.242.71:6443: connect: connection refused" node="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:32.899660 kubelet[2353]: E1216 09:39:32.899600 2353 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-2-1-4-1bd0c0376a\" not found" node="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:32.999841 kubelet[2353]: I1216 09:39:32.999800 2353 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:33.002291 kubelet[2353]: E1216 09:39:33.002187 2353 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-2-1-4-1bd0c0376a.18119ed5243993d3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-4-1bd0c0376a,UID:ci-4081-2-1-4-1bd0c0376a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-4-1bd0c0376a,},FirstTimestamp:2024-12-16 09:39:29.787327443 +0000 UTC m=+0.475977076,LastTimestamp:2024-12-16 09:39:29.787327443 +0000 UTC m=+0.475977076,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-4-1bd0c0376a,}" Dec 16 09:39:33.008088 kubelet[2353]: I1216 09:39:33.008047 2353 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:33.054987 kubelet[2353]: E1216 09:39:33.054872 2353 event.go:359] 
"Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-2-1-4-1bd0c0376a.18119ed526314b5b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-4-1bd0c0376a,UID:ci-4081-2-1-4-1bd0c0376a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-4-1bd0c0376a,},FirstTimestamp:2024-12-16 09:39:29.820339035 +0000 UTC m=+0.508988669,LastTimestamp:2024-12-16 09:39:29.820339035 +0000 UTC m=+0.508988669,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-4-1bd0c0376a,}" Dec 16 09:39:33.108286 kubelet[2353]: E1216 09:39:33.108142 2353 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-2-1-4-1bd0c0376a.18119ed527604281 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-4-1bd0c0376a,UID:ci-4081-2-1-4-1bd0c0376a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4081-2-1-4-1bd0c0376a status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-4-1bd0c0376a,},FirstTimestamp:2024-12-16 09:39:29.840194177 +0000 UTC m=+0.528843811,LastTimestamp:2024-12-16 09:39:29.840194177 +0000 UTC m=+0.528843811,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-4-1bd0c0376a,}" Dec 16 09:39:33.162381 kubelet[2353]: E1216 09:39:33.162053 2353 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-2-1-4-1bd0c0376a.18119ed527605d1e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-4-1bd0c0376a,UID:ci-4081-2-1-4-1bd0c0376a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ci-4081-2-1-4-1bd0c0376a status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-4-1bd0c0376a,},FirstTimestamp:2024-12-16 09:39:29.84020099 +0000 UTC m=+0.528850624,LastTimestamp:2024-12-16 09:39:29.84020099 +0000 UTC m=+0.528850624,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-4-1bd0c0376a,}" Dec 16 09:39:33.219769 kubelet[2353]: E1216 09:39:33.219502 2353 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-2-1-4-1bd0c0376a.18119ed5276069e0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-4-1bd0c0376a,UID:ci-4081-2-1-4-1bd0c0376a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ci-4081-2-1-4-1bd0c0376a status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-4-1bd0c0376a,},FirstTimestamp:2024-12-16 09:39:29.840204256 +0000 UTC m=+0.528853890,LastTimestamp:2024-12-16 09:39:29.840204256 +0000 UTC m=+0.528853890,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-4-1bd0c0376a,}" Dec 16 09:39:33.778830 kubelet[2353]: I1216 09:39:33.778763 2353 apiserver.go:52] "Watching apiserver" Dec 16 09:39:33.807817 kubelet[2353]: I1216 09:39:33.807641 2353 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 09:39:35.285897 systemd[1]: Reloading requested from client PID 2624 ('systemctl') (unit session-7.scope)... Dec 16 09:39:35.285953 systemd[1]: Reloading... Dec 16 09:39:35.441776 zram_generator::config[2665]: No configuration found. Dec 16 09:39:35.583746 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 16 09:39:35.711369 systemd[1]: Reloading finished in 424 ms. Dec 16 09:39:35.766750 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 09:39:35.795721 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 09:39:35.796315 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 09:39:35.796372 systemd[1]: kubelet.service: Consumed 1.007s CPU time, 112.7M memory peak, 0B memory swap peak. Dec 16 09:39:35.809070 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 09:39:35.988923 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 09:39:35.991979 (kubelet)[2715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 09:39:36.046569 kubelet[2715]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 09:39:36.048354 kubelet[2715]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 16 09:39:36.048354 kubelet[2715]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 09:39:36.048354 kubelet[2715]: I1216 09:39:36.047029 2715 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 09:39:36.060555 kubelet[2715]: I1216 09:39:36.060514 2715 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 16 09:39:36.060555 kubelet[2715]: I1216 09:39:36.060563 2715 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 09:39:36.061792 kubelet[2715]: I1216 09:39:36.060952 2715 server.go:929] "Client rotation is on, will bootstrap in background" Dec 16 09:39:36.062848 kubelet[2715]: I1216 09:39:36.062795 2715 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 16 09:39:36.075676 kubelet[2715]: I1216 09:39:36.075103 2715 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 09:39:36.080624 kubelet[2715]: E1216 09:39:36.080537 2715 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 16 09:39:36.080859 kubelet[2715]: I1216 09:39:36.080845 2715 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 16 09:39:36.085630 kubelet[2715]: I1216 09:39:36.085587 2715 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 16 09:39:36.086587 kubelet[2715]: I1216 09:39:36.086561 2715 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 16 09:39:36.086729 kubelet[2715]: I1216 09:39:36.086695 2715 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 09:39:36.086916 kubelet[2715]: I1216 09:39:36.086745 2715 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-4-1bd0c0376a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 09:39:36.086994 kubelet[2715]: I1216 09:39:36.086918 2715 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 09:39:36.086994 kubelet[2715]: I1216 09:39:36.086927 2715 container_manager_linux.go:300] "Creating device plugin manager" Dec 16 09:39:36.089461 kubelet[2715]: I1216 09:39:36.089418 2715 state_mem.go:36] "Initialized new in-memory state store" Dec 16 09:39:36.092173 kubelet[2715]: I1216 09:39:36.092152 2715 kubelet.go:408] "Attempting to sync node with API server" Dec 16 09:39:36.092173 kubelet[2715]: I1216 09:39:36.092170 2715 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 09:39:36.093098 kubelet[2715]: I1216 
09:39:36.092872 2715 kubelet.go:314] "Adding apiserver pod source" Dec 16 09:39:36.093098 kubelet[2715]: I1216 09:39:36.092894 2715 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 09:39:36.099844 kubelet[2715]: I1216 09:39:36.099768 2715 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 16 09:39:36.100927 kubelet[2715]: I1216 09:39:36.100833 2715 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 09:39:36.111767 kubelet[2715]: I1216 09:39:36.111504 2715 server.go:1269] "Started kubelet" Dec 16 09:39:36.115929 kubelet[2715]: I1216 09:39:36.115640 2715 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 09:39:36.116479 kubelet[2715]: I1216 09:39:36.116317 2715 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 09:39:36.117156 kubelet[2715]: I1216 09:39:36.116743 2715 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 09:39:36.117156 kubelet[2715]: I1216 09:39:36.116802 2715 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 09:39:36.118481 kubelet[2715]: I1216 09:39:36.118264 2715 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 09:39:36.118703 kubelet[2715]: I1216 09:39:36.118691 2715 server.go:460] "Adding debug handlers to kubelet server" Dec 16 09:39:36.120982 kubelet[2715]: I1216 09:39:36.120764 2715 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 16 09:39:36.125556 kubelet[2715]: I1216 09:39:36.125524 2715 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 09:39:36.125748 kubelet[2715]: I1216 09:39:36.125710 2715 reconciler.go:26] "Reconciler: start to sync state" Dec 16 09:39:36.127699 kubelet[2715]: I1216 09:39:36.127674 2715 factory.go:221] Registration of the systemd container factory successfully Dec 16 09:39:36.130643 kubelet[2715]: I1216 09:39:36.130280 2715 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 09:39:36.132180 kubelet[2715]: E1216 09:39:36.132018 2715 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 09:39:36.133410 kubelet[2715]: I1216 09:39:36.133374 2715 factory.go:221] Registration of the containerd container factory successfully Dec 16 09:39:36.138877 kubelet[2715]: I1216 09:39:36.138833 2715 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 09:39:36.140348 kubelet[2715]: I1216 09:39:36.140082 2715 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 16 09:39:36.140348 kubelet[2715]: I1216 09:39:36.140112 2715 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 16 09:39:36.140348 kubelet[2715]: I1216 09:39:36.140131 2715 kubelet.go:2321] "Starting kubelet main sync loop" Dec 16 09:39:36.140348 kubelet[2715]: E1216 09:39:36.140167 2715 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 09:39:36.192655 kubelet[2715]: I1216 09:39:36.192622 2715 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 16 09:39:36.192655 kubelet[2715]: I1216 09:39:36.192642 2715 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 16 09:39:36.192655 kubelet[2715]: I1216 09:39:36.192661 2715 state_mem.go:36] "Initialized new in-memory state store" Dec 16 09:39:36.192885 kubelet[2715]: I1216 09:39:36.192860 2715 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 09:39:36.192916 kubelet[2715]: I1216 09:39:36.192873 2715 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 09:39:36.192916 kubelet[2715]: I1216 09:39:36.192896 2715 policy_none.go:49] "None policy: Start" Dec 16 09:39:36.193797 kubelet[2715]: I1216 09:39:36.193718 2715 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 16 09:39:36.193797 kubelet[2715]: I1216 09:39:36.193757 2715 state_mem.go:35] "Initializing new in-memory state store" Dec 16 09:39:36.193985 kubelet[2715]: I1216 09:39:36.193959 2715 state_mem.go:75] "Updated machine memory state" Dec 16 09:39:36.199671 kubelet[2715]: I1216 09:39:36.199628 2715 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 09:39:36.199846 kubelet[2715]: I1216 09:39:36.199821 2715 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 09:39:36.199909 kubelet[2715]: I1216 09:39:36.199838 2715 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 09:39:36.205351 kubelet[2715]: I1216 09:39:36.203030 2715 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 09:39:36.320690 kubelet[2715]: I1216 09:39:36.320655 2715 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:36.326569 kubelet[2715]: I1216 09:39:36.326247 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6a064ac7e6c0462d763cef4421ab86d8-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-4-1bd0c0376a\" (UID: \"6a064ac7e6c0462d763cef4421ab86d8\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:36.326569 kubelet[2715]: I1216 09:39:36.326287 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6a064ac7e6c0462d763cef4421ab86d8-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-4-1bd0c0376a\" (UID: \"6a064ac7e6c0462d763cef4421ab86d8\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:36.326569 kubelet[2715]: I1216 09:39:36.326309 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20180db9af20cd9bb9cbea26130f55c7-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-4-1bd0c0376a\" (UID: \"20180db9af20cd9bb9cbea26130f55c7\") " 
pod="kube-system/kube-apiserver-ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:36.326569 kubelet[2715]: I1216 09:39:36.326330 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20180db9af20cd9bb9cbea26130f55c7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-4-1bd0c0376a\" (UID: \"20180db9af20cd9bb9cbea26130f55c7\") " pod="kube-system/kube-apiserver-ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:36.326569 kubelet[2715]: I1216 09:39:36.326354 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6a064ac7e6c0462d763cef4421ab86d8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-4-1bd0c0376a\" (UID: \"6a064ac7e6c0462d763cef4421ab86d8\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:36.326831 kubelet[2715]: I1216 09:39:36.326373 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6a064ac7e6c0462d763cef4421ab86d8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-4-1bd0c0376a\" (UID: \"6a064ac7e6c0462d763cef4421ab86d8\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:36.326831 kubelet[2715]: I1216 09:39:36.326393 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/16b495e66112052dd810ce4d0f927eb1-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-4-1bd0c0376a\" (UID: \"16b495e66112052dd810ce4d0f927eb1\") " pod="kube-system/kube-scheduler-ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:36.326831 kubelet[2715]: I1216 09:39:36.326412 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20180db9af20cd9bb9cbea26130f55c7-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-4-1bd0c0376a\" (UID: \"20180db9af20cd9bb9cbea26130f55c7\") " pod="kube-system/kube-apiserver-ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:36.326831 kubelet[2715]: I1216 09:39:36.326445 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6a064ac7e6c0462d763cef4421ab86d8-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-4-1bd0c0376a\" (UID: \"6a064ac7e6c0462d763cef4421ab86d8\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:36.328535 kubelet[2715]: I1216 09:39:36.328492 2715 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:36.328599 kubelet[2715]: I1216 09:39:36.328563 2715 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:39:37.103325 kubelet[2715]: I1216 09:39:37.103072 2715 apiserver.go:52] "Watching apiserver" Dec 16 09:39:37.126922 kubelet[2715]: I1216 09:39:37.126835 2715 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 09:39:37.242389 kubelet[2715]: I1216 09:39:37.241929 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-2-1-4-1bd0c0376a" podStartSLOduration=1.241908205 podStartE2EDuration="1.241908205s" podCreationTimestamp="2024-12-16 09:39:36 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-16 09:39:37.219096529 +0000 UTC m=+1.220552167" watchObservedRunningTime="2024-12-16 09:39:37.241908205 +0000 UTC m=+1.243363842" Dec 16 09:39:37.257289 kubelet[2715]: I1216 09:39:37.257217 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-2-1-4-1bd0c0376a" podStartSLOduration=1.257197807 podStartE2EDuration="1.257197807s" podCreationTimestamp="2024-12-16 09:39:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-16 09:39:37.242700994 +0000 UTC m=+1.244156651" watchObservedRunningTime="2024-12-16 09:39:37.257197807 +0000 UTC m=+1.258653444" Dec 16 09:39:37.270160 kubelet[2715]: I1216 09:39:37.269979 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-2-1-4-1bd0c0376a" podStartSLOduration=1.269957786 podStartE2EDuration="1.269957786s" podCreationTimestamp="2024-12-16 09:39:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-16 09:39:37.257137092 +0000 UTC m=+1.258592739" watchObservedRunningTime="2024-12-16 09:39:37.269957786 +0000 UTC m=+1.271413423" Dec 16 09:39:40.234101 kubelet[2715]: I1216 09:39:40.234029 2715 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 09:39:40.236049 containerd[1495]: time="2024-12-16T09:39:40.235197209Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 09:39:40.238581 kubelet[2715]: I1216 09:39:40.236091 2715 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 09:39:41.096630 sudo[1889]: pam_unix(sudo:session): session closed for user root Dec 16 09:39:41.258085 sshd[1886]: pam_unix(sshd:session): session closed for user core Dec 16 09:39:41.266189 kubelet[2715]: I1216 09:39:41.265422 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/950984f4-d196-4dc8-9287-8258e24ea0a2-xtables-lock\") pod \"kube-proxy-t8vl8\" (UID: \"950984f4-d196-4dc8-9287-8258e24ea0a2\") " pod="kube-system/kube-proxy-t8vl8" Dec 16 09:39:41.266189 kubelet[2715]: I1216 09:39:41.265471 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/950984f4-d196-4dc8-9287-8258e24ea0a2-lib-modules\") pod \"kube-proxy-t8vl8\" (UID: \"950984f4-d196-4dc8-9287-8258e24ea0a2\") " pod="kube-system/kube-proxy-t8vl8" Dec 16 09:39:41.266189 kubelet[2715]: I1216 09:39:41.265499 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/950984f4-d196-4dc8-9287-8258e24ea0a2-kube-proxy\") pod \"kube-proxy-t8vl8\" (UID: \"950984f4-d196-4dc8-9287-8258e24ea0a2\") " pod="kube-system/kube-proxy-t8vl8" Dec 16 09:39:41.266189 kubelet[2715]: I1216 09:39:41.265524 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs6c8\" (UniqueName: \"kubernetes.io/projected/950984f4-d196-4dc8-9287-8258e24ea0a2-kube-api-access-rs6c8\") pod \"kube-proxy-t8vl8\" (UID: 
\"950984f4-d196-4dc8-9287-8258e24ea0a2\") " pod="kube-system/kube-proxy-t8vl8" Dec 16 09:39:41.267357 systemd[1]: sshd@6-5.75.242.71:22-147.75.109.163:54292.service: Deactivated successfully. Dec 16 09:39:41.276571 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 09:39:41.277656 systemd[1]: session-7.scope: Consumed 5.368s CPU time, 145.7M memory peak, 0B memory swap peak. Dec 16 09:39:41.282134 systemd-logind[1475]: Session 7 logged out. Waiting for processes to exit. Dec 16 09:39:41.292867 systemd[1]: Created slice kubepods-besteffort-pod950984f4_d196_4dc8_9287_8258e24ea0a2.slice - libcontainer container kubepods-besteffort-pod950984f4_d196_4dc8_9287_8258e24ea0a2.slice. Dec 16 09:39:41.293092 systemd-logind[1475]: Removed session 7. Dec 16 09:39:41.414365 systemd[1]: Created slice kubepods-besteffort-podf961a2b5_dee8_4f4e_a623_80df9c16db21.slice - libcontainer container kubepods-besteffort-podf961a2b5_dee8_4f4e_a623_80df9c16db21.slice. Dec 16 09:39:41.467587 kubelet[2715]: I1216 09:39:41.467512 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpd26\" (UniqueName: \"kubernetes.io/projected/f961a2b5-dee8-4f4e-a623-80df9c16db21-kube-api-access-tpd26\") pod \"tigera-operator-76c4976dd7-898kn\" (UID: \"f961a2b5-dee8-4f4e-a623-80df9c16db21\") " pod="tigera-operator/tigera-operator-76c4976dd7-898kn" Dec 16 09:39:41.467587 kubelet[2715]: I1216 09:39:41.467583 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f961a2b5-dee8-4f4e-a623-80df9c16db21-var-lib-calico\") pod \"tigera-operator-76c4976dd7-898kn\" (UID: \"f961a2b5-dee8-4f4e-a623-80df9c16db21\") " pod="tigera-operator/tigera-operator-76c4976dd7-898kn" Dec 16 09:39:41.609026 containerd[1495]: time="2024-12-16T09:39:41.608269364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t8vl8,Uid:950984f4-d196-4dc8-9287-8258e24ea0a2,Namespace:kube-system,Attempt:0,}" Dec 16 09:39:41.656330 containerd[1495]: time="2024-12-16T09:39:41.656178811Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 16 09:39:41.656588 containerd[1495]: time="2024-12-16T09:39:41.656536790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 16 09:39:41.657688 containerd[1495]: time="2024-12-16T09:39:41.657640994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:39:41.658127 containerd[1495]: time="2024-12-16T09:39:41.657978696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:39:41.695077 systemd[1]: Started cri-containerd-b6bd63f9661e6ad019e5897b3ffd801ed4ff834b001ed440f7ba4c7fbea78c3b.scope - libcontainer container b6bd63f9661e6ad019e5897b3ffd801ed4ff834b001ed440f7ba4c7fbea78c3b. 
Dec 16 09:39:41.717798 containerd[1495]: time="2024-12-16T09:39:41.717720417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-898kn,Uid:f961a2b5-dee8-4f4e-a623-80df9c16db21,Namespace:tigera-operator,Attempt:0,}" Dec 16 09:39:41.727345 containerd[1495]: time="2024-12-16T09:39:41.727125447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t8vl8,Uid:950984f4-d196-4dc8-9287-8258e24ea0a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6bd63f9661e6ad019e5897b3ffd801ed4ff834b001ed440f7ba4c7fbea78c3b\"" Dec 16 09:39:41.733198 containerd[1495]: time="2024-12-16T09:39:41.732719739Z" level=info msg="CreateContainer within sandbox \"b6bd63f9661e6ad019e5897b3ffd801ed4ff834b001ed440f7ba4c7fbea78c3b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 09:39:41.747960 containerd[1495]: time="2024-12-16T09:39:41.747839189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 16 09:39:41.747960 containerd[1495]: time="2024-12-16T09:39:41.747910553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 16 09:39:41.747960 containerd[1495]: time="2024-12-16T09:39:41.747921655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:39:41.748432 containerd[1495]: time="2024-12-16T09:39:41.748015383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:39:41.760059 containerd[1495]: time="2024-12-16T09:39:41.759903042Z" level=info msg="CreateContainer within sandbox \"b6bd63f9661e6ad019e5897b3ffd801ed4ff834b001ed440f7ba4c7fbea78c3b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"369b4b094cb709d6e67fc019ebd41b8c1700caff554e15cf201c4726b447119e\"" Dec 16 09:39:41.763654 containerd[1495]: time="2024-12-16T09:39:41.761891263Z" level=info msg="StartContainer for \"369b4b094cb709d6e67fc019ebd41b8c1700caff554e15cf201c4726b447119e\"" Dec 16 09:39:41.772275 systemd[1]: Started cri-containerd-41742806cb1ef29e93d7c2b533c2ad8ca4030f0505e49358bafe9f371dcd2fc6.scope - libcontainer container 41742806cb1ef29e93d7c2b533c2ad8ca4030f0505e49358bafe9f371dcd2fc6. Dec 16 09:39:41.802258 systemd[1]: Started cri-containerd-369b4b094cb709d6e67fc019ebd41b8c1700caff554e15cf201c4726b447119e.scope - libcontainer container 369b4b094cb709d6e67fc019ebd41b8c1700caff554e15cf201c4726b447119e. 
Dec 16 09:39:41.835178 containerd[1495]: time="2024-12-16T09:39:41.834800491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-898kn,Uid:f961a2b5-dee8-4f4e-a623-80df9c16db21,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"41742806cb1ef29e93d7c2b533c2ad8ca4030f0505e49358bafe9f371dcd2fc6\"" Dec 16 09:39:41.837569 containerd[1495]: time="2024-12-16T09:39:41.837538917Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 16 09:39:41.846076 containerd[1495]: time="2024-12-16T09:39:41.846025535Z" level=info msg="StartContainer for \"369b4b094cb709d6e67fc019ebd41b8c1700caff554e15cf201c4726b447119e\" returns successfully" Dec 16 09:39:42.205026 kubelet[2715]: I1216 09:39:42.204946 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t8vl8" podStartSLOduration=1.204923922 podStartE2EDuration="1.204923922s" podCreationTimestamp="2024-12-16 09:39:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-16 09:39:42.204670072 +0000 UTC m=+6.206125719" watchObservedRunningTime="2024-12-16 09:39:42.204923922 +0000 UTC m=+6.206379580" Dec 16 09:39:42.404561 systemd[1]: run-containerd-runc-k8s.io-b6bd63f9661e6ad019e5897b3ffd801ed4ff834b001ed440f7ba4c7fbea78c3b-runc.SHYtu6.mount: Deactivated successfully. Dec 16 09:39:44.002303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1703935099.mount: Deactivated successfully. Dec 16 09:39:46.693442 containerd[1495]: time="2024-12-16T09:39:46.693348849Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:39:46.694721 containerd[1495]: time="2024-12-16T09:39:46.694448372Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764285" Dec 16 09:39:46.695914 containerd[1495]: time="2024-12-16T09:39:46.695874115Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:39:46.698231 containerd[1495]: time="2024-12-16T09:39:46.698153704Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:39:46.698876 containerd[1495]: time="2024-12-16T09:39:46.698842324Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 4.861045028s" Dec 16 09:39:46.698922 containerd[1495]: time="2024-12-16T09:39:46.698881318Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 16 09:39:46.702410 containerd[1495]: time="2024-12-16T09:39:46.702349519Z" level=info msg="CreateContainer within sandbox \"41742806cb1ef29e93d7c2b533c2ad8ca4030f0505e49358bafe9f371dcd2fc6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 09:39:46.720317 containerd[1495]: time="2024-12-16T09:39:46.720154927Z" level=info msg="CreateContainer within sandbox 
\"41742806cb1ef29e93d7c2b533c2ad8ca4030f0505e49358bafe9f371dcd2fc6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"887d7ac891664e807dec2c7525fa34570be91e5958ddd03a734315bdea099053\"" Dec 16 09:39:46.721356 containerd[1495]: time="2024-12-16T09:39:46.721315456Z" level=info msg="StartContainer for \"887d7ac891664e807dec2c7525fa34570be91e5958ddd03a734315bdea099053\"" Dec 16 09:39:46.761996 systemd[1]: Started cri-containerd-887d7ac891664e807dec2c7525fa34570be91e5958ddd03a734315bdea099053.scope - libcontainer container 887d7ac891664e807dec2c7525fa34570be91e5958ddd03a734315bdea099053. Dec 16 09:39:46.796000 containerd[1495]: time="2024-12-16T09:39:46.795955806Z" level=info msg="StartContainer for \"887d7ac891664e807dec2c7525fa34570be91e5958ddd03a734315bdea099053\" returns successfully" Dec 16 09:39:47.873911 kubelet[2715]: I1216 09:39:47.873819 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-898kn" podStartSLOduration=2.010582762 podStartE2EDuration="6.87379066s" podCreationTimestamp="2024-12-16 09:39:41 +0000 UTC" firstStartedPulling="2024-12-16 09:39:41.836762944 +0000 UTC m=+5.838218581" lastFinishedPulling="2024-12-16 09:39:46.699970842 +0000 UTC m=+10.701426479" observedRunningTime="2024-12-16 09:39:47.21220051 +0000 UTC m=+11.213656177" watchObservedRunningTime="2024-12-16 09:39:47.87379066 +0000 UTC m=+11.875246338" Dec 16 09:39:49.868003 systemd[1]: Created slice kubepods-besteffort-pod180f1fe1_23f0_4551_b070_27f6a4652cc4.slice - libcontainer container kubepods-besteffort-pod180f1fe1_23f0_4551_b070_27f6a4652cc4.slice. Dec 16 09:39:49.924029 kubelet[2715]: I1216 09:39:49.923965 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/180f1fe1-23f0-4551-b070-27f6a4652cc4-typha-certs\") pod \"calico-typha-ccb8f66f8-ls787\" (UID: \"180f1fe1-23f0-4551-b070-27f6a4652cc4\") " pod="calico-system/calico-typha-ccb8f66f8-ls787" Dec 16 09:39:49.924484 kubelet[2715]: I1216 09:39:49.924057 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcls5\" (UniqueName: \"kubernetes.io/projected/180f1fe1-23f0-4551-b070-27f6a4652cc4-kube-api-access-xcls5\") pod \"calico-typha-ccb8f66f8-ls787\" (UID: \"180f1fe1-23f0-4551-b070-27f6a4652cc4\") " pod="calico-system/calico-typha-ccb8f66f8-ls787" Dec 16 09:39:49.924484 kubelet[2715]: I1216 09:39:49.924078 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/180f1fe1-23f0-4551-b070-27f6a4652cc4-tigera-ca-bundle\") pod \"calico-typha-ccb8f66f8-ls787\" (UID: \"180f1fe1-23f0-4551-b070-27f6a4652cc4\") " pod="calico-system/calico-typha-ccb8f66f8-ls787" Dec 16 09:39:49.986123 systemd[1]: Created slice kubepods-besteffort-pod0efe44f3_b6a3_424b_bdbe_df0a131921ef.slice - libcontainer container kubepods-besteffort-pod0efe44f3_b6a3_424b_bdbe_df0a131921ef.slice. 
Dec 16 09:39:50.024284 kubelet[2715]: I1216 09:39:50.024223 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-xtables-lock\") pod \"calico-node-w9ksm\" (UID: \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\") " pod="calico-system/calico-node-w9ksm" Dec 16 09:39:50.024472 kubelet[2715]: I1216 09:39:50.024300 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-var-run-calico\") pod \"calico-node-w9ksm\" (UID: \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\") " pod="calico-system/calico-node-w9ksm" Dec 16 09:39:50.024472 kubelet[2715]: I1216 09:39:50.024319 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-cni-log-dir\") pod \"calico-node-w9ksm\" (UID: \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\") " pod="calico-system/calico-node-w9ksm" Dec 16 09:39:50.024472 kubelet[2715]: I1216 09:39:50.024334 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0efe44f3-b6a3-424b-bdbe-df0a131921ef-tigera-ca-bundle\") pod \"calico-node-w9ksm\" (UID: \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\") " pod="calico-system/calico-node-w9ksm" Dec 16 09:39:50.024472 kubelet[2715]: I1216 09:39:50.024348 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-policysync\") pod \"calico-node-w9ksm\" (UID: \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\") " pod="calico-system/calico-node-w9ksm" Dec 16 09:39:50.024472 kubelet[2715]: I1216 09:39:50.024362 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgnvq\" (UniqueName: \"kubernetes.io/projected/0efe44f3-b6a3-424b-bdbe-df0a131921ef-kube-api-access-mgnvq\") pod \"calico-node-w9ksm\" (UID: \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\") " pod="calico-system/calico-node-w9ksm" Dec 16 09:39:50.024646 kubelet[2715]: I1216 09:39:50.024387 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-flexvol-driver-host\") pod \"calico-node-w9ksm\" (UID: \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\") " pod="calico-system/calico-node-w9ksm" Dec 16 09:39:50.024646 kubelet[2715]: I1216 09:39:50.024404 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-lib-modules\") pod \"calico-node-w9ksm\" (UID: \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\") " pod="calico-system/calico-node-w9ksm" Dec 16 09:39:50.024646 kubelet[2715]: I1216 09:39:50.024416 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-cni-net-dir\") pod \"calico-node-w9ksm\" (UID: \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\") " pod="calico-system/calico-node-w9ksm" Dec 16 09:39:50.024646 kubelet[2715]: I1216 09:39:50.024430 2715 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0efe44f3-b6a3-424b-bdbe-df0a131921ef-node-certs\") pod \"calico-node-w9ksm\" (UID: \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\") " pod="calico-system/calico-node-w9ksm" Dec 16 09:39:50.024646 kubelet[2715]: I1216 09:39:50.024452 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-var-lib-calico\") pod \"calico-node-w9ksm\" (UID: \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\") " pod="calico-system/calico-node-w9ksm" Dec 16 09:39:50.024827 kubelet[2715]: I1216 09:39:50.024474 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-cni-bin-dir\") pod \"calico-node-w9ksm\" (UID: \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\") " pod="calico-system/calico-node-w9ksm" Dec 16 09:39:50.129139 kubelet[2715]: E1216 09:39:50.129033 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:50.129337 kubelet[2715]: W1216 09:39:50.129223 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:50.129337 kubelet[2715]: E1216 09:39:50.129245 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:50.137673 kubelet[2715]: E1216 09:39:50.137649 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:50.137874 kubelet[2715]: W1216 09:39:50.137757 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:50.137874 kubelet[2715]: E1216 09:39:50.137782 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:50.146632 kubelet[2715]: E1216 09:39:50.146592 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:50.146632 kubelet[2715]: W1216 09:39:50.146628 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:50.146789 kubelet[2715]: E1216 09:39:50.146653 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 09:39:50.177783 containerd[1495]: time="2024-12-16T09:39:50.177737541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-ccb8f66f8-ls787,Uid:180f1fe1-23f0-4551-b070-27f6a4652cc4,Namespace:calico-system,Attempt:0,}" Dec 16 09:39:50.208626 kubelet[2715]: E1216 09:39:50.208521 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6lhq" podUID="b184f5aa-f13a-4907-82b2-11f9a166985b" Dec 16 09:39:50.222776 containerd[1495]: time="2024-12-16T09:39:50.222407768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 16 09:39:50.223157 containerd[1495]: time="2024-12-16T09:39:50.223048199Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 16 09:39:50.223297 containerd[1495]: time="2024-12-16T09:39:50.223145976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:39:50.224399 containerd[1495]: time="2024-12-16T09:39:50.224310968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:39:50.226256 kubelet[2715]: E1216 09:39:50.226053 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:50.226256 kubelet[2715]: W1216 09:39:50.226074 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:50.226256 kubelet[2715]: E1216 09:39:50.226204 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:50.227308 kubelet[2715]: E1216 09:39:50.227125 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:50.227308 kubelet[2715]: W1216 09:39:50.227136 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:50.227308 kubelet[2715]: E1216 09:39:50.227145 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:50.228475 kubelet[2715]: E1216 09:39:50.228328 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:50.228475 kubelet[2715]: W1216 09:39:50.228338 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:50.228475 kubelet[2715]: E1216 09:39:50.228347 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 09:39:50.230855 kubelet[2715]: E1216 09:39:50.230131 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:50.230855 kubelet[2715]: W1216 09:39:50.230163 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:50.230855 kubelet[2715]: E1216 09:39:50.230177 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:50.230855 kubelet[2715]: E1216 09:39:50.230463 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:50.230855 kubelet[2715]: W1216 09:39:50.230471 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:50.230855 kubelet[2715]: E1216 09:39:50.230480 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:50.230855 kubelet[2715]: E1216 09:39:50.230693 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:50.230855 kubelet[2715]: W1216 09:39:50.230701 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:50.230855 kubelet[2715]: E1216 09:39:50.230710 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:50.231344 kubelet[2715]: E1216 09:39:50.231293 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:50.231344 kubelet[2715]: W1216 09:39:50.231303 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:50.231344 kubelet[2715]: E1216 09:39:50.231313 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:50.232099 kubelet[2715]: E1216 09:39:50.231803 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:50.232099 kubelet[2715]: W1216 09:39:50.231814 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:50.232099 kubelet[2715]: E1216 09:39:50.231937 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 09:39:50.233290 kubelet[2715]: E1216 09:39:50.233134 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:50.233290 kubelet[2715]: W1216 09:39:50.233164 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:50.233290 kubelet[2715]: E1216 09:39:50.233194 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:50.234264 kubelet[2715]: E1216 09:39:50.234092 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:50.234264 kubelet[2715]: W1216 09:39:50.234107 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:50.234264 kubelet[2715]: E1216 09:39:50.234117 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:50.235985 kubelet[2715]: E1216 09:39:50.235939 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:50.235985 kubelet[2715]: W1216 09:39:50.235962 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:50.235985 kubelet[2715]: E1216 09:39:50.235973 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:50.236398 kubelet[2715]: E1216 09:39:50.236167 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:50.236398 kubelet[2715]: W1216 09:39:50.236179 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:50.236398 kubelet[2715]: E1216 09:39:50.236187 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:50.236593 kubelet[2715]: E1216 09:39:50.236426 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:50.236593 kubelet[2715]: W1216 09:39:50.236436 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:50.236593 kubelet[2715]: E1216 09:39:50.236447 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Dec 16 09:39:50.238002 kubelet[2715]: E1216 09:39:50.237792 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:39:50.238002 kubelet[2715]: W1216 09:39:50.237807 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:39:50.238002 kubelet[2715]: E1216 09:39:50.237816 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 09:39:50.244786 kubelet[2715]: I1216 09:39:50.244267 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h987\" (UniqueName: \"kubernetes.io/projected/b184f5aa-f13a-4907-82b2-11f9a166985b-kube-api-access-6h987\") pod \"csi-node-driver-s6lhq\" (UID: \"b184f5aa-f13a-4907-82b2-11f9a166985b\") " pod="calico-system/csi-node-driver-s6lhq"
Dec 16 09:39:50.247551 kubelet[2715]: I1216 09:39:50.245264 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b184f5aa-f13a-4907-82b2-11f9a166985b-registration-dir\") pod \"csi-node-driver-s6lhq\" (UID: \"b184f5aa-f13a-4907-82b2-11f9a166985b\") " pod="calico-system/csi-node-driver-s6lhq"
Dec 16 09:39:50.247551 kubelet[2715]: I1216 09:39:50.246291 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b184f5aa-f13a-4907-82b2-11f9a166985b-varrun\") pod \"csi-node-driver-s6lhq\" (UID: \"b184f5aa-f13a-4907-82b2-11f9a166985b\") " pod="calico-system/csi-node-driver-s6lhq"
Dec 16 09:39:50.247830 kubelet[2715]: I1216 09:39:50.247578 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b184f5aa-f13a-4907-82b2-11f9a166985b-kubelet-dir\") pod \"csi-node-driver-s6lhq\" (UID: \"b184f5aa-f13a-4907-82b2-11f9a166985b\") " pod="calico-system/csi-node-driver-s6lhq"
Dec 16 09:39:50.255296 kubelet[2715]: I1216 09:39:50.252110 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b184f5aa-f13a-4907-82b2-11f9a166985b-socket-dir\") pod \"csi-node-driver-s6lhq\" (UID: \"b184f5aa-f13a-4907-82b2-11f9a166985b\") " pod="calico-system/csi-node-driver-s6lhq"
Dec 16 09:39:50.259898 systemd[1]: Started cri-containerd-4daea6f0886b720c97cf2fc782f7331ac86b8609f203dad88da7cf848b1c46bf.scope - libcontainer container 4daea6f0886b720c97cf2fc782f7331ac86b8609f203dad88da7cf848b1c46bf.
Dec 16 09:39:50.291262 containerd[1495]: time="2024-12-16T09:39:50.290851276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w9ksm,Uid:0efe44f3-b6a3-424b-bdbe-df0a131921ef,Namespace:calico-system,Attempt:0,}"
Dec 16 09:39:50.336315 containerd[1495]: time="2024-12-16T09:39:50.336053146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 16 09:39:50.336315 containerd[1495]: time="2024-12-16T09:39:50.336127478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 16 09:39:50.336315 containerd[1495]: time="2024-12-16T09:39:50.336142697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 16 09:39:50.336315 containerd[1495]: time="2024-12-16T09:39:50.336240633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 16 09:39:50.387218 containerd[1495]: time="2024-12-16T09:39:50.386114817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-ccb8f66f8-ls787,Uid:180f1fe1-23f0-4551-b070-27f6a4652cc4,Namespace:calico-system,Attempt:0,} returns sandbox id \"4daea6f0886b720c97cf2fc782f7331ac86b8609f203dad88da7cf848b1c46bf\""
Dec 16 09:39:50.399329 systemd[1]: Started cri-containerd-fefada17c75ea827ebec98683116e7187ff5d359b06b8fc7d9d155263ca7cf81.scope - libcontainer container fefada17c75ea827ebec98683116e7187ff5d359b06b8fc7d9d155263ca7cf81.
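Every FlexVolume failure in this stretch of the journal has the same root cause: kubelet probes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, the executable does not exist yet, and the empty stdout then fails JSON unmarshalling ("unexpected end of JSON input"). That driver is normally installed by the calico pod2daemon-flexvol image whose pull appears a few entries below. For reference, a minimal Go sketch of the init handshake kubelet expects from a FlexVolume executable (illustrative only, not the actual nodeagent~uds binary):

// flexvol_stub.go: a hypothetical stand-in for the missing FlexVolume
// driver. Kubelet invokes the executable as "<driver> init" and expects
// a JSON status object on stdout; an absent binary yields empty output,
// which is exactly the unmarshal error repeated in this log.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON shape kubelet's driver-call.go unmarshals.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Report success and advertise no attach support, so kubelet
		// will not route attach/detach calls to this driver.
		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
		return
	}
	// Every other driver command is unimplemented in this sketch.
	reply(driverStatus{Status: "Not supported"})
	os.Exit(1)
}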
Dec 16 09:39:50.401368 containerd[1495]: time="2024-12-16T09:39:50.401322379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Dec 16 09:39:50.446737 containerd[1495]: time="2024-12-16T09:39:50.446643808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w9ksm,Uid:0efe44f3-b6a3-424b-bdbe-df0a131921ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"fefada17c75ea827ebec98683116e7187ff5d359b06b8fc7d9d155263ca7cf81\""
Dec 16 09:39:52.046640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1250083643.mount: Deactivated successfully.
Dec 16 09:39:52.144606 kubelet[2715]: E1216 09:39:52.144529 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6lhq" podUID="b184f5aa-f13a-4907-82b2-11f9a166985b"
Dec 16 09:39:52.862939 containerd[1495]: time="2024-12-16T09:39:52.862243114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:39:52.863597 containerd[1495]: time="2024-12-16T09:39:52.863566421Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Dec 16 09:39:52.864446 containerd[1495]: time="2024-12-16T09:39:52.864418047Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:39:52.866494 containerd[1495]: time="2024-12-16T09:39:52.866466790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:39:52.867154 containerd[1495]: time="2024-12-16T09:39:52.867119326Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.465753525s"
Dec 16 09:39:52.867245 containerd[1495]: time="2024-12-16T09:39:52.867217914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Dec 16 09:39:52.875382 containerd[1495]: time="2024-12-16T09:39:52.875162726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Dec 16 09:39:52.891007 containerd[1495]: time="2024-12-16T09:39:52.889546927Z" level=info msg="CreateContainer within sandbox \"4daea6f0886b720c97cf2fc782f7331ac86b8609f203dad88da7cf848b1c46bf\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 16 09:39:52.915961 containerd[1495]: time="2024-12-16T09:39:52.915768100Z" level=info msg="CreateContainer within sandbox \"4daea6f0886b720c97cf2fc782f7331ac86b8609f203dad88da7cf848b1c46bf\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"28322d65c93a493c820c0886dadb21ffa45007ca8a7c988faa5097e9012e3ccc\""
Dec 16 09:39:52.918763 containerd[1495]: time="2024-12-16T09:39:52.917230433Z" level=info msg="StartContainer for \"28322d65c93a493c820c0886dadb21ffa45007ca8a7c988faa5097e9012e3ccc\""
Dec 16 09:39:52.953975 systemd[1]: Started cri-containerd-28322d65c93a493c820c0886dadb21ffa45007ca8a7c988faa5097e9012e3ccc.scope - libcontainer container 28322d65c93a493c820c0886dadb21ffa45007ca8a7c988faa5097e9012e3ccc.
Dec 16 09:39:53.002439 containerd[1495]: time="2024-12-16T09:39:53.002365847Z" level=info msg="StartContainer for \"28322d65c93a493c820c0886dadb21ffa45007ca8a7c988faa5097e9012e3ccc\" returns successfully"
Dec 16 09:39:53.274314 kubelet[2715]: I1216 09:39:53.273578 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-ccb8f66f8-ls787" podStartSLOduration=1.802558388 podStartE2EDuration="4.270945269s" podCreationTimestamp="2024-12-16 09:39:49 +0000 UTC" firstStartedPulling="2024-12-16 09:39:50.400627213 +0000 UTC m=+14.402082850" lastFinishedPulling="2024-12-16 09:39:52.869014095 +0000 UTC m=+16.870469731" observedRunningTime="2024-12-16 09:39:53.263804756 +0000 UTC m=+17.265260423" watchObservedRunningTime="2024-12-16 09:39:53.270945269 +0000 UTC m=+17.272400947"
Dec 16 09:39:53.279533 kubelet[2715]: E1216 09:39:53.279448 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 09:39:53.279705 kubelet[2715]: W1216 09:39:53.279563 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 09:39:53.279705 kubelet[2715]: E1216 09:39:53.279603 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Dec 16 09:39:54.144287 kubelet[2715]: E1216 09:39:54.144228 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6lhq" podUID="b184f5aa-f13a-4907-82b2-11f9a166985b" Dec 16 09:39:54.295614 kubelet[2715]: E1216 09:39:54.295571 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.295614 kubelet[2715]: W1216 09:39:54.295601 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.296225 kubelet[2715]: E1216 09:39:54.295624 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:54.296225 kubelet[2715]: E1216 09:39:54.295864 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.296225 kubelet[2715]: W1216 09:39:54.295872 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.296225 kubelet[2715]: E1216 09:39:54.295882 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:54.296225 kubelet[2715]: E1216 09:39:54.296045 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.296225 kubelet[2715]: W1216 09:39:54.296053 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.296225 kubelet[2715]: E1216 09:39:54.296061 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:54.296225 kubelet[2715]: E1216 09:39:54.296223 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.296225 kubelet[2715]: W1216 09:39:54.296231 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.296563 kubelet[2715]: E1216 09:39:54.296239 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 09:39:54.296563 kubelet[2715]: E1216 09:39:54.296428 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.296563 kubelet[2715]: W1216 09:39:54.296435 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.296563 kubelet[2715]: E1216 09:39:54.296443 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:54.296676 kubelet[2715]: E1216 09:39:54.296607 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.296676 kubelet[2715]: W1216 09:39:54.296614 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.296676 kubelet[2715]: E1216 09:39:54.296622 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:54.296860 kubelet[2715]: E1216 09:39:54.296839 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.296860 kubelet[2715]: W1216 09:39:54.296852 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.296925 kubelet[2715]: E1216 09:39:54.296864 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:54.297063 kubelet[2715]: E1216 09:39:54.297044 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.297112 kubelet[2715]: W1216 09:39:54.297066 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.297112 kubelet[2715]: E1216 09:39:54.297076 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:54.297274 kubelet[2715]: E1216 09:39:54.297253 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.297274 kubelet[2715]: W1216 09:39:54.297267 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.297349 kubelet[2715]: E1216 09:39:54.297276 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 09:39:54.297592 kubelet[2715]: E1216 09:39:54.297549 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.297592 kubelet[2715]: W1216 09:39:54.297584 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.297655 kubelet[2715]: E1216 09:39:54.297594 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:54.297873 kubelet[2715]: E1216 09:39:54.297783 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.297873 kubelet[2715]: W1216 09:39:54.297794 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.297873 kubelet[2715]: E1216 09:39:54.297803 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:54.298007 kubelet[2715]: E1216 09:39:54.297988 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.298007 kubelet[2715]: W1216 09:39:54.297998 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.298007 kubelet[2715]: E1216 09:39:54.298007 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:54.298201 kubelet[2715]: E1216 09:39:54.298182 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.298201 kubelet[2715]: W1216 09:39:54.298195 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.298259 kubelet[2715]: E1216 09:39:54.298204 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:54.298407 kubelet[2715]: E1216 09:39:54.298376 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.298407 kubelet[2715]: W1216 09:39:54.298391 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.298407 kubelet[2715]: E1216 09:39:54.298407 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 09:39:54.298589 kubelet[2715]: E1216 09:39:54.298568 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.298589 kubelet[2715]: W1216 09:39:54.298582 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.298650 kubelet[2715]: E1216 09:39:54.298591 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:54.299814 kubelet[2715]: E1216 09:39:54.299797 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.299814 kubelet[2715]: W1216 09:39:54.299808 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.299908 kubelet[2715]: E1216 09:39:54.299816 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:54.299997 kubelet[2715]: E1216 09:39:54.299985 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.299997 kubelet[2715]: W1216 09:39:54.299994 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.300073 kubelet[2715]: E1216 09:39:54.300005 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:54.300205 kubelet[2715]: E1216 09:39:54.300187 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.300205 kubelet[2715]: W1216 09:39:54.300201 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.300255 kubelet[2715]: E1216 09:39:54.300215 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:54.300455 kubelet[2715]: E1216 09:39:54.300437 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.300455 kubelet[2715]: W1216 09:39:54.300450 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.300509 kubelet[2715]: E1216 09:39:54.300465 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 09:39:54.300689 kubelet[2715]: E1216 09:39:54.300664 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.300689 kubelet[2715]: W1216 09:39:54.300676 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.300689 kubelet[2715]: E1216 09:39:54.300690 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:54.300951 kubelet[2715]: E1216 09:39:54.300933 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.300951 kubelet[2715]: W1216 09:39:54.300946 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.301010 kubelet[2715]: E1216 09:39:54.300960 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:54.301847 kubelet[2715]: E1216 09:39:54.301133 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.301847 kubelet[2715]: W1216 09:39:54.301139 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.301847 kubelet[2715]: E1216 09:39:54.301148 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:54.301847 kubelet[2715]: E1216 09:39:54.301320 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.301847 kubelet[2715]: W1216 09:39:54.301329 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.301847 kubelet[2715]: E1216 09:39:54.301343 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:54.301847 kubelet[2715]: E1216 09:39:54.301524 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.301847 kubelet[2715]: W1216 09:39:54.301531 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.301847 kubelet[2715]: E1216 09:39:54.301553 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 09:39:54.301847 kubelet[2715]: E1216 09:39:54.301692 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.302071 kubelet[2715]: W1216 09:39:54.301699 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.302071 kubelet[2715]: E1216 09:39:54.301721 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:54.302169 kubelet[2715]: E1216 09:39:54.302151 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.302169 kubelet[2715]: W1216 09:39:54.302162 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.302299 kubelet[2715]: E1216 09:39:54.302176 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:54.302373 kubelet[2715]: E1216 09:39:54.302354 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.302373 kubelet[2715]: W1216 09:39:54.302367 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.302431 kubelet[2715]: E1216 09:39:54.302387 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:54.302618 kubelet[2715]: E1216 09:39:54.302601 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.302618 kubelet[2715]: W1216 09:39:54.302613 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.302680 kubelet[2715]: E1216 09:39:54.302632 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:54.303032 kubelet[2715]: E1216 09:39:54.303013 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.303032 kubelet[2715]: W1216 09:39:54.303025 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.303095 kubelet[2715]: E1216 09:39:54.303039 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 09:39:54.303227 kubelet[2715]: E1216 09:39:54.303211 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.303227 kubelet[2715]: W1216 09:39:54.303220 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.303412 kubelet[2715]: E1216 09:39:54.303231 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:54.303444 kubelet[2715]: E1216 09:39:54.303423 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.303444 kubelet[2715]: W1216 09:39:54.303431 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.303482 kubelet[2715]: E1216 09:39:54.303442 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:54.303920 kubelet[2715]: E1216 09:39:54.303901 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.303920 kubelet[2715]: W1216 09:39:54.303913 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.303984 kubelet[2715]: E1216 09:39:54.303933 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:39:54.304137 kubelet[2715]: E1216 09:39:54.304117 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:39:54.304137 kubelet[2715]: W1216 09:39:54.304129 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:39:54.304203 kubelet[2715]: E1216 09:39:54.304138 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
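A note for readers tracing the error chain above: the kubelet probes every vendor~driver directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, and a FlexVolume driver is just an executable in that directory that must answer an "init" call with a JSON status on stdout. The nodeagent~uds/uds binary has not been installed yet, so the call produces no output, and unmarshalling the empty string is what yields "unexpected end of JSON input" (the kubelet reports the missing binary as "executable file not found in $PATH" even though it names an absolute path). A minimal Go sketch of that call path, assuming a DriverStatus shape that follows the FlexVolume convention (illustrative, not the kubelet's exact code):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus mirrors the JSON a FlexVolume driver is expected to print,
// e.g. {"status":"Success","capabilities":{"attach":false}}.
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
	// With the binary absent, the exec itself fails and out stays empty.
	out, err := exec.Command(driver, "init").Output()
	if err != nil {
		fmt.Println("driver call failed:", err)
	}
	var status DriverStatus
	if err := json.Unmarshal(out, &status); err != nil {
		// json.Unmarshal on empty input returns exactly
		// "unexpected end of JSON input", as in the log above.
		fmt.Println("failed to unmarshal output for command init:", err)
		return
	}
	fmt.Printf("driver init succeeded: %+v\n", status)
}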
Dec 16 09:39:54.561875 containerd[1495]: time="2024-12-16T09:39:54.561786329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:39:54.563056 containerd[1495]: time="2024-12-16T09:39:54.562832990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121"
Dec 16 09:39:54.565072 containerd[1495]: time="2024-12-16T09:39:54.564114640Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:39:54.566769 containerd[1495]: time="2024-12-16T09:39:54.566172153Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:39:54.566769 containerd[1495]: time="2024-12-16T09:39:54.566643574Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.691426934s"
Dec 16 09:39:54.566769 containerd[1495]: time="2024-12-16T09:39:54.566670906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Dec 16 09:39:54.568906 containerd[1495]: time="2024-12-16T09:39:54.568886651Z" level=info msg="CreateContainer within sandbox \"fefada17c75ea827ebec98683116e7187ff5d359b06b8fc7d9d155263ca7cf81\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Dec 16 09:39:54.603251 containerd[1495]: time="2024-12-16T09:39:54.603203150Z" level=info msg="CreateContainer within sandbox \"fefada17c75ea827ebec98683116e7187ff5d359b06b8fc7d9d155263ca7cf81\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d39bbfd00968949a3ee1f018c3ba9642497208a1766b6ec41bda751bbc8726ef\""
Dec 16 09:39:54.604189 containerd[1495]: time="2024-12-16T09:39:54.603850527Z" level=info msg="StartContainer for \"d39bbfd00968949a3ee1f018c3ba9642497208a1766b6ec41bda751bbc8726ef\""
Dec 16 09:39:54.661071 systemd[1]: Started cri-containerd-d39bbfd00968949a3ee1f018c3ba9642497208a1766b6ec41bda751bbc8726ef.scope - libcontainer container d39bbfd00968949a3ee1f018c3ba9642497208a1766b6ec41bda751bbc8726ef.
Dec 16 09:39:54.704783 containerd[1495]: time="2024-12-16T09:39:54.704506089Z" level=info msg="StartContainer for \"d39bbfd00968949a3ee1f018c3ba9642497208a1766b6ec41bda751bbc8726ef\" returns successfully"
Dec 16 09:39:54.718013 systemd[1]: cri-containerd-d39bbfd00968949a3ee1f018c3ba9642497208a1766b6ec41bda751bbc8726ef.scope: Deactivated successfully.
Dec 16 09:39:54.752055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d39bbfd00968949a3ee1f018c3ba9642497208a1766b6ec41bda751bbc8726ef-rootfs.mount: Deactivated successfully.
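This is the fix for the FlexVolume noise above: pod2daemon-flexvol provides Calico's flexvol-driver init container, whose job is to copy the uds driver binary into the host's /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/ directory. Once it has run, the kubelet's next probe finds a real executable, and a successful init call prints a JSON status along the lines of {"status":"Success","capabilities":{"attach":false}} (illustrative of the FlexVolume convention, not necessarily this driver's exact output). Because the container copies the file and exits immediately, the scope deactivation and rootfs unmount right after "StartContainer ... returns successfully" are expected, not a crash.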
Dec 16 09:39:54.851158 containerd[1495]: time="2024-12-16T09:39:54.813776338Z" level=info msg="shim disconnected" id=d39bbfd00968949a3ee1f018c3ba9642497208a1766b6ec41bda751bbc8726ef namespace=k8s.io
Dec 16 09:39:54.851158 containerd[1495]: time="2024-12-16T09:39:54.850299025Z" level=warning msg="cleaning up after shim disconnected" id=d39bbfd00968949a3ee1f018c3ba9642497208a1766b6ec41bda751bbc8726ef namespace=k8s.io
Dec 16 09:39:54.851158 containerd[1495]: time="2024-12-16T09:39:54.850312339Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 16 09:39:55.244872 containerd[1495]: time="2024-12-16T09:39:55.243121472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Dec 16 09:39:56.142973 kubelet[2715]: E1216 09:39:56.142313 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6lhq" podUID="b184f5aa-f13a-4907-82b2-11f9a166985b"
Dec 16 09:39:58.142008 kubelet[2715]: E1216 09:39:58.141282 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6lhq" podUID="b184f5aa-f13a-4907-82b2-11f9a166985b"
Dec 16 09:40:00.141062 kubelet[2715]: E1216 09:40:00.140985 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6lhq" podUID="b184f5aa-f13a-4907-82b2-11f9a166985b"
Dec 16 09:40:00.227753 containerd[1495]: time="2024-12-16T09:40:00.227680388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:40:00.254878 containerd[1495]: time="2024-12-16T09:40:00.254798942Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Dec 16 09:40:00.256046 containerd[1495]: time="2024-12-16T09:40:00.255986057Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:40:00.258774 containerd[1495]: time="2024-12-16T09:40:00.258684369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:40:00.259916 containerd[1495]: time="2024-12-16T09:40:00.259196290Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.016029872s"
Dec 16 09:40:00.259916 containerd[1495]: time="2024-12-16T09:40:00.259230536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
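The warning-level "cleaning up after shim disconnected" lines are containerd's routine teardown after a one-shot container exits, not a failure. The real bottleneck in this stretch is the calico/cni image: roughly 97 MB (size "97647238") pulled in about 5.0 s, on the order of 19 MB/s, and until its install-cni step runs the kubelet keeps rejecting the csi-node-driver pod every couple of seconds with "cni plugin not initialized".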
level=info msg="CreateContainer within sandbox \"fefada17c75ea827ebec98683116e7187ff5d359b06b8fc7d9d155263ca7cf81\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 09:40:00.282931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3894839179.mount: Deactivated successfully. Dec 16 09:40:00.290984 containerd[1495]: time="2024-12-16T09:40:00.290878762Z" level=info msg="CreateContainer within sandbox \"fefada17c75ea827ebec98683116e7187ff5d359b06b8fc7d9d155263ca7cf81\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fb7d8f7ec6c442f00c950fb0ce295aed65f2e9bd249afd14da4c802c2f1033e1\"" Dec 16 09:40:00.291816 containerd[1495]: time="2024-12-16T09:40:00.291769288Z" level=info msg="StartContainer for \"fb7d8f7ec6c442f00c950fb0ce295aed65f2e9bd249afd14da4c802c2f1033e1\"" Dec 16 09:40:00.361937 systemd[1]: Started cri-containerd-fb7d8f7ec6c442f00c950fb0ce295aed65f2e9bd249afd14da4c802c2f1033e1.scope - libcontainer container fb7d8f7ec6c442f00c950fb0ce295aed65f2e9bd249afd14da4c802c2f1033e1. Dec 16 09:40:00.411974 containerd[1495]: time="2024-12-16T09:40:00.411787783Z" level=info msg="StartContainer for \"fb7d8f7ec6c442f00c950fb0ce295aed65f2e9bd249afd14da4c802c2f1033e1\" returns successfully" Dec 16 09:40:00.970566 systemd[1]: cri-containerd-fb7d8f7ec6c442f00c950fb0ce295aed65f2e9bd249afd14da4c802c2f1033e1.scope: Deactivated successfully. Dec 16 09:40:01.000200 kubelet[2715]: I1216 09:40:01.000173 2715 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 16 09:40:01.019638 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb7d8f7ec6c442f00c950fb0ce295aed65f2e9bd249afd14da4c802c2f1033e1-rootfs.mount: Deactivated successfully. Dec 16 09:40:01.033249 containerd[1495]: time="2024-12-16T09:40:01.033147912Z" level=info msg="shim disconnected" id=fb7d8f7ec6c442f00c950fb0ce295aed65f2e9bd249afd14da4c802c2f1033e1 namespace=k8s.io Dec 16 09:40:01.033249 containerd[1495]: time="2024-12-16T09:40:01.033239528Z" level=warning msg="cleaning up after shim disconnected" id=fb7d8f7ec6c442f00c950fb0ce295aed65f2e9bd249afd14da4c802c2f1033e1 namespace=k8s.io Dec 16 09:40:01.033249 containerd[1495]: time="2024-12-16T09:40:01.033248294Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 09:40:01.066232 kubelet[2715]: W1216 09:40:01.063706 2715 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4081-2-1-4-1bd0c0376a" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-2-1-4-1bd0c0376a' and this object Dec 16 09:40:01.066232 kubelet[2715]: E1216 09:40:01.063784 2715 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4081-2-1-4-1bd0c0376a\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-2-1-4-1bd0c0376a' and this object" logger="UnhandledError" Dec 16 09:40:01.080198 systemd[1]: Created slice kubepods-besteffort-pod9e2e45b5_d5ad_4866_a437_2038f6559801.slice - libcontainer container kubepods-besteffort-pod9e2e45b5_d5ad_4866_a437_2038f6559801.slice. 
Dec 16 09:40:01.080198 systemd[1]: Created slice kubepods-besteffort-pod9e2e45b5_d5ad_4866_a437_2038f6559801.slice - libcontainer container kubepods-besteffort-pod9e2e45b5_d5ad_4866_a437_2038f6559801.slice.
Dec 16 09:40:01.091561 systemd[1]: Created slice kubepods-burstable-poda579e978_3399_4d3d_9c17_650bcb6672c6.slice - libcontainer container kubepods-burstable-poda579e978_3399_4d3d_9c17_650bcb6672c6.slice.
Dec 16 09:40:01.105405 systemd[1]: Created slice kubepods-burstable-pod2b1e1fb6_74af_4597_8a80_d045a1736cc2.slice - libcontainer container kubepods-burstable-pod2b1e1fb6_74af_4597_8a80_d045a1736cc2.slice.
Dec 16 09:40:01.116907 systemd[1]: Created slice kubepods-besteffort-pode8eba98c_945c_4206_8249_239323f54417.slice - libcontainer container kubepods-besteffort-pode8eba98c_945c_4206_8249_239323f54417.slice.
Dec 16 09:40:01.127398 systemd[1]: Created slice kubepods-besteffort-podccf88f28_2dde_42dd_bdda_69127b53bf8a.slice - libcontainer container kubepods-besteffort-podccf88f28_2dde_42dd_bdda_69127b53bf8a.slice.
Dec 16 09:40:01.156655 kubelet[2715]: I1216 09:40:01.156598 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a579e978-3399-4d3d-9c17-650bcb6672c6-config-volume\") pod \"coredns-6f6b679f8f-kmszk\" (UID: \"a579e978-3399-4d3d-9c17-650bcb6672c6\") " pod="kube-system/coredns-6f6b679f8f-kmszk"
Dec 16 09:40:01.157590 kubelet[2715]: I1216 09:40:01.157399 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e2e45b5-d5ad-4866-a437-2038f6559801-tigera-ca-bundle\") pod \"calico-kube-controllers-74864695c7-klzq2\" (UID: \"9e2e45b5-d5ad-4866-a437-2038f6559801\") " pod="calico-system/calico-kube-controllers-74864695c7-klzq2"
Dec 16 09:40:01.157590 kubelet[2715]: I1216 09:40:01.157467 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l9tz\" (UniqueName: \"kubernetes.io/projected/9e2e45b5-d5ad-4866-a437-2038f6559801-kube-api-access-2l9tz\") pod \"calico-kube-controllers-74864695c7-klzq2\" (UID: \"9e2e45b5-d5ad-4866-a437-2038f6559801\") " pod="calico-system/calico-kube-controllers-74864695c7-klzq2"
Dec 16 09:40:01.157590 kubelet[2715]: I1216 09:40:01.157532 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2srj\" (UniqueName: \"kubernetes.io/projected/a579e978-3399-4d3d-9c17-650bcb6672c6-kube-api-access-b2srj\") pod \"coredns-6f6b679f8f-kmszk\" (UID: \"a579e978-3399-4d3d-9c17-650bcb6672c6\") " pod="kube-system/coredns-6f6b679f8f-kmszk"
Dec 16 09:40:01.260439 kubelet[2715]: I1216 09:40:01.258447 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g2rq\" (UniqueName: \"kubernetes.io/projected/2b1e1fb6-74af-4597-8a80-d045a1736cc2-kube-api-access-7g2rq\") pod \"coredns-6f6b679f8f-vspcs\" (UID: \"2b1e1fb6-74af-4597-8a80-d045a1736cc2\") " pod="kube-system/coredns-6f6b679f8f-vspcs"
Dec 16 09:40:01.260439 kubelet[2715]: I1216 09:40:01.258551 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ccf88f28-2dde-42dd-bdda-69127b53bf8a-calico-apiserver-certs\") pod \"calico-apiserver-58b65f65c5-7tp4b\" (UID: \"ccf88f28-2dde-42dd-bdda-69127b53bf8a\") " pod="calico-apiserver/calico-apiserver-58b65f65c5-7tp4b"
Dec 16 09:40:01.260439 kubelet[2715]: I1216 09:40:01.258579 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqgm6\" (UniqueName: \"kubernetes.io/projected/ccf88f28-2dde-42dd-bdda-69127b53bf8a-kube-api-access-fqgm6\") pod \"calico-apiserver-58b65f65c5-7tp4b\" (UID: \"ccf88f28-2dde-42dd-bdda-69127b53bf8a\") " pod="calico-apiserver/calico-apiserver-58b65f65c5-7tp4b"
Dec 16 09:40:01.260439 kubelet[2715]: I1216 09:40:01.258638 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e8eba98c-945c-4206-8249-239323f54417-calico-apiserver-certs\") pod \"calico-apiserver-58b65f65c5-dvxqf\" (UID: \"e8eba98c-945c-4206-8249-239323f54417\") " pod="calico-apiserver/calico-apiserver-58b65f65c5-dvxqf"
Dec 16 09:40:01.260439 kubelet[2715]: I1216 09:40:01.258682 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b1e1fb6-74af-4597-8a80-d045a1736cc2-config-volume\") pod \"coredns-6f6b679f8f-vspcs\" (UID: \"2b1e1fb6-74af-4597-8a80-d045a1736cc2\") " pod="kube-system/coredns-6f6b679f8f-vspcs"
Dec 16 09:40:01.260716 kubelet[2715]: I1216 09:40:01.258707 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktcst\" (UniqueName: \"kubernetes.io/projected/e8eba98c-945c-4206-8249-239323f54417-kube-api-access-ktcst\") pod \"calico-apiserver-58b65f65c5-dvxqf\" (UID: \"e8eba98c-945c-4206-8249-239323f54417\") " pod="calico-apiserver/calico-apiserver-58b65f65c5-dvxqf"
Dec 16 09:40:01.293191 containerd[1495]: time="2024-12-16T09:40:01.292812873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Dec 16 09:40:01.397500 containerd[1495]: time="2024-12-16T09:40:01.397289048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74864695c7-klzq2,Uid:9e2e45b5-d5ad-4866-a437-2038f6559801,Namespace:calico-system,Attempt:0,}"
Dec 16 09:40:01.424609 containerd[1495]: time="2024-12-16T09:40:01.424210164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58b65f65c5-dvxqf,Uid:e8eba98c-945c-4206-8249-239323f54417,Namespace:calico-apiserver,Attempt:0,}"
Dec 16 09:40:01.442547 containerd[1495]: time="2024-12-16T09:40:01.442501043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58b65f65c5-7tp4b,Uid:ccf88f28-2dde-42dd-bdda-69127b53bf8a,Namespace:calico-apiserver,Attempt:0,}"
Dec 16 09:40:01.627146 containerd[1495]: time="2024-12-16T09:40:01.627083939Z" level=error msg="Failed to destroy network for sandbox \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 09:40:01.633260 containerd[1495]: time="2024-12-16T09:40:01.632331810Z" level=error msg="encountered an error cleaning up failed sandbox \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 09:40:01.633260 containerd[1495]: time="2024-12-16T09:40:01.632409449Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58b65f65c5-dvxqf,Uid:e8eba98c-945c-4206-8249-239323f54417,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 09:40:01.639599 containerd[1495]: time="2024-12-16T09:40:01.639543147Z" level=error msg="Failed to destroy network for sandbox \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 09:40:01.640291 containerd[1495]: time="2024-12-16T09:40:01.639966118Z" level=error msg="encountered an error cleaning up failed sandbox \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 09:40:01.640291 containerd[1495]: time="2024-12-16T09:40:01.640014551Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74864695c7-klzq2,Uid:9e2e45b5-d5ad-4866-a437-2038f6559801,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 09:40:01.641186 kubelet[2715]: E1216 09:40:01.640007 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 09:40:01.641186 kubelet[2715]: E1216 09:40:01.640127 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58b65f65c5-dvxqf"
Dec 16 09:40:01.641186 kubelet[2715]: E1216 09:40:01.640224 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 09:40:01.641186 kubelet[2715]: E1216 09:40:01.640287 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74864695c7-klzq2"
Dec 16 09:40:01.643769 containerd[1495]: time="2024-12-16T09:40:01.643199197Z" level=error msg="Failed to destroy network for sandbox \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 09:40:01.644021 kubelet[2715]: E1216 09:40:01.643965 2715 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58b65f65c5-dvxqf"
Dec 16 09:40:01.644197 kubelet[2715]: E1216 09:40:01.644163 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-58b65f65c5-dvxqf_calico-apiserver(e8eba98c-945c-4206-8249-239323f54417)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-58b65f65c5-dvxqf_calico-apiserver(e8eba98c-945c-4206-8249-239323f54417)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58b65f65c5-dvxqf" podUID="e8eba98c-945c-4206-8249-239323f54417"
Dec 16 09:40:01.644335 kubelet[2715]: E1216 09:40:01.644259 2715 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74864695c7-klzq2"
Dec 16 09:40:01.644449 kubelet[2715]: E1216 09:40:01.644422 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74864695c7-klzq2_calico-system(9e2e45b5-d5ad-4866-a437-2038f6559801)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74864695c7-klzq2_calico-system(9e2e45b5-d5ad-4866-a437-2038f6559801)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74864695c7-klzq2" podUID="9e2e45b5-d5ad-4866-a437-2038f6559801"
Dec 16 09:40:01.644835 containerd[1495]: time="2024-12-16T09:40:01.644709372Z" level=error msg="encountered an error cleaning up failed sandbox \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 09:40:01.644835 containerd[1495]: time="2024-12-16T09:40:01.644772262Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58b65f65c5-7tp4b,Uid:ccf88f28-2dde-42dd-bdda-69127b53bf8a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 09:40:01.645047 kubelet[2715]: E1216 09:40:01.644961 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 09:40:01.645334 kubelet[2715]: E1216 09:40:01.645261 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58b65f65c5-7tp4b"
Dec 16 09:40:01.645334 kubelet[2715]: E1216 09:40:01.645305 2715 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58b65f65c5-7tp4b"
Dec 16 09:40:01.645512 kubelet[2715]: E1216 09:40:01.645362 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-58b65f65c5-7tp4b_calico-apiserver(ccf88f28-2dde-42dd-bdda-69127b53bf8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-58b65f65c5-7tp4b_calico-apiserver(ccf88f28-2dde-42dd-bdda-69127b53bf8a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58b65f65c5-7tp4b" podUID="ccf88f28-2dde-42dd-bdda-69127b53bf8a"
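The three RunPodSandbox failures above (and the csi-node-driver one that follows) share a single root cause: Calico's CNI plugin will not set up or tear down pod networking until /var/lib/calico/nodename exists, and that file is written by the calico/node container, whose image is still being pulled (the PullImage "ghcr.io/flatcar/calico/node:v3.29.1" line above). A small Go sketch of that gate, assuming the plugin's readiness check amounts to a stat on the file (the error wording in the log is the plugin's own):

package main

import (
	"fmt"
	"os"
)

func main() {
	// calico/node writes this file at startup; until it exists, every
	// CNI ADD/DEL on the node fails with the error seen in the log.
	const nodenameFile = "/var/lib/calico/nodename"
	if _, err := os.Stat(nodenameFile); err != nil {
		// err already reads "stat /var/lib/calico/nodename: no such file
		// or directory"; the plugin appends its hint about calico/node.
		fmt.Printf("%v: check that the calico/node container is running and has mounted /var/lib/calico/\n", err)
		os.Exit(1)
	}
	name, err := os.ReadFile(nodenameFile)
	if err != nil {
		fmt.Println("read nodename:", err)
		os.Exit(1)
	}
	fmt.Printf("networking requests can proceed for node %s\n", name)
}

Until that file appears, the kubelet keeps cycling StopPodSandbox/RunPodSandbox for the affected pods, which is exactly the churn in the records that follow.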
Dec 16 09:40:02.154224 systemd[1]: Created slice kubepods-besteffort-podb184f5aa_f13a_4907_82b2_11f9a166985b.slice - libcontainer container kubepods-besteffort-podb184f5aa_f13a_4907_82b2_11f9a166985b.slice.
Dec 16 09:40:02.159693 containerd[1495]: time="2024-12-16T09:40:02.159591609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s6lhq,Uid:b184f5aa-f13a-4907-82b2-11f9a166985b,Namespace:calico-system,Attempt:0,}"
Dec 16 09:40:02.283153 containerd[1495]: time="2024-12-16T09:40:02.282607178Z" level=error msg="Failed to destroy network for sandbox \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 09:40:02.283153 containerd[1495]: time="2024-12-16T09:40:02.283032113Z" level=error msg="encountered an error cleaning up failed sandbox \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 09:40:02.283153 containerd[1495]: time="2024-12-16T09:40:02.283082309Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s6lhq,Uid:b184f5aa-f13a-4907-82b2-11f9a166985b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 09:40:02.284708 kubelet[2715]: I1216 09:40:02.282631 2715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436"
Dec 16 09:40:02.284708 kubelet[2715]: E1216 09:40:02.283542 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 09:40:02.286009 kubelet[2715]: E1216 09:40:02.285076 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s6lhq"
Dec 16 09:40:02.286009 kubelet[2715]: E1216 09:40:02.285105 2715 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s6lhq"
Dec 16 09:40:02.286009 kubelet[2715]: E1216 09:40:02.285278 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s6lhq_calico-system(b184f5aa-f13a-4907-82b2-11f9a166985b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s6lhq_calico-system(b184f5aa-f13a-4907-82b2-11f9a166985b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s6lhq" podUID="b184f5aa-f13a-4907-82b2-11f9a166985b"
Dec 16 09:40:02.287749 kubelet[2715]: I1216 09:40:02.287696 2715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8"
Dec 16 09:40:02.296636 containerd[1495]: time="2024-12-16T09:40:02.296192977Z" level=info msg="StopPodSandbox for \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\""
Dec 16 09:40:02.298288 containerd[1495]: time="2024-12-16T09:40:02.297644361Z" level=info msg="StopPodSandbox for \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\""
Dec 16 09:40:02.303526 containerd[1495]: time="2024-12-16T09:40:02.302277207Z" level=info msg="Ensure that sandbox e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8 in task-service has been cleanup successfully"
Dec 16 09:40:02.303526 containerd[1495]: time="2024-12-16T09:40:02.302319778Z" level=info msg="Ensure that sandbox 4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436 in task-service has been cleanup successfully"
Dec 16 09:40:02.305917 containerd[1495]: time="2024-12-16T09:40:02.305867984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kmszk,Uid:a579e978-3399-4d3d-9c17-650bcb6672c6,Namespace:kube-system,Attempt:0,}"
Dec 16 09:40:02.309379 kubelet[2715]: I1216 09:40:02.309323 2715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab"
Dec 16 09:40:02.311174 containerd[1495]: time="2024-12-16T09:40:02.311147751Z" level=info msg="StopPodSandbox for \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\""
Dec 16 09:40:02.312000 containerd[1495]: time="2024-12-16T09:40:02.311639013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vspcs,Uid:2b1e1fb6-74af-4597-8a80-d045a1736cc2,Namespace:kube-system,Attempt:0,}"
Dec 16 09:40:02.312718 containerd[1495]: time="2024-12-16T09:40:02.312642317Z" level=info msg="Ensure that sandbox 52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab in task-service has been cleanup successfully"
Dec 16 09:40:02.399300 containerd[1495]: time="2024-12-16T09:40:02.399230337Z" level=error msg="StopPodSandbox for \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\" failed" error="failed to destroy network for sandbox \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 09:40:02.399671 kubelet[2715]: E1216 09:40:02.399467 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436"
Dec 16 09:40:02.399671 kubelet[2715]: E1216 09:40:02.399542 2715 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436"}
Dec 16 09:40:02.399671 kubelet[2715]: E1216 09:40:02.399602 2715 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ccf88f28-2dde-42dd-bdda-69127b53bf8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 16 09:40:02.399671 kubelet[2715]: E1216 09:40:02.399626 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ccf88f28-2dde-42dd-bdda-69127b53bf8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58b65f65c5-7tp4b" podUID="ccf88f28-2dde-42dd-bdda-69127b53bf8a"
Dec 16 09:40:02.400703 kubelet[2715]: E1216 09:40:02.400280 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8"
Dec 16 09:40:02.400703 kubelet[2715]: E1216 09:40:02.400337 2715 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8"}
Dec 16 09:40:02.400703 kubelet[2715]: E1216 09:40:02.400359 2715 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e8eba98c-945c-4206-8249-239323f54417\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 16 09:40:02.400703 kubelet[2715]: E1216 09:40:02.400378 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e8eba98c-945c-4206-8249-239323f54417\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58b65f65c5-dvxqf" podUID="e8eba98c-945c-4206-8249-239323f54417"
Dec 16 09:40:02.400943 containerd[1495]: time="2024-12-16T09:40:02.400061822Z" level=error msg="StopPodSandbox for \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\" failed" error="failed to destroy network for sandbox \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 09:40:02.409824 containerd[1495]: time="2024-12-16T09:40:02.408480309Z" level=error msg="StopPodSandbox for \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\" failed" error="failed to destroy network for sandbox \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 09:40:02.409973 kubelet[2715]: E1216 09:40:02.408814 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab"
Dec 16 09:40:02.409973 kubelet[2715]: E1216 09:40:02.408865 2715 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab"}
Dec 16 09:40:02.409973 kubelet[2715]: E1216 09:40:02.408901 2715 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9e2e45b5-d5ad-4866-a437-2038f6559801\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 16 09:40:02.409973 kubelet[2715]: E1216 09:40:02.408925 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9e2e45b5-d5ad-4866-a437-2038f6559801\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74864695c7-klzq2" podUID="9e2e45b5-d5ad-4866-a437-2038f6559801"
Dec 16 09:40:02.461747 containerd[1495]: time="2024-12-16T09:40:02.461583173Z" level=error msg="Failed to destroy network for sandbox \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 09:40:02.462114 containerd[1495]: time="2024-12-16T09:40:02.462092418Z" level=error msg="encountered an error cleaning up failed sandbox \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\", marking sandbox state
as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:40:02.462214 containerd[1495]: time="2024-12-16T09:40:02.462191779Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vspcs,Uid:2b1e1fb6-74af-4597-8a80-d045a1736cc2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:40:02.462764 kubelet[2715]: E1216 09:40:02.462461 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:40:02.462764 kubelet[2715]: E1216 09:40:02.462534 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-vspcs" Dec 16 09:40:02.462764 kubelet[2715]: E1216 09:40:02.462552 2715 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-vspcs" Dec 16 09:40:02.462877 kubelet[2715]: E1216 09:40:02.462592 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-vspcs_kube-system(2b1e1fb6-74af-4597-8a80-d045a1736cc2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-vspcs_kube-system(2b1e1fb6-74af-4597-8a80-d045a1736cc2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-vspcs" podUID="2b1e1fb6-74af-4597-8a80-d045a1736cc2" Dec 16 09:40:02.467107 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3-shm.mount: Deactivated successfully. 
Dec 16 09:40:02.474753 containerd[1495]: time="2024-12-16T09:40:02.474658652Z" level=error msg="Failed to destroy network for sandbox \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:40:02.475182 containerd[1495]: time="2024-12-16T09:40:02.475114415Z" level=error msg="encountered an error cleaning up failed sandbox \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:40:02.475234 containerd[1495]: time="2024-12-16T09:40:02.475195951Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kmszk,Uid:a579e978-3399-4d3d-9c17-650bcb6672c6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:40:02.476404 kubelet[2715]: E1216 09:40:02.475453 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:40:02.476404 kubelet[2715]: E1216 09:40:02.475531 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-kmszk" Dec 16 09:40:02.476404 kubelet[2715]: E1216 09:40:02.475551 2715 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-kmszk" Dec 16 09:40:02.476533 kubelet[2715]: E1216 09:40:02.475595 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-kmszk_kube-system(a579e978-3399-4d3d-9c17-650bcb6672c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-kmszk_kube-system(a579e978-3399-4d3d-9c17-650bcb6672c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-kmszk" 
podUID="a579e978-3399-4d3d-9c17-650bcb6672c6" Dec 16 09:40:03.287007 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7-shm.mount: Deactivated successfully. Dec 16 09:40:03.313657 kubelet[2715]: I1216 09:40:03.312501 2715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" Dec 16 09:40:03.316183 containerd[1495]: time="2024-12-16T09:40:03.314619870Z" level=info msg="StopPodSandbox for \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\"" Dec 16 09:40:03.316183 containerd[1495]: time="2024-12-16T09:40:03.314983397Z" level=info msg="Ensure that sandbox f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664 in task-service has been cleanup successfully" Dec 16 09:40:03.318425 kubelet[2715]: I1216 09:40:03.318401 2715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" Dec 16 09:40:03.320325 containerd[1495]: time="2024-12-16T09:40:03.320294799Z" level=info msg="StopPodSandbox for \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\"" Dec 16 09:40:03.321464 containerd[1495]: time="2024-12-16T09:40:03.320961538Z" level=info msg="Ensure that sandbox dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3 in task-service has been cleanup successfully" Dec 16 09:40:03.326213 kubelet[2715]: I1216 09:40:03.326179 2715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" Dec 16 09:40:03.328429 containerd[1495]: time="2024-12-16T09:40:03.327787424Z" level=info msg="StopPodSandbox for \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\"" Dec 16 09:40:03.333505 containerd[1495]: time="2024-12-16T09:40:03.333435553Z" level=info msg="Ensure that sandbox 035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7 in task-service has been cleanup successfully" Dec 16 09:40:03.403038 containerd[1495]: time="2024-12-16T09:40:03.402976333Z" level=error msg="StopPodSandbox for \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\" failed" error="failed to destroy network for sandbox \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:40:03.406408 containerd[1495]: time="2024-12-16T09:40:03.406231489Z" level=error msg="StopPodSandbox for \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\" failed" error="failed to destroy network for sandbox \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:40:03.406505 kubelet[2715]: E1216 09:40:03.403226 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" Dec 16 09:40:03.406505 kubelet[2715]: E1216 09:40:03.403284 2715 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3"} Dec 16 09:40:03.406505 kubelet[2715]: E1216 09:40:03.403315 2715 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2b1e1fb6-74af-4597-8a80-d045a1736cc2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 16 09:40:03.406505 kubelet[2715]: E1216 09:40:03.403337 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2b1e1fb6-74af-4597-8a80-d045a1736cc2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-vspcs" podUID="2b1e1fb6-74af-4597-8a80-d045a1736cc2" Dec 16 09:40:03.410303 kubelet[2715]: E1216 09:40:03.406528 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" Dec 16 09:40:03.410303 kubelet[2715]: E1216 09:40:03.406587 2715 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7"} Dec 16 09:40:03.410303 kubelet[2715]: E1216 09:40:03.406617 2715 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a579e978-3399-4d3d-9c17-650bcb6672c6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 16 09:40:03.410303 kubelet[2715]: E1216 09:40:03.406639 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a579e978-3399-4d3d-9c17-650bcb6672c6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-kmszk" podUID="a579e978-3399-4d3d-9c17-650bcb6672c6" Dec 16 09:40:03.411712 containerd[1495]: time="2024-12-16T09:40:03.411644545Z" 
level=error msg="StopPodSandbox for \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\" failed" error="failed to destroy network for sandbox \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:40:03.411940 kubelet[2715]: E1216 09:40:03.411892 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" Dec 16 09:40:03.411990 kubelet[2715]: E1216 09:40:03.411957 2715 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664"} Dec 16 09:40:03.412029 kubelet[2715]: E1216 09:40:03.412012 2715 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b184f5aa-f13a-4907-82b2-11f9a166985b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 16 09:40:03.412087 kubelet[2715]: E1216 09:40:03.412047 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b184f5aa-f13a-4907-82b2-11f9a166985b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s6lhq" podUID="b184f5aa-f13a-4907-82b2-11f9a166985b" Dec 16 09:40:08.971305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2840736703.mount: Deactivated successfully. 
Dec 16 09:40:09.138063 containerd[1495]: time="2024-12-16T09:40:09.091788140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 16 09:40:09.140327 containerd[1495]: time="2024-12-16T09:40:09.139927288Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:40:09.147900 containerd[1495]: time="2024-12-16T09:40:09.147828107Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.847633323s" Dec 16 09:40:09.148069 containerd[1495]: time="2024-12-16T09:40:09.148054312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 16 09:40:09.161717 containerd[1495]: time="2024-12-16T09:40:09.161668545Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:40:09.162329 containerd[1495]: time="2024-12-16T09:40:09.162278628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:40:09.204201 containerd[1495]: time="2024-12-16T09:40:09.204132131Z" level=info msg="CreateContainer within sandbox \"fefada17c75ea827ebec98683116e7187ff5d359b06b8fc7d9d155263ca7cf81\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 09:40:09.278681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1560474514.mount: Deactivated successfully. Dec 16 09:40:09.305305 containerd[1495]: time="2024-12-16T09:40:09.303427977Z" level=info msg="CreateContainer within sandbox \"fefada17c75ea827ebec98683116e7187ff5d359b06b8fc7d9d155263ca7cf81\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"14ad7d8b3d1fa93e1ea0f6525f3017025bceb0e275f79b4661c99afe361b7389\"" Dec 16 09:40:09.306393 containerd[1495]: time="2024-12-16T09:40:09.305872617Z" level=info msg="StartContainer for \"14ad7d8b3d1fa93e1ea0f6525f3017025bceb0e275f79b4661c99afe361b7389\"" Dec 16 09:40:09.483990 systemd[1]: Started cri-containerd-14ad7d8b3d1fa93e1ea0f6525f3017025bceb0e275f79b4661c99afe361b7389.scope - libcontainer container 14ad7d8b3d1fa93e1ea0f6525f3017025bceb0e275f79b4661c99afe361b7389. Dec 16 09:40:09.542587 containerd[1495]: time="2024-12-16T09:40:09.542392766Z" level=info msg="StartContainer for \"14ad7d8b3d1fa93e1ea0f6525f3017025bceb0e275f79b4661c99afe361b7389\" returns successfully" Dec 16 09:40:09.629489 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 09:40:09.630842 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Dec 16 09:40:09.665800 systemd[1]: cri-containerd-14ad7d8b3d1fa93e1ea0f6525f3017025bceb0e275f79b4661c99afe361b7389.scope: Deactivated successfully.
Dec 16 09:40:09.767711 containerd[1495]: time="2024-12-16T09:40:09.767577793Z" level=info msg="shim disconnected" id=14ad7d8b3d1fa93e1ea0f6525f3017025bceb0e275f79b4661c99afe361b7389 namespace=k8s.io Dec 16 09:40:09.767711 containerd[1495]: time="2024-12-16T09:40:09.767666394Z" level=warning msg="cleaning up after shim disconnected" id=14ad7d8b3d1fa93e1ea0f6525f3017025bceb0e275f79b4661c99afe361b7389 namespace=k8s.io Dec 16 09:40:09.767711 containerd[1495]: time="2024-12-16T09:40:09.767678006Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 09:40:10.390438 kubelet[2715]: I1216 09:40:10.390371 2715 scope.go:117] "RemoveContainer" containerID="14ad7d8b3d1fa93e1ea0f6525f3017025bceb0e275f79b4661c99afe361b7389" Dec 16 09:40:10.397516 containerd[1495]: time="2024-12-16T09:40:10.397342400Z" level=info msg="CreateContainer within sandbox \"fefada17c75ea827ebec98683116e7187ff5d359b06b8fc7d9d155263ca7cf81\" for container &ContainerMetadata{Name:calico-node,Attempt:1,}" Dec 16 09:40:10.421969 containerd[1495]: time="2024-12-16T09:40:10.421904225Z" level=info msg="CreateContainer within sandbox \"fefada17c75ea827ebec98683116e7187ff5d359b06b8fc7d9d155263ca7cf81\" for &ContainerMetadata{Name:calico-node,Attempt:1,} returns container id \"f244dc0d6af1e9ca439f71f8690d79180bddda74144cf9b39e5ee7c19f60c2f2\"" Dec 16 09:40:10.422557 containerd[1495]: time="2024-12-16T09:40:10.422513516Z" level=info msg="StartContainer for \"f244dc0d6af1e9ca439f71f8690d79180bddda74144cf9b39e5ee7c19f60c2f2\"" Dec 16 09:40:10.471985 systemd[1]: Started cri-containerd-f244dc0d6af1e9ca439f71f8690d79180bddda74144cf9b39e5ee7c19f60c2f2.scope - libcontainer container f244dc0d6af1e9ca439f71f8690d79180bddda74144cf9b39e5ee7c19f60c2f2. Dec 16 09:40:10.511310 containerd[1495]: time="2024-12-16T09:40:10.511234151Z" level=info msg="StartContainer for \"f244dc0d6af1e9ca439f71f8690d79180bddda74144cf9b39e5ee7c19f60c2f2\" returns successfully" Dec 16 09:40:10.605225 systemd[1]: cri-containerd-f244dc0d6af1e9ca439f71f8690d79180bddda74144cf9b39e5ee7c19f60c2f2.scope: Deactivated successfully. Dec 16 09:40:10.632822 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f244dc0d6af1e9ca439f71f8690d79180bddda74144cf9b39e5ee7c19f60c2f2-rootfs.mount: Deactivated successfully. 
Dec 16 09:40:10.641109 containerd[1495]: time="2024-12-16T09:40:10.640898973Z" level=info msg="shim disconnected" id=f244dc0d6af1e9ca439f71f8690d79180bddda74144cf9b39e5ee7c19f60c2f2 namespace=k8s.io Dec 16 09:40:10.641109 containerd[1495]: time="2024-12-16T09:40:10.641001521Z" level=warning msg="cleaning up after shim disconnected" id=f244dc0d6af1e9ca439f71f8690d79180bddda74144cf9b39e5ee7c19f60c2f2 namespace=k8s.io Dec 16 09:40:10.641109 containerd[1495]: time="2024-12-16T09:40:10.641012101Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 09:40:11.392576 kubelet[2715]: I1216 09:40:11.392489 2715 scope.go:117] "RemoveContainer" containerID="14ad7d8b3d1fa93e1ea0f6525f3017025bceb0e275f79b4661c99afe361b7389" Dec 16 09:40:11.393471 kubelet[2715]: I1216 09:40:11.392980 2715 scope.go:117] "RemoveContainer" containerID="f244dc0d6af1e9ca439f71f8690d79180bddda74144cf9b39e5ee7c19f60c2f2" Dec 16 09:40:11.402902 kubelet[2715]: E1216 09:40:11.402012 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-w9ksm_calico-system(0efe44f3-b6a3-424b-bdbe-df0a131921ef)\"" pod="calico-system/calico-node-w9ksm" podUID="0efe44f3-b6a3-424b-bdbe-df0a131921ef" Dec 16 09:40:11.430896 containerd[1495]: time="2024-12-16T09:40:11.430811460Z" level=info msg="RemoveContainer for \"14ad7d8b3d1fa93e1ea0f6525f3017025bceb0e275f79b4661c99afe361b7389\"" Dec 16 09:40:11.441705 containerd[1495]: time="2024-12-16T09:40:11.441618017Z" level=info msg="RemoveContainer for \"14ad7d8b3d1fa93e1ea0f6525f3017025bceb0e275f79b4661c99afe361b7389\" returns successfully" Dec 16 09:40:12.384930 kubelet[2715]: I1216 09:40:12.384873 2715 scope.go:117] "RemoveContainer" containerID="f244dc0d6af1e9ca439f71f8690d79180bddda74144cf9b39e5ee7c19f60c2f2" Dec 16 09:40:12.385371 kubelet[2715]: E1216 09:40:12.385115 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-w9ksm_calico-system(0efe44f3-b6a3-424b-bdbe-df0a131921ef)\"" pod="calico-system/calico-node-w9ksm" podUID="0efe44f3-b6a3-424b-bdbe-df0a131921ef" Dec 16 09:40:14.143064 containerd[1495]: time="2024-12-16T09:40:14.142934017Z" level=info msg="StopPodSandbox for \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\"" Dec 16 09:40:14.181239 containerd[1495]: time="2024-12-16T09:40:14.181142975Z" level=error msg="StopPodSandbox for \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\" failed" error="failed to destroy network for sandbox \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:40:14.181544 kubelet[2715]: E1216 09:40:14.181387 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" Dec 16 09:40:14.181544 kubelet[2715]: E1216 09:40:14.181442 2715 
kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab"} Dec 16 09:40:14.181544 kubelet[2715]: E1216 09:40:14.181474 2715 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9e2e45b5-d5ad-4866-a437-2038f6559801\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 16 09:40:14.181544 kubelet[2715]: E1216 09:40:14.181500 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9e2e45b5-d5ad-4866-a437-2038f6559801\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74864695c7-klzq2" podUID="9e2e45b5-d5ad-4866-a437-2038f6559801" Dec 16 09:40:15.142302 containerd[1495]: time="2024-12-16T09:40:15.142013814Z" level=info msg="StopPodSandbox for \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\"" Dec 16 09:40:15.201536 containerd[1495]: time="2024-12-16T09:40:15.201366404Z" level=error msg="StopPodSandbox for \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\" failed" error="failed to destroy network for sandbox \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:40:15.202463 kubelet[2715]: E1216 09:40:15.201894 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" Dec 16 09:40:15.202463 kubelet[2715]: E1216 09:40:15.201982 2715 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664"} Dec 16 09:40:15.202463 kubelet[2715]: E1216 09:40:15.202056 2715 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b184f5aa-f13a-4907-82b2-11f9a166985b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 16 09:40:15.202463 kubelet[2715]: E1216 09:40:15.202112 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"b184f5aa-f13a-4907-82b2-11f9a166985b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s6lhq" podUID="b184f5aa-f13a-4907-82b2-11f9a166985b" Dec 16 09:40:16.151714 containerd[1495]: time="2024-12-16T09:40:16.146255168Z" level=info msg="StopPodSandbox for \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\"" Dec 16 09:40:16.196643 containerd[1495]: time="2024-12-16T09:40:16.196597690Z" level=error msg="StopPodSandbox for \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\" failed" error="failed to destroy network for sandbox \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:40:16.197111 kubelet[2715]: E1216 09:40:16.197079 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" Dec 16 09:40:16.197268 kubelet[2715]: E1216 09:40:16.197247 2715 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7"} Dec 16 09:40:16.197343 kubelet[2715]: E1216 09:40:16.197330 2715 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a579e978-3399-4d3d-9c17-650bcb6672c6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 16 09:40:16.197465 kubelet[2715]: E1216 09:40:16.197447 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a579e978-3399-4d3d-9c17-650bcb6672c6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-kmszk" podUID="a579e978-3399-4d3d-9c17-650bcb6672c6" Dec 16 09:40:17.143634 containerd[1495]: time="2024-12-16T09:40:17.142788214Z" level=info msg="StopPodSandbox for \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\"" Dec 16 09:40:17.143634 containerd[1495]: time="2024-12-16T09:40:17.142882697Z" level=info msg="StopPodSandbox for \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\"" Dec 16 09:40:17.146685 containerd[1495]: 
time="2024-12-16T09:40:17.146586380Z" level=info msg="StopPodSandbox for \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\"" Dec 16 09:40:17.222775 containerd[1495]: time="2024-12-16T09:40:17.222542832Z" level=error msg="StopPodSandbox for \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\" failed" error="failed to destroy network for sandbox \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:40:17.223535 kubelet[2715]: E1216 09:40:17.223012 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" Dec 16 09:40:17.223535 kubelet[2715]: E1216 09:40:17.223122 2715 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3"} Dec 16 09:40:17.223535 kubelet[2715]: E1216 09:40:17.223182 2715 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2b1e1fb6-74af-4597-8a80-d045a1736cc2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 16 09:40:17.223535 kubelet[2715]: E1216 09:40:17.223212 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2b1e1fb6-74af-4597-8a80-d045a1736cc2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-vspcs" podUID="2b1e1fb6-74af-4597-8a80-d045a1736cc2" Dec 16 09:40:17.236891 containerd[1495]: time="2024-12-16T09:40:17.236304615Z" level=error msg="StopPodSandbox for \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\" failed" error="failed to destroy network for sandbox \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:40:17.237119 kubelet[2715]: E1216 09:40:17.236551 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" Dec 16 09:40:17.237119 kubelet[2715]: E1216 09:40:17.236622 2715 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436"} Dec 16 09:40:17.237119 kubelet[2715]: E1216 09:40:17.236782 2715 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ccf88f28-2dde-42dd-bdda-69127b53bf8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 16 09:40:17.237119 kubelet[2715]: E1216 09:40:17.236846 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ccf88f28-2dde-42dd-bdda-69127b53bf8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58b65f65c5-7tp4b" podUID="ccf88f28-2dde-42dd-bdda-69127b53bf8a" Dec 16 09:40:17.243517 containerd[1495]: time="2024-12-16T09:40:17.243443858Z" level=error msg="StopPodSandbox for \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\" failed" error="failed to destroy network for sandbox \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:40:17.243854 kubelet[2715]: E1216 09:40:17.243707 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" Dec 16 09:40:17.243944 kubelet[2715]: E1216 09:40:17.243875 2715 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8"} Dec 16 09:40:17.243944 kubelet[2715]: E1216 09:40:17.243916 2715 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e8eba98c-945c-4206-8249-239323f54417\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 16 09:40:17.244067 kubelet[2715]: E1216 09:40:17.243945 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e8eba98c-945c-4206-8249-239323f54417\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58b65f65c5-dvxqf" podUID="e8eba98c-945c-4206-8249-239323f54417" Dec 16 09:40:19.503172 kubelet[2715]: I1216 09:40:19.503103 2715 scope.go:117] "RemoveContainer" containerID="f244dc0d6af1e9ca439f71f8690d79180bddda74144cf9b39e5ee7c19f60c2f2" Dec 16 09:40:19.504300 kubelet[2715]: E1216 09:40:19.503276 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-w9ksm_calico-system(0efe44f3-b6a3-424b-bdbe-df0a131921ef)\"" pod="calico-system/calico-node-w9ksm" podUID="0efe44f3-b6a3-424b-bdbe-df0a131921ef" Dec 16 09:40:27.142424 containerd[1495]: time="2024-12-16T09:40:27.142354496Z" level=info msg="StopPodSandbox for \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\"" Dec 16 09:40:27.191671 containerd[1495]: time="2024-12-16T09:40:27.191429045Z" level=error msg="StopPodSandbox for \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\" failed" error="failed to destroy network for sandbox \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:40:27.192042 kubelet[2715]: E1216 09:40:27.191957 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" Dec 16 09:40:27.192540 kubelet[2715]: E1216 09:40:27.192051 2715 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab"} Dec 16 09:40:27.192540 kubelet[2715]: E1216 09:40:27.192134 2715 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9e2e45b5-d5ad-4866-a437-2038f6559801\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 16 09:40:27.192540 kubelet[2715]: E1216 09:40:27.192183 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9e2e45b5-d5ad-4866-a437-2038f6559801\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74864695c7-klzq2" podUID="9e2e45b5-d5ad-4866-a437-2038f6559801" Dec 16 09:40:28.144809 containerd[1495]: time="2024-12-16T09:40:28.144713608Z" level=info msg="StopPodSandbox for \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\"" Dec 16 09:40:28.147398 containerd[1495]: time="2024-12-16T09:40:28.146843290Z" level=info msg="StopPodSandbox for \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\"" Dec 16 09:40:28.211300 kubelet[2715]: I1216 09:40:28.211078 2715 scope.go:117] "RemoveContainer" containerID="f244dc0d6af1e9ca439f71f8690d79180bddda74144cf9b39e5ee7c19f60c2f2" Dec 16 09:40:28.219404 containerd[1495]: time="2024-12-16T09:40:28.219350773Z" level=info msg="CreateContainer within sandbox \"fefada17c75ea827ebec98683116e7187ff5d359b06b8fc7d9d155263ca7cf81\" for container &ContainerMetadata{Name:calico-node,Attempt:2,}" Dec 16 09:40:28.223018 containerd[1495]: time="2024-12-16T09:40:28.222922137Z" level=error msg="StopPodSandbox for \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\" failed" error="failed to destroy network for sandbox \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:40:28.223559 kubelet[2715]: E1216 09:40:28.223494 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" Dec 16 09:40:28.223559 kubelet[2715]: E1216 09:40:28.223549 2715 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664"} Dec 16 09:40:28.223710 kubelet[2715]: E1216 09:40:28.223583 2715 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b184f5aa-f13a-4907-82b2-11f9a166985b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 16 09:40:28.223710 kubelet[2715]: E1216 09:40:28.223607 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b184f5aa-f13a-4907-82b2-11f9a166985b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s6lhq" podUID="b184f5aa-f13a-4907-82b2-11f9a166985b" Dec 16 09:40:28.229048 containerd[1495]: time="2024-12-16T09:40:28.228853519Z" level=error msg="StopPodSandbox for 
\"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\" failed" error="failed to destroy network for sandbox \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:40:28.229501 kubelet[2715]: E1216 09:40:28.229326 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" Dec 16 09:40:28.229501 kubelet[2715]: E1216 09:40:28.229401 2715 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7"} Dec 16 09:40:28.229501 kubelet[2715]: E1216 09:40:28.229462 2715 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a579e978-3399-4d3d-9c17-650bcb6672c6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 16 09:40:28.229889 kubelet[2715]: E1216 09:40:28.229513 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a579e978-3399-4d3d-9c17-650bcb6672c6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-kmszk" podUID="a579e978-3399-4d3d-9c17-650bcb6672c6" Dec 16 09:40:28.242544 containerd[1495]: time="2024-12-16T09:40:28.242231757Z" level=info msg="CreateContainer within sandbox \"fefada17c75ea827ebec98683116e7187ff5d359b06b8fc7d9d155263ca7cf81\" for &ContainerMetadata{Name:calico-node,Attempt:2,} returns container id \"504ace4e38a02a7e2e3af221072af8e57468cbc2529f1831250736e86c0aeb76\"" Dec 16 09:40:28.244692 containerd[1495]: time="2024-12-16T09:40:28.244655215Z" level=info msg="StartContainer for \"504ace4e38a02a7e2e3af221072af8e57468cbc2529f1831250736e86c0aeb76\"" Dec 16 09:40:28.247059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2078132411.mount: Deactivated successfully. Dec 16 09:40:28.292012 systemd[1]: run-containerd-runc-k8s.io-504ace4e38a02a7e2e3af221072af8e57468cbc2529f1831250736e86c0aeb76-runc.MIFk96.mount: Deactivated successfully. Dec 16 09:40:28.303251 systemd[1]: Started cri-containerd-504ace4e38a02a7e2e3af221072af8e57468cbc2529f1831250736e86c0aeb76.scope - libcontainer container 504ace4e38a02a7e2e3af221072af8e57468cbc2529f1831250736e86c0aeb76. 
Dec 16 09:40:28.348129 containerd[1495]: time="2024-12-16T09:40:28.348054809Z" level=info msg="StartContainer for \"504ace4e38a02a7e2e3af221072af8e57468cbc2529f1831250736e86c0aeb76\" returns successfully" Dec 16 09:40:28.453142 systemd[1]: cri-containerd-504ace4e38a02a7e2e3af221072af8e57468cbc2529f1831250736e86c0aeb76.scope: Deactivated successfully. Dec 16 09:40:28.482306 containerd[1495]: time="2024-12-16T09:40:28.482122850Z" level=error msg="ExecSync for \"504ace4e38a02a7e2e3af221072af8e57468cbc2529f1831250736e86c0aeb76\" failed" error="failed to exec in container: failed to create exec \"7ed41e5a3c4d5767719c2ff38d6f5c22efef4ee09de074ed0e1bf6cd72ad28a9\": cannot exec in a stopped state: unknown" Dec 16 09:40:28.482609 kubelet[2715]: E1216 09:40:28.482389 2715 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"7ed41e5a3c4d5767719c2ff38d6f5c22efef4ee09de074ed0e1bf6cd72ad28a9\": cannot exec in a stopped state: unknown" containerID="504ace4e38a02a7e2e3af221072af8e57468cbc2529f1831250736e86c0aeb76" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Dec 16 09:40:28.503597 containerd[1495]: time="2024-12-16T09:40:28.503351364Z" level=info msg="shim disconnected" id=504ace4e38a02a7e2e3af221072af8e57468cbc2529f1831250736e86c0aeb76 namespace=k8s.io Dec 16 09:40:28.503597 containerd[1495]: time="2024-12-16T09:40:28.503431349Z" level=warning msg="cleaning up after shim disconnected" id=504ace4e38a02a7e2e3af221072af8e57468cbc2529f1831250736e86c0aeb76 namespace=k8s.io Dec 16 09:40:28.503597 containerd[1495]: time="2024-12-16T09:40:28.503440546Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 09:40:28.526046 containerd[1495]: time="2024-12-16T09:40:28.511228351Z" level=error msg="ExecSync for \"504ace4e38a02a7e2e3af221072af8e57468cbc2529f1831250736e86c0aeb76\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: not found" Dec 16 09:40:28.526662 containerd[1495]: time="2024-12-16T09:40:28.520763049Z" level=warning msg="cleanup warnings time=\"2024-12-16T09:40:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 16 09:40:28.526710 kubelet[2715]: E1216 09:40:28.526300 2715 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: not found" containerID="504ace4e38a02a7e2e3af221072af8e57468cbc2529f1831250736e86c0aeb76" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Dec 16 09:40:28.527826 containerd[1495]: time="2024-12-16T09:40:28.527760348Z" level=error msg="ExecSync for \"504ace4e38a02a7e2e3af221072af8e57468cbc2529f1831250736e86c0aeb76\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 504ace4e38a02a7e2e3af221072af8e57468cbc2529f1831250736e86c0aeb76 not found: not found" Dec 16 09:40:28.527963 kubelet[2715]: E1216 09:40:28.527896 2715 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 504ace4e38a02a7e2e3af221072af8e57468cbc2529f1831250736e86c0aeb76 not found: not found" containerID="504ace4e38a02a7e2e3af221072af8e57468cbc2529f1831250736e86c0aeb76" 
cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Dec 16 09:40:29.238248 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-504ace4e38a02a7e2e3af221072af8e57468cbc2529f1831250736e86c0aeb76-rootfs.mount: Deactivated successfully. Dec 16 09:40:29.426630 kubelet[2715]: I1216 09:40:29.426516 2715 scope.go:117] "RemoveContainer" containerID="f244dc0d6af1e9ca439f71f8690d79180bddda74144cf9b39e5ee7c19f60c2f2" Dec 16 09:40:29.426630 kubelet[2715]: I1216 09:40:29.427240 2715 scope.go:117] "RemoveContainer" containerID="504ace4e38a02a7e2e3af221072af8e57468cbc2529f1831250736e86c0aeb76" Dec 16 09:40:29.432604 containerd[1495]: time="2024-12-16T09:40:29.431602364Z" level=info msg="RemoveContainer for \"f244dc0d6af1e9ca439f71f8690d79180bddda74144cf9b39e5ee7c19f60c2f2\"" Dec 16 09:40:29.433541 kubelet[2715]: E1216 09:40:29.427451 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-w9ksm_calico-system(0efe44f3-b6a3-424b-bdbe-df0a131921ef)\"" pod="calico-system/calico-node-w9ksm" podUID="0efe44f3-b6a3-424b-bdbe-df0a131921ef" Dec 16 09:40:29.439631 containerd[1495]: time="2024-12-16T09:40:29.439451791Z" level=info msg="RemoveContainer for \"f244dc0d6af1e9ca439f71f8690d79180bddda74144cf9b39e5ee7c19f60c2f2\" returns successfully" Dec 16 09:40:30.143975 containerd[1495]: time="2024-12-16T09:40:30.143814972Z" level=info msg="StopPodSandbox for \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\"" Dec 16 09:40:30.206036 containerd[1495]: time="2024-12-16T09:40:30.205931652Z" level=error msg="StopPodSandbox for \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\" failed" error="failed to destroy network for sandbox \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:40:30.206400 kubelet[2715]: E1216 09:40:30.206324 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" Dec 16 09:40:30.206498 kubelet[2715]: E1216 09:40:30.206418 2715 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8"} Dec 16 09:40:30.206498 kubelet[2715]: E1216 09:40:30.206472 2715 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e8eba98c-945c-4206-8249-239323f54417\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 16 09:40:30.206683 kubelet[2715]: E1216 09:40:30.206513 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"e8eba98c-945c-4206-8249-239323f54417\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58b65f65c5-dvxqf" podUID="e8eba98c-945c-4206-8249-239323f54417" Dec 16 09:40:30.432478 kubelet[2715]: I1216 09:40:30.431940 2715 scope.go:117] "RemoveContainer" containerID="504ace4e38a02a7e2e3af221072af8e57468cbc2529f1831250736e86c0aeb76" Dec 16 09:40:30.432478 kubelet[2715]: E1216 09:40:30.432096 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-w9ksm_calico-system(0efe44f3-b6a3-424b-bdbe-df0a131921ef)\"" pod="calico-system/calico-node-w9ksm" podUID="0efe44f3-b6a3-424b-bdbe-df0a131921ef" Dec 16 09:40:31.143101 containerd[1495]: time="2024-12-16T09:40:31.141949341Z" level=info msg="StopPodSandbox for \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\"" Dec 16 09:40:31.143101 containerd[1495]: time="2024-12-16T09:40:31.142105493Z" level=info msg="StopPodSandbox for \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\"" Dec 16 09:40:31.187901 containerd[1495]: time="2024-12-16T09:40:31.187800863Z" level=error msg="StopPodSandbox for \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\" failed" error="failed to destroy network for sandbox \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:40:31.188256 kubelet[2715]: E1216 09:40:31.188169 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" Dec 16 09:40:31.188320 kubelet[2715]: E1216 09:40:31.188275 2715 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3"} Dec 16 09:40:31.188349 kubelet[2715]: E1216 09:40:31.188332 2715 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2b1e1fb6-74af-4597-8a80-d045a1736cc2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 16 09:40:31.188418 kubelet[2715]: E1216 09:40:31.188375 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2b1e1fb6-74af-4597-8a80-d045a1736cc2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-vspcs" podUID="2b1e1fb6-74af-4597-8a80-d045a1736cc2" Dec 16 09:40:31.215690 containerd[1495]: time="2024-12-16T09:40:31.215593915Z" level=error msg="StopPodSandbox for \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\" failed" error="failed to destroy network for sandbox \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:40:31.216525 kubelet[2715]: E1216 09:40:31.215870 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" Dec 16 09:40:31.216525 kubelet[2715]: E1216 09:40:31.215932 2715 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436"} Dec 16 09:40:31.216525 kubelet[2715]: E1216 09:40:31.215982 2715 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ccf88f28-2dde-42dd-bdda-69127b53bf8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 16 09:40:31.216525 kubelet[2715]: E1216 09:40:31.216022 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ccf88f28-2dde-42dd-bdda-69127b53bf8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58b65f65c5-7tp4b" podUID="ccf88f28-2dde-42dd-bdda-69127b53bf8a" Dec 16 09:40:41.141756 containerd[1495]: time="2024-12-16T09:40:41.141342099Z" level=info msg="StopPodSandbox for \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\"" Dec 16 09:40:41.198772 containerd[1495]: time="2024-12-16T09:40:41.198598679Z" level=error msg="StopPodSandbox for \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\" failed" error="failed to destroy network for sandbox \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:40:41.199223 kubelet[2715]: E1216 09:40:41.199039 2715 
log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" Dec 16 09:40:41.199223 kubelet[2715]: E1216 09:40:41.199124 2715 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664"} Dec 16 09:40:41.199223 kubelet[2715]: E1216 09:40:41.199178 2715 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b184f5aa-f13a-4907-82b2-11f9a166985b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 16 09:40:41.199223 kubelet[2715]: E1216 09:40:41.199219 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b184f5aa-f13a-4907-82b2-11f9a166985b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s6lhq" podUID="b184f5aa-f13a-4907-82b2-11f9a166985b" Dec 16 09:40:42.144445 containerd[1495]: time="2024-12-16T09:40:42.143889104Z" level=info msg="StopPodSandbox for \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\"" Dec 16 09:40:42.200624 containerd[1495]: time="2024-12-16T09:40:42.200538681Z" level=error msg="StopPodSandbox for \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\" failed" error="failed to destroy network for sandbox \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:40:42.200965 kubelet[2715]: E1216 09:40:42.200837 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" Dec 16 09:40:42.200965 kubelet[2715]: E1216 09:40:42.200904 2715 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab"} Dec 16 09:40:42.200965 kubelet[2715]: E1216 09:40:42.200949 2715 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9e2e45b5-d5ad-4866-a437-2038f6559801\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 16 09:40:42.202042 kubelet[2715]: E1216 09:40:42.200992 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9e2e45b5-d5ad-4866-a437-2038f6559801\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74864695c7-klzq2" podUID="9e2e45b5-d5ad-4866-a437-2038f6559801" Dec 16 09:40:43.142156 containerd[1495]: time="2024-12-16T09:40:43.141519478Z" level=info msg="StopPodSandbox for \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\"" Dec 16 09:40:43.142412 containerd[1495]: time="2024-12-16T09:40:43.141714245Z" level=info msg="StopPodSandbox for \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\"" Dec 16 09:40:43.179051 containerd[1495]: time="2024-12-16T09:40:43.178936975Z" level=error msg="StopPodSandbox for \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\" failed" error="failed to destroy network for sandbox \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:40:43.179570 kubelet[2715]: E1216 09:40:43.179239 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" Dec 16 09:40:43.179570 kubelet[2715]: E1216 09:40:43.179304 2715 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3"} Dec 16 09:40:43.179570 kubelet[2715]: E1216 09:40:43.179336 2715 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2b1e1fb6-74af-4597-8a80-d045a1736cc2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 16 09:40:43.179570 kubelet[2715]: E1216 09:40:43.179359 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2b1e1fb6-74af-4597-8a80-d045a1736cc2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-vspcs" podUID="2b1e1fb6-74af-4597-8a80-d045a1736cc2" Dec 16 09:40:43.182123 containerd[1495]: time="2024-12-16T09:40:43.182083612Z" level=error msg="StopPodSandbox for \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\" failed" error="failed to destroy network for sandbox \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:40:43.182312 kubelet[2715]: E1216 09:40:43.182260 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" Dec 16 09:40:43.182355 kubelet[2715]: E1216 09:40:43.182299 2715 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7"} Dec 16 09:40:43.182399 kubelet[2715]: E1216 09:40:43.182347 2715 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a579e978-3399-4d3d-9c17-650bcb6672c6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 16 09:40:43.182399 kubelet[2715]: E1216 09:40:43.182374 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a579e978-3399-4d3d-9c17-650bcb6672c6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-kmszk" podUID="a579e978-3399-4d3d-9c17-650bcb6672c6" Dec 16 09:40:45.142724 kubelet[2715]: I1216 09:40:45.142497 2715 scope.go:117] "RemoveContainer" containerID="504ace4e38a02a7e2e3af221072af8e57468cbc2529f1831250736e86c0aeb76" Dec 16 09:40:45.144597 kubelet[2715]: E1216 09:40:45.143865 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-w9ksm_calico-system(0efe44f3-b6a3-424b-bdbe-df0a131921ef)\"" pod="calico-system/calico-node-w9ksm" podUID="0efe44f3-b6a3-424b-bdbe-df0a131921ef" Dec 16 09:40:45.144680 containerd[1495]: time="2024-12-16T09:40:45.143023507Z" level=info msg="StopPodSandbox for \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\"" Dec 16 09:40:45.144680 containerd[1495]: time="2024-12-16T09:40:45.144192099Z" level=info 
msg="StopPodSandbox for \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\"" Dec 16 09:40:45.212440 containerd[1495]: time="2024-12-16T09:40:45.212332759Z" level=error msg="StopPodSandbox for \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\" failed" error="failed to destroy network for sandbox \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:40:45.213480 kubelet[2715]: E1216 09:40:45.213275 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" Dec 16 09:40:45.213480 kubelet[2715]: E1216 09:40:45.213353 2715 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436"} Dec 16 09:40:45.213480 kubelet[2715]: E1216 09:40:45.213399 2715 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ccf88f28-2dde-42dd-bdda-69127b53bf8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 16 09:40:45.213480 kubelet[2715]: E1216 09:40:45.213432 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ccf88f28-2dde-42dd-bdda-69127b53bf8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58b65f65c5-7tp4b" podUID="ccf88f28-2dde-42dd-bdda-69127b53bf8a" Dec 16 09:40:45.224663 containerd[1495]: time="2024-12-16T09:40:45.224570124Z" level=error msg="StopPodSandbox for \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\" failed" error="failed to destroy network for sandbox \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:40:45.225293 kubelet[2715]: E1216 09:40:45.224933 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" Dec 16 09:40:45.225293 kubelet[2715]: E1216 09:40:45.224998 2715 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8"} Dec 16 09:40:45.225293 kubelet[2715]: E1216 09:40:45.225041 2715 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e8eba98c-945c-4206-8249-239323f54417\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 16 09:40:45.225293 kubelet[2715]: E1216 09:40:45.225083 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e8eba98c-945c-4206-8249-239323f54417\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58b65f65c5-dvxqf" podUID="e8eba98c-945c-4206-8249-239323f54417" Dec 16 09:40:50.717329 containerd[1495]: time="2024-12-16T09:40:50.717269400Z" level=info msg="StopContainer for \"28322d65c93a493c820c0886dadb21ffa45007ca8a7c988faa5097e9012e3ccc\" with timeout 300 (s)" Dec 16 09:40:50.718512 containerd[1495]: time="2024-12-16T09:40:50.718456888Z" level=info msg="Stop container \"28322d65c93a493c820c0886dadb21ffa45007ca8a7c988faa5097e9012e3ccc\" with signal terminated" Dec 16 09:40:50.770300 systemd[1]: cri-containerd-28322d65c93a493c820c0886dadb21ffa45007ca8a7c988faa5097e9012e3ccc.scope: Deactivated successfully. Dec 16 09:40:50.814694 containerd[1495]: time="2024-12-16T09:40:50.813438506Z" level=info msg="StopPodSandbox for \"fefada17c75ea827ebec98683116e7187ff5d359b06b8fc7d9d155263ca7cf81\"" Dec 16 09:40:50.814694 containerd[1495]: time="2024-12-16T09:40:50.813494575Z" level=info msg="Container to stop \"fb7d8f7ec6c442f00c950fb0ce295aed65f2e9bd249afd14da4c802c2f1033e1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 09:40:50.814694 containerd[1495]: time="2024-12-16T09:40:50.813510485Z" level=info msg="Container to stop \"504ace4e38a02a7e2e3af221072af8e57468cbc2529f1831250736e86c0aeb76\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 09:40:50.814694 containerd[1495]: time="2024-12-16T09:40:50.813523230Z" level=info msg="Container to stop \"d39bbfd00968949a3ee1f018c3ba9642497208a1766b6ec41bda751bbc8726ef\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 09:40:50.822505 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fefada17c75ea827ebec98683116e7187ff5d359b06b8fc7d9d155263ca7cf81-shm.mount: Deactivated successfully. Dec 16 09:40:50.831626 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28322d65c93a493c820c0886dadb21ffa45007ca8a7c988faa5097e9012e3ccc-rootfs.mount: Deactivated successfully. 
Dec 16 09:40:50.841206 systemd[1]: cri-containerd-fefada17c75ea827ebec98683116e7187ff5d359b06b8fc7d9d155263ca7cf81.scope: Deactivated successfully. Dec 16 09:40:50.860406 containerd[1495]: time="2024-12-16T09:40:50.860254470Z" level=info msg="shim disconnected" id=28322d65c93a493c820c0886dadb21ffa45007ca8a7c988faa5097e9012e3ccc namespace=k8s.io Dec 16 09:40:50.861290 containerd[1495]: time="2024-12-16T09:40:50.861067784Z" level=warning msg="cleaning up after shim disconnected" id=28322d65c93a493c820c0886dadb21ffa45007ca8a7c988faa5097e9012e3ccc namespace=k8s.io Dec 16 09:40:50.861290 containerd[1495]: time="2024-12-16T09:40:50.861084456Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 09:40:50.887518 containerd[1495]: time="2024-12-16T09:40:50.887467311Z" level=warning msg="cleanup warnings time=\"2024-12-16T09:40:50Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 16 09:40:50.889507 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fefada17c75ea827ebec98683116e7187ff5d359b06b8fc7d9d155263ca7cf81-rootfs.mount: Deactivated successfully. Dec 16 09:40:50.893005 containerd[1495]: time="2024-12-16T09:40:50.892937037Z" level=info msg="shim disconnected" id=fefada17c75ea827ebec98683116e7187ff5d359b06b8fc7d9d155263ca7cf81 namespace=k8s.io Dec 16 09:40:50.893005 containerd[1495]: time="2024-12-16T09:40:50.893000571Z" level=warning msg="cleaning up after shim disconnected" id=fefada17c75ea827ebec98683116e7187ff5d359b06b8fc7d9d155263ca7cf81 namespace=k8s.io Dec 16 09:40:50.893187 containerd[1495]: time="2024-12-16T09:40:50.893011922Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 09:40:50.894261 containerd[1495]: time="2024-12-16T09:40:50.894105799Z" level=info msg="StopContainer for \"28322d65c93a493c820c0886dadb21ffa45007ca8a7c988faa5097e9012e3ccc\" returns successfully" Dec 16 09:40:50.894884 containerd[1495]: time="2024-12-16T09:40:50.894702324Z" level=info msg="StopPodSandbox for \"4daea6f0886b720c97cf2fc782f7331ac86b8609f203dad88da7cf848b1c46bf\"" Dec 16 09:40:50.894884 containerd[1495]: time="2024-12-16T09:40:50.894748123Z" level=info msg="Container to stop \"28322d65c93a493c820c0886dadb21ffa45007ca8a7c988faa5097e9012e3ccc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 09:40:50.899380 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4daea6f0886b720c97cf2fc782f7331ac86b8609f203dad88da7cf848b1c46bf-shm.mount: Deactivated successfully. Dec 16 09:40:50.912334 systemd[1]: cri-containerd-4daea6f0886b720c97cf2fc782f7331ac86b8609f203dad88da7cf848b1c46bf.scope: Deactivated successfully. Dec 16 09:40:50.935853 containerd[1495]: time="2024-12-16T09:40:50.935566948Z" level=info msg="TearDown network for sandbox \"fefada17c75ea827ebec98683116e7187ff5d359b06b8fc7d9d155263ca7cf81\" successfully" Dec 16 09:40:50.935853 containerd[1495]: time="2024-12-16T09:40:50.935609070Z" level=info msg="StopPodSandbox for \"fefada17c75ea827ebec98683116e7187ff5d359b06b8fc7d9d155263ca7cf81\" returns successfully" Dec 16 09:40:50.965088 containerd[1495]: time="2024-12-16T09:40:50.962752617Z" level=info msg="StopPodSandbox for \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\"" Dec 16 09:40:50.980099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4daea6f0886b720c97cf2fc782f7331ac86b8609f203dad88da7cf848b1c46bf-rootfs.mount: Deactivated successfully. 
Dec 16 09:40:50.993951 containerd[1495]: time="2024-12-16T09:40:50.993865286Z" level=info msg="shim disconnected" id=4daea6f0886b720c97cf2fc782f7331ac86b8609f203dad88da7cf848b1c46bf namespace=k8s.io Dec 16 09:40:50.993951 containerd[1495]: time="2024-12-16T09:40:50.993937817Z" level=warning msg="cleaning up after shim disconnected" id=4daea6f0886b720c97cf2fc782f7331ac86b8609f203dad88da7cf848b1c46bf namespace=k8s.io Dec 16 09:40:50.993951 containerd[1495]: time="2024-12-16T09:40:50.993950470Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 09:40:51.033758 containerd[1495]: time="2024-12-16T09:40:51.033197750Z" level=warning msg="cleanup warnings time=\"2024-12-16T09:40:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 16 09:40:51.038214 kubelet[2715]: I1216 09:40:51.037787 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-lib-modules\") pod \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\" (UID: \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\") " Dec 16 09:40:51.038214 kubelet[2715]: I1216 09:40:51.037856 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-cni-bin-dir\") pod \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\" (UID: \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\") " Dec 16 09:40:51.040098 kubelet[2715]: I1216 09:40:51.038472 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0efe44f3-b6a3-424b-bdbe-df0a131921ef-node-certs\") pod \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\" (UID: \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\") " Dec 16 09:40:51.040098 kubelet[2715]: I1216 09:40:51.038505 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-cni-net-dir\") pod \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\" (UID: \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\") " Dec 16 09:40:51.040098 kubelet[2715]: I1216 09:40:51.038562 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgnvq\" (UniqueName: \"kubernetes.io/projected/0efe44f3-b6a3-424b-bdbe-df0a131921ef-kube-api-access-mgnvq\") pod \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\" (UID: \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\") " Dec 16 09:40:51.040098 kubelet[2715]: I1216 09:40:51.038582 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-xtables-lock\") pod \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\" (UID: \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\") " Dec 16 09:40:51.040098 kubelet[2715]: I1216 09:40:51.038595 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-var-lib-calico\") pod \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\" (UID: \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\") " Dec 16 09:40:51.040098 kubelet[2715]: I1216 09:40:51.038786 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-policysync\") 
pod \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\" (UID: \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\") " Dec 16 09:40:51.040243 containerd[1495]: time="2024-12-16T09:40:51.039330802Z" level=info msg="TearDown network for sandbox \"4daea6f0886b720c97cf2fc782f7331ac86b8609f203dad88da7cf848b1c46bf\" successfully" Dec 16 09:40:51.040243 containerd[1495]: time="2024-12-16T09:40:51.039350881Z" level=info msg="StopPodSandbox for \"4daea6f0886b720c97cf2fc782f7331ac86b8609f203dad88da7cf848b1c46bf\" returns successfully" Dec 16 09:40:51.040299 kubelet[2715]: I1216 09:40:51.038814 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-var-run-calico\") pod \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\" (UID: \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\") " Dec 16 09:40:51.040299 kubelet[2715]: I1216 09:40:51.038830 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-flexvol-driver-host\") pod \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\" (UID: \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\") " Dec 16 09:40:51.040299 kubelet[2715]: I1216 09:40:51.038843 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-cni-log-dir\") pod \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\" (UID: \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\") " Dec 16 09:40:51.040299 kubelet[2715]: I1216 09:40:51.039458 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0efe44f3-b6a3-424b-bdbe-df0a131921ef-tigera-ca-bundle\") pod \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\" (UID: \"0efe44f3-b6a3-424b-bdbe-df0a131921ef\") " Dec 16 09:40:51.054964 kubelet[2715]: I1216 09:40:51.050967 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "0efe44f3-b6a3-424b-bdbe-df0a131921ef" (UID: "0efe44f3-b6a3-424b-bdbe-df0a131921ef"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 09:40:51.058459 kubelet[2715]: I1216 09:40:51.056608 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "0efe44f3-b6a3-424b-bdbe-df0a131921ef" (UID: "0efe44f3-b6a3-424b-bdbe-df0a131921ef"). InnerVolumeSpecName "cni-net-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 09:40:51.058542 kubelet[2715]: E1216 09:40:51.058510 2715 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0efe44f3-b6a3-424b-bdbe-df0a131921ef" containerName="calico-node" Dec 16 09:40:51.058590 kubelet[2715]: E1216 09:40:51.058563 2715 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0efe44f3-b6a3-424b-bdbe-df0a131921ef" containerName="flexvol-driver" Dec 16 09:40:51.058590 kubelet[2715]: E1216 09:40:51.058574 2715 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0efe44f3-b6a3-424b-bdbe-df0a131921ef" containerName="install-cni" Dec 16 09:40:51.058590 kubelet[2715]: E1216 09:40:51.058582 2715 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0efe44f3-b6a3-424b-bdbe-df0a131921ef" containerName="calico-node" Dec 16 09:40:51.058590 kubelet[2715]: E1216 09:40:51.058590 2715 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0efe44f3-b6a3-424b-bdbe-df0a131921ef" containerName="calico-node" Dec 16 09:40:51.058721 kubelet[2715]: I1216 09:40:51.058657 2715 memory_manager.go:354] "RemoveStaleState removing state" podUID="0efe44f3-b6a3-424b-bdbe-df0a131921ef" containerName="calico-node" Dec 16 09:40:51.058721 kubelet[2715]: I1216 09:40:51.058673 2715 memory_manager.go:354] "RemoveStaleState removing state" podUID="0efe44f3-b6a3-424b-bdbe-df0a131921ef" containerName="calico-node" Dec 16 09:40:51.059002 kubelet[2715]: I1216 09:40:51.058747 2715 memory_manager.go:354] "RemoveStaleState removing state" podUID="0efe44f3-b6a3-424b-bdbe-df0a131921ef" containerName="calico-node" Dec 16 09:40:51.064256 kubelet[2715]: I1216 09:40:51.062983 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0efe44f3-b6a3-424b-bdbe-df0a131921ef" (UID: "0efe44f3-b6a3-424b-bdbe-df0a131921ef"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 09:40:51.074360 kubelet[2715]: I1216 09:40:51.074307 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0efe44f3-b6a3-424b-bdbe-df0a131921ef-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "0efe44f3-b6a3-424b-bdbe-df0a131921ef" (UID: "0efe44f3-b6a3-424b-bdbe-df0a131921ef"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 09:40:51.074763 kubelet[2715]: I1216 09:40:51.074708 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-policysync" (OuterVolumeSpecName: "policysync") pod "0efe44f3-b6a3-424b-bdbe-df0a131921ef" (UID: "0efe44f3-b6a3-424b-bdbe-df0a131921ef"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 09:40:51.076750 kubelet[2715]: I1216 09:40:51.074869 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0efe44f3-b6a3-424b-bdbe-df0a131921ef" (UID: "0efe44f3-b6a3-424b-bdbe-df0a131921ef"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 09:40:51.076750 kubelet[2715]: I1216 09:40:51.074891 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "0efe44f3-b6a3-424b-bdbe-df0a131921ef" (UID: "0efe44f3-b6a3-424b-bdbe-df0a131921ef"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 09:40:51.076750 kubelet[2715]: I1216 09:40:51.074907 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "0efe44f3-b6a3-424b-bdbe-df0a131921ef" (UID: "0efe44f3-b6a3-424b-bdbe-df0a131921ef"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 09:40:51.076750 kubelet[2715]: I1216 09:40:51.074948 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "0efe44f3-b6a3-424b-bdbe-df0a131921ef" (UID: "0efe44f3-b6a3-424b-bdbe-df0a131921ef"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 09:40:51.076750 kubelet[2715]: I1216 09:40:51.074965 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "0efe44f3-b6a3-424b-bdbe-df0a131921ef" (UID: "0efe44f3-b6a3-424b-bdbe-df0a131921ef"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 09:40:51.077916 containerd[1495]: time="2024-12-16T09:40:51.077873608Z" level=error msg="StopPodSandbox for \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\" failed" error="failed to destroy network for sandbox \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:40:51.078285 kubelet[2715]: I1216 09:40:51.078243 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0efe44f3-b6a3-424b-bdbe-df0a131921ef-node-certs" (OuterVolumeSpecName: "node-certs") pod "0efe44f3-b6a3-424b-bdbe-df0a131921ef" (UID: "0efe44f3-b6a3-424b-bdbe-df0a131921ef"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 09:40:51.079390 systemd[1]: Created slice kubepods-besteffort-pod39aa2cc9_5842_4f44_bf0d_80587bc01330.slice - libcontainer container kubepods-besteffort-pod39aa2cc9_5842_4f44_bf0d_80587bc01330.slice. 
Dec 16 09:40:51.080734 kubelet[2715]: E1216 09:40:51.079714 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" Dec 16 09:40:51.080734 kubelet[2715]: E1216 09:40:51.079821 2715 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab"} Dec 16 09:40:51.080734 kubelet[2715]: E1216 09:40:51.079875 2715 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"9e2e45b5-d5ad-4866-a437-2038f6559801\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" logger="UnhandledError" Dec 16 09:40:51.081600 kubelet[2715]: E1216 09:40:51.081113 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9e2e45b5-d5ad-4866-a437-2038f6559801\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74864695c7-klzq2" podUID="9e2e45b5-d5ad-4866-a437-2038f6559801" Dec 16 09:40:51.083258 kubelet[2715]: I1216 09:40:51.083108 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0efe44f3-b6a3-424b-bdbe-df0a131921ef-kube-api-access-mgnvq" (OuterVolumeSpecName: "kube-api-access-mgnvq") pod "0efe44f3-b6a3-424b-bdbe-df0a131921ef" (UID: "0efe44f3-b6a3-424b-bdbe-df0a131921ef"). InnerVolumeSpecName "kube-api-access-mgnvq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 09:40:51.141285 kubelet[2715]: I1216 09:40:51.141233 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/180f1fe1-23f0-4551-b070-27f6a4652cc4-typha-certs\") pod \"180f1fe1-23f0-4551-b070-27f6a4652cc4\" (UID: \"180f1fe1-23f0-4551-b070-27f6a4652cc4\") " Dec 16 09:40:51.141816 kubelet[2715]: I1216 09:40:51.141790 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcls5\" (UniqueName: \"kubernetes.io/projected/180f1fe1-23f0-4551-b070-27f6a4652cc4-kube-api-access-xcls5\") pod \"180f1fe1-23f0-4551-b070-27f6a4652cc4\" (UID: \"180f1fe1-23f0-4551-b070-27f6a4652cc4\") " Dec 16 09:40:51.142571 kubelet[2715]: I1216 09:40:51.142555 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/180f1fe1-23f0-4551-b070-27f6a4652cc4-tigera-ca-bundle\") pod \"180f1fe1-23f0-4551-b070-27f6a4652cc4\" (UID: \"180f1fe1-23f0-4551-b070-27f6a4652cc4\") " Dec 16 09:40:51.142689 kubelet[2715]: I1216 09:40:51.142675 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/39aa2cc9-5842-4f44-bf0d-80587bc01330-cni-bin-dir\") pod \"calico-node-8fm2x\" (UID: \"39aa2cc9-5842-4f44-bf0d-80587bc01330\") " pod="calico-system/calico-node-8fm2x" Dec 16 09:40:51.147230 kubelet[2715]: I1216 09:40:51.147205 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39aa2cc9-5842-4f44-bf0d-80587bc01330-lib-modules\") pod \"calico-node-8fm2x\" (UID: \"39aa2cc9-5842-4f44-bf0d-80587bc01330\") " pod="calico-system/calico-node-8fm2x" Dec 16 09:40:51.149849 kubelet[2715]: I1216 09:40:51.147343 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39aa2cc9-5842-4f44-bf0d-80587bc01330-xtables-lock\") pod \"calico-node-8fm2x\" (UID: \"39aa2cc9-5842-4f44-bf0d-80587bc01330\") " pod="calico-system/calico-node-8fm2x" Dec 16 09:40:51.149849 kubelet[2715]: I1216 09:40:51.147363 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/39aa2cc9-5842-4f44-bf0d-80587bc01330-policysync\") pod \"calico-node-8fm2x\" (UID: \"39aa2cc9-5842-4f44-bf0d-80587bc01330\") " pod="calico-system/calico-node-8fm2x" Dec 16 09:40:51.149849 kubelet[2715]: I1216 09:40:51.147385 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39aa2cc9-5842-4f44-bf0d-80587bc01330-tigera-ca-bundle\") pod \"calico-node-8fm2x\" (UID: \"39aa2cc9-5842-4f44-bf0d-80587bc01330\") " pod="calico-system/calico-node-8fm2x" Dec 16 09:40:51.149849 kubelet[2715]: I1216 09:40:51.147409 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/39aa2cc9-5842-4f44-bf0d-80587bc01330-var-lib-calico\") pod \"calico-node-8fm2x\" (UID: \"39aa2cc9-5842-4f44-bf0d-80587bc01330\") " pod="calico-system/calico-node-8fm2x" Dec 16 09:40:51.149849 kubelet[2715]: I1216 09:40:51.147428 2715 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/39aa2cc9-5842-4f44-bf0d-80587bc01330-flexvol-driver-host\") pod \"calico-node-8fm2x\" (UID: \"39aa2cc9-5842-4f44-bf0d-80587bc01330\") " pod="calico-system/calico-node-8fm2x" Dec 16 09:40:51.149995 kubelet[2715]: I1216 09:40:51.147445 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrnfw\" (UniqueName: \"kubernetes.io/projected/39aa2cc9-5842-4f44-bf0d-80587bc01330-kube-api-access-qrnfw\") pod \"calico-node-8fm2x\" (UID: \"39aa2cc9-5842-4f44-bf0d-80587bc01330\") " pod="calico-system/calico-node-8fm2x" Dec 16 09:40:51.149995 kubelet[2715]: I1216 09:40:51.147470 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/39aa2cc9-5842-4f44-bf0d-80587bc01330-node-certs\") pod \"calico-node-8fm2x\" (UID: \"39aa2cc9-5842-4f44-bf0d-80587bc01330\") " pod="calico-system/calico-node-8fm2x" Dec 16 09:40:51.149995 kubelet[2715]: I1216 09:40:51.147489 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/39aa2cc9-5842-4f44-bf0d-80587bc01330-cni-net-dir\") pod \"calico-node-8fm2x\" (UID: \"39aa2cc9-5842-4f44-bf0d-80587bc01330\") " pod="calico-system/calico-node-8fm2x" Dec 16 09:40:51.149995 kubelet[2715]: I1216 09:40:51.147506 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/39aa2cc9-5842-4f44-bf0d-80587bc01330-var-run-calico\") pod \"calico-node-8fm2x\" (UID: \"39aa2cc9-5842-4f44-bf0d-80587bc01330\") " pod="calico-system/calico-node-8fm2x" Dec 16 09:40:51.149995 kubelet[2715]: I1216 09:40:51.147520 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/39aa2cc9-5842-4f44-bf0d-80587bc01330-cni-log-dir\") pod \"calico-node-8fm2x\" (UID: \"39aa2cc9-5842-4f44-bf0d-80587bc01330\") " pod="calico-system/calico-node-8fm2x" Dec 16 09:40:51.149995 kubelet[2715]: I1216 09:40:51.147550 2715 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-lib-modules\") on node \"ci-4081-2-1-4-1bd0c0376a\" DevicePath \"\"" Dec 16 09:40:51.150163 kubelet[2715]: I1216 09:40:51.147562 2715 reconciler_common.go:288] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-cni-bin-dir\") on node \"ci-4081-2-1-4-1bd0c0376a\" DevicePath \"\"" Dec 16 09:40:51.150163 kubelet[2715]: I1216 09:40:51.147571 2715 reconciler_common.go:288] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0efe44f3-b6a3-424b-bdbe-df0a131921ef-node-certs\") on node \"ci-4081-2-1-4-1bd0c0376a\" DevicePath \"\"" Dec 16 09:40:51.150163 kubelet[2715]: I1216 09:40:51.147579 2715 reconciler_common.go:288] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-cni-net-dir\") on node \"ci-4081-2-1-4-1bd0c0376a\" DevicePath \"\"" Dec 16 09:40:51.150163 kubelet[2715]: I1216 09:40:51.147587 2715 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mgnvq\" (UniqueName: 
\"kubernetes.io/projected/0efe44f3-b6a3-424b-bdbe-df0a131921ef-kube-api-access-mgnvq\") on node \"ci-4081-2-1-4-1bd0c0376a\" DevicePath \"\"" Dec 16 09:40:51.150163 kubelet[2715]: I1216 09:40:51.147595 2715 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-xtables-lock\") on node \"ci-4081-2-1-4-1bd0c0376a\" DevicePath \"\"" Dec 16 09:40:51.150163 kubelet[2715]: I1216 09:40:51.147603 2715 reconciler_common.go:288] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-var-lib-calico\") on node \"ci-4081-2-1-4-1bd0c0376a\" DevicePath \"\"" Dec 16 09:40:51.150163 kubelet[2715]: I1216 09:40:51.147613 2715 reconciler_common.go:288] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-policysync\") on node \"ci-4081-2-1-4-1bd0c0376a\" DevicePath \"\"" Dec 16 09:40:51.150163 kubelet[2715]: I1216 09:40:51.147621 2715 reconciler_common.go:288] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-var-run-calico\") on node \"ci-4081-2-1-4-1bd0c0376a\" DevicePath \"\"" Dec 16 09:40:51.150355 kubelet[2715]: I1216 09:40:51.147630 2715 reconciler_common.go:288] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-flexvol-driver-host\") on node \"ci-4081-2-1-4-1bd0c0376a\" DevicePath \"\"" Dec 16 09:40:51.150355 kubelet[2715]: I1216 09:40:51.147639 2715 reconciler_common.go:288] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0efe44f3-b6a3-424b-bdbe-df0a131921ef-cni-log-dir\") on node \"ci-4081-2-1-4-1bd0c0376a\" DevicePath \"\"" Dec 16 09:40:51.150355 kubelet[2715]: I1216 09:40:51.147648 2715 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0efe44f3-b6a3-424b-bdbe-df0a131921ef-tigera-ca-bundle\") on node \"ci-4081-2-1-4-1bd0c0376a\" DevicePath \"\"" Dec 16 09:40:51.155831 kubelet[2715]: I1216 09:40:51.155675 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/180f1fe1-23f0-4551-b070-27f6a4652cc4-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "180f1fe1-23f0-4551-b070-27f6a4652cc4" (UID: "180f1fe1-23f0-4551-b070-27f6a4652cc4"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 09:40:51.156048 kubelet[2715]: I1216 09:40:51.155996 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/180f1fe1-23f0-4551-b070-27f6a4652cc4-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "180f1fe1-23f0-4551-b070-27f6a4652cc4" (UID: "180f1fe1-23f0-4551-b070-27f6a4652cc4"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 09:40:51.157895 kubelet[2715]: I1216 09:40:51.157868 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/180f1fe1-23f0-4551-b070-27f6a4652cc4-kube-api-access-xcls5" (OuterVolumeSpecName: "kube-api-access-xcls5") pod "180f1fe1-23f0-4551-b070-27f6a4652cc4" (UID: "180f1fe1-23f0-4551-b070-27f6a4652cc4"). InnerVolumeSpecName "kube-api-access-xcls5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 09:40:51.248611 kubelet[2715]: I1216 09:40:51.248453 2715 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/180f1fe1-23f0-4551-b070-27f6a4652cc4-tigera-ca-bundle\") on node \"ci-4081-2-1-4-1bd0c0376a\" DevicePath \"\"" Dec 16 09:40:51.248611 kubelet[2715]: I1216 09:40:51.248513 2715 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xcls5\" (UniqueName: \"kubernetes.io/projected/180f1fe1-23f0-4551-b070-27f6a4652cc4-kube-api-access-xcls5\") on node \"ci-4081-2-1-4-1bd0c0376a\" DevicePath \"\"" Dec 16 09:40:51.248611 kubelet[2715]: I1216 09:40:51.248530 2715 reconciler_common.go:288] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/180f1fe1-23f0-4551-b070-27f6a4652cc4-typha-certs\") on node \"ci-4081-2-1-4-1bd0c0376a\" DevicePath \"\"" Dec 16 09:40:51.388020 containerd[1495]: time="2024-12-16T09:40:51.387911378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8fm2x,Uid:39aa2cc9-5842-4f44-bf0d-80587bc01330,Namespace:calico-system,Attempt:0,}" Dec 16 09:40:51.421167 containerd[1495]: time="2024-12-16T09:40:51.420788438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 16 09:40:51.421167 containerd[1495]: time="2024-12-16T09:40:51.420915544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 16 09:40:51.421167 containerd[1495]: time="2024-12-16T09:40:51.420938809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:40:51.421883 containerd[1495]: time="2024-12-16T09:40:51.421082748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:40:51.450213 systemd[1]: Started cri-containerd-db40c7822c2c7ce11b6389a6efca7d7d4b1c755162ef6331c74e9fb051d56c1c.scope - libcontainer container db40c7822c2c7ce11b6389a6efca7d7d4b1c755162ef6331c74e9fb051d56c1c. Dec 16 09:40:51.502966 kubelet[2715]: I1216 09:40:51.500993 2715 scope.go:117] "RemoveContainer" containerID="28322d65c93a493c820c0886dadb21ffa45007ca8a7c988faa5097e9012e3ccc" Dec 16 09:40:51.507033 containerd[1495]: time="2024-12-16T09:40:51.506995707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8fm2x,Uid:39aa2cc9-5842-4f44-bf0d-80587bc01330,Namespace:calico-system,Attempt:0,} returns sandbox id \"db40c7822c2c7ce11b6389a6efca7d7d4b1c755162ef6331c74e9fb051d56c1c\"" Dec 16 09:40:51.509611 systemd[1]: Removed slice kubepods-besteffort-pod180f1fe1_23f0_4551_b070_27f6a4652cc4.slice - libcontainer container kubepods-besteffort-pod180f1fe1_23f0_4551_b070_27f6a4652cc4.slice. 
Dec 16 09:40:51.512783 containerd[1495]: time="2024-12-16T09:40:51.512534889Z" level=info msg="RemoveContainer for \"28322d65c93a493c820c0886dadb21ffa45007ca8a7c988faa5097e9012e3ccc\"" Dec 16 09:40:51.515097 containerd[1495]: time="2024-12-16T09:40:51.515061289Z" level=info msg="CreateContainer within sandbox \"db40c7822c2c7ce11b6389a6efca7d7d4b1c755162ef6331c74e9fb051d56c1c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 09:40:51.528145 containerd[1495]: time="2024-12-16T09:40:51.528059439Z" level=info msg="RemoveContainer for \"28322d65c93a493c820c0886dadb21ffa45007ca8a7c988faa5097e9012e3ccc\" returns successfully" Dec 16 09:40:51.528695 kubelet[2715]: I1216 09:40:51.528485 2715 scope.go:117] "RemoveContainer" containerID="28322d65c93a493c820c0886dadb21ffa45007ca8a7c988faa5097e9012e3ccc" Dec 16 09:40:51.529741 containerd[1495]: time="2024-12-16T09:40:51.529668263Z" level=error msg="ContainerStatus for \"28322d65c93a493c820c0886dadb21ffa45007ca8a7c988faa5097e9012e3ccc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"28322d65c93a493c820c0886dadb21ffa45007ca8a7c988faa5097e9012e3ccc\": not found" Dec 16 09:40:51.530079 kubelet[2715]: E1216 09:40:51.530010 2715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"28322d65c93a493c820c0886dadb21ffa45007ca8a7c988faa5097e9012e3ccc\": not found" containerID="28322d65c93a493c820c0886dadb21ffa45007ca8a7c988faa5097e9012e3ccc" Dec 16 09:40:51.530214 kubelet[2715]: I1216 09:40:51.530050 2715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"28322d65c93a493c820c0886dadb21ffa45007ca8a7c988faa5097e9012e3ccc"} err="failed to get container status \"28322d65c93a493c820c0886dadb21ffa45007ca8a7c988faa5097e9012e3ccc\": rpc error: code = NotFound desc = an error occurred when try to find container \"28322d65c93a493c820c0886dadb21ffa45007ca8a7c988faa5097e9012e3ccc\": not found" Dec 16 09:40:51.530214 kubelet[2715]: I1216 09:40:51.530151 2715 scope.go:117] "RemoveContainer" containerID="504ace4e38a02a7e2e3af221072af8e57468cbc2529f1831250736e86c0aeb76" Dec 16 09:40:51.535766 containerd[1495]: time="2024-12-16T09:40:51.535270558Z" level=info msg="RemoveContainer for \"504ace4e38a02a7e2e3af221072af8e57468cbc2529f1831250736e86c0aeb76\"" Dec 16 09:40:51.536798 systemd[1]: Removed slice kubepods-besteffort-pod0efe44f3_b6a3_424b_bdbe_df0a131921ef.slice - libcontainer container kubepods-besteffort-pod0efe44f3_b6a3_424b_bdbe_df0a131921ef.slice. 
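[Annotation] The RemoveContainer/ContainerStatus pair above is the expected idempotency dance, not a real failure: the kubelet deletes container 28322d65... successfully, and a follow-up status query returns gRPC NotFound, which it logs and moves past. A sketch of that pattern against any gRPC-based runtime, using only the standard status/codes packages; the runtime client interface here is a hypothetical stand-in, not the real CRI client:

```go
package runtimeutil

import (
	"context"
	"log"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// runtimeClient is a hypothetical stand-in for a gRPC container runtime client.
type runtimeClient interface {
	RemoveContainer(ctx context.Context, id string) error
}

// removeIdempotent deletes a container and treats NotFound as success,
// mirroring the "not found" ContainerStatus error in the log above.
func removeIdempotent(ctx context.Context, rt runtimeClient, id string) error {
	err := rt.RemoveContainer(ctx, id)
	if status.Code(err) == codes.NotFound {
		log.Printf("container %s already gone; nothing to do", id)
		return nil
	}
	return err
}
```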
Dec 16 09:40:51.545299 containerd[1495]: time="2024-12-16T09:40:51.545247069Z" level=info msg="RemoveContainer for \"504ace4e38a02a7e2e3af221072af8e57468cbc2529f1831250736e86c0aeb76\" returns successfully" Dec 16 09:40:51.545791 kubelet[2715]: I1216 09:40:51.545671 2715 scope.go:117] "RemoveContainer" containerID="fb7d8f7ec6c442f00c950fb0ce295aed65f2e9bd249afd14da4c802c2f1033e1" Dec 16 09:40:51.549101 containerd[1495]: time="2024-12-16T09:40:51.548916833Z" level=info msg="RemoveContainer for \"fb7d8f7ec6c442f00c950fb0ce295aed65f2e9bd249afd14da4c802c2f1033e1\"" Dec 16 09:40:51.556655 containerd[1495]: time="2024-12-16T09:40:51.556520921Z" level=info msg="RemoveContainer for \"fb7d8f7ec6c442f00c950fb0ce295aed65f2e9bd249afd14da4c802c2f1033e1\" returns successfully" Dec 16 09:40:51.556805 kubelet[2715]: I1216 09:40:51.556755 2715 scope.go:117] "RemoveContainer" containerID="d39bbfd00968949a3ee1f018c3ba9642497208a1766b6ec41bda751bbc8726ef" Dec 16 09:40:51.560021 containerd[1495]: time="2024-12-16T09:40:51.559910232Z" level=info msg="CreateContainer within sandbox \"db40c7822c2c7ce11b6389a6efca7d7d4b1c755162ef6331c74e9fb051d56c1c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c959088d5b4e6c36974417045a076ecc70576a5be7e11f74df1fc3ce2cf7becc\"" Dec 16 09:40:51.562237 containerd[1495]: time="2024-12-16T09:40:51.562213360Z" level=info msg="RemoveContainer for \"d39bbfd00968949a3ee1f018c3ba9642497208a1766b6ec41bda751bbc8726ef\"" Dec 16 09:40:51.563329 containerd[1495]: time="2024-12-16T09:40:51.562810526Z" level=info msg="StartContainer for \"c959088d5b4e6c36974417045a076ecc70576a5be7e11f74df1fc3ce2cf7becc\"" Dec 16 09:40:51.568332 containerd[1495]: time="2024-12-16T09:40:51.568287689Z" level=info msg="RemoveContainer for \"d39bbfd00968949a3ee1f018c3ba9642497208a1766b6ec41bda751bbc8726ef\" returns successfully" Dec 16 09:40:51.594931 systemd[1]: Started cri-containerd-c959088d5b4e6c36974417045a076ecc70576a5be7e11f74df1fc3ce2cf7becc.scope - libcontainer container c959088d5b4e6c36974417045a076ecc70576a5be7e11f74df1fc3ce2cf7becc. Dec 16 09:40:51.634169 containerd[1495]: time="2024-12-16T09:40:51.633992020Z" level=info msg="StartContainer for \"c959088d5b4e6c36974417045a076ecc70576a5be7e11f74df1fc3ce2cf7becc\" returns successfully" Dec 16 09:40:51.684681 systemd[1]: cri-containerd-c959088d5b4e6c36974417045a076ecc70576a5be7e11f74df1fc3ce2cf7becc.scope: Deactivated successfully. Dec 16 09:40:51.712252 containerd[1495]: time="2024-12-16T09:40:51.712005288Z" level=info msg="shim disconnected" id=c959088d5b4e6c36974417045a076ecc70576a5be7e11f74df1fc3ce2cf7becc namespace=k8s.io Dec 16 09:40:51.712252 containerd[1495]: time="2024-12-16T09:40:51.712071476Z" level=warning msg="cleaning up after shim disconnected" id=c959088d5b4e6c36974417045a076ecc70576a5be7e11f74df1fc3ce2cf7becc namespace=k8s.io Dec 16 09:40:51.712252 containerd[1495]: time="2024-12-16T09:40:51.712083109Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 09:40:51.832317 systemd[1]: var-lib-kubelet-pods-0efe44f3\x2db6a3\x2d424b\x2dbdbe\x2ddf0a131921ef-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Dec 16 09:40:51.832689 systemd[1]: var-lib-kubelet-pods-180f1fe1\x2d23f0\x2d4551\x2db070\x2d27f6a4652cc4-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. 
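[Annotation] flexvol-driver is the first of calico-node's init containers: StartContainer returns, the process exits almost immediately, the systemd scope is deactivated, and containerd logs the shim teardown. The kubelet only proceeds to the next init container (install-cni, started at 09:40:52 below) once the previous one exits with code 0. A sketch of that wait, polling a hypothetical status function rather than the real CRI client:

```go
package podutil

import (
	"context"
	"fmt"
	"time"
)

type containerState int

const (
	stateRunning containerState = iota
	stateExited
)

type containerStatus struct {
	State    containerState
	ExitCode int
}

// waitForInit polls until the init container exits, then reports whether
// the next init container may start, mirroring the flexvol-driver ->
// install-cni -> calico-node sequence in the log.
func waitForInit(ctx context.Context, statusFn func(id string) (containerStatus, error), id string) error {
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		st, err := statusFn(id)
		if err != nil {
			return err
		}
		if st.State == stateExited {
			if st.ExitCode != 0 {
				return fmt.Errorf("init container %s failed with exit code %d", id, st.ExitCode)
			}
			return nil // next init container may start
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}
```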
Dec 16 09:40:51.832946 systemd[1]: var-lib-kubelet-pods-0efe44f3\x2db6a3\x2d424b\x2dbdbe\x2ddf0a131921ef-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmgnvq.mount: Deactivated successfully. Dec 16 09:40:51.833193 systemd[1]: var-lib-kubelet-pods-0efe44f3\x2db6a3\x2d424b\x2dbdbe\x2ddf0a131921ef-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Dec 16 09:40:51.833379 systemd[1]: var-lib-kubelet-pods-180f1fe1\x2d23f0\x2d4551\x2db070\x2d27f6a4652cc4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxcls5.mount: Deactivated successfully. Dec 16 09:40:51.833539 systemd[1]: var-lib-kubelet-pods-180f1fe1\x2d23f0\x2d4551\x2db070\x2d27f6a4652cc4-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Dec 16 09:40:52.076449 kubelet[2715]: E1216 09:40:52.076378 2715 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="180f1fe1-23f0-4551-b070-27f6a4652cc4" containerName="calico-typha" Dec 16 09:40:52.076449 kubelet[2715]: I1216 09:40:52.076446 2715 memory_manager.go:354] "RemoveStaleState removing state" podUID="180f1fe1-23f0-4551-b070-27f6a4652cc4" containerName="calico-typha" Dec 16 09:40:52.092179 systemd[1]: Created slice kubepods-besteffort-podb3b059f2_b4c4_4d9c_88a7_ca549dbf6ff2.slice - libcontainer container kubepods-besteffort-podb3b059f2_b4c4_4d9c_88a7_ca549dbf6ff2.slice. Dec 16 09:40:52.147152 kubelet[2715]: I1216 09:40:52.146830 2715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0efe44f3-b6a3-424b-bdbe-df0a131921ef" path="/var/lib/kubelet/pods/0efe44f3-b6a3-424b-bdbe-df0a131921ef/volumes" Dec 16 09:40:52.149470 kubelet[2715]: I1216 09:40:52.149433 2715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="180f1fe1-23f0-4551-b070-27f6a4652cc4" path="/var/lib/kubelet/pods/180f1fe1-23f0-4551-b070-27f6a4652cc4/volumes" Dec 16 09:40:52.155850 kubelet[2715]: I1216 09:40:52.155693 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3b059f2-b4c4-4d9c-88a7-ca549dbf6ff2-tigera-ca-bundle\") pod \"calico-typha-784949988f-cf4qh\" (UID: \"b3b059f2-b4c4-4d9c-88a7-ca549dbf6ff2\") " pod="calico-system/calico-typha-784949988f-cf4qh" Dec 16 09:40:52.155850 kubelet[2715]: I1216 09:40:52.155810 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b3b059f2-b4c4-4d9c-88a7-ca549dbf6ff2-typha-certs\") pod \"calico-typha-784949988f-cf4qh\" (UID: \"b3b059f2-b4c4-4d9c-88a7-ca549dbf6ff2\") " pod="calico-system/calico-typha-784949988f-cf4qh" Dec 16 09:40:52.156028 kubelet[2715]: I1216 09:40:52.155905 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlcll\" (UniqueName: \"kubernetes.io/projected/b3b059f2-b4c4-4d9c-88a7-ca549dbf6ff2-kube-api-access-vlcll\") pod \"calico-typha-784949988f-cf4qh\" (UID: \"b3b059f2-b4c4-4d9c-88a7-ca549dbf6ff2\") " pod="calico-system/calico-typha-784949988f-cf4qh" Dec 16 09:40:52.396276 containerd[1495]: time="2024-12-16T09:40:52.395931022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-784949988f-cf4qh,Uid:b3b059f2-b4c4-4d9c-88a7-ca549dbf6ff2,Namespace:calico-system,Attempt:0,}" Dec 16 09:40:52.444385 containerd[1495]: time="2024-12-16T09:40:52.444099417Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 16 09:40:52.444385 containerd[1495]: time="2024-12-16T09:40:52.444162580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 16 09:40:52.444385 containerd[1495]: time="2024-12-16T09:40:52.444178070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:40:52.444704 containerd[1495]: time="2024-12-16T09:40:52.444447461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:40:52.467996 systemd[1]: Started cri-containerd-be0ba26672f878b137628dc2ab76f2c122bcb60f4002bd74fe3b9f18852163b2.scope - libcontainer container be0ba26672f878b137628dc2ab76f2c122bcb60f4002bd74fe3b9f18852163b2. Dec 16 09:40:52.519159 containerd[1495]: time="2024-12-16T09:40:52.519022143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-784949988f-cf4qh,Uid:b3b059f2-b4c4-4d9c-88a7-ca549dbf6ff2,Namespace:calico-system,Attempt:0,} returns sandbox id \"be0ba26672f878b137628dc2ab76f2c122bcb60f4002bd74fe3b9f18852163b2\"" Dec 16 09:40:52.532325 containerd[1495]: time="2024-12-16T09:40:52.531883441Z" level=info msg="CreateContainer within sandbox \"be0ba26672f878b137628dc2ab76f2c122bcb60f4002bd74fe3b9f18852163b2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 09:40:52.553293 containerd[1495]: time="2024-12-16T09:40:52.553215856Z" level=info msg="CreateContainer within sandbox \"db40c7822c2c7ce11b6389a6efca7d7d4b1c755162ef6331c74e9fb051d56c1c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 09:40:52.568778 containerd[1495]: time="2024-12-16T09:40:52.568623343Z" level=info msg="CreateContainer within sandbox \"be0ba26672f878b137628dc2ab76f2c122bcb60f4002bd74fe3b9f18852163b2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6cd39bbf739b6f3bc87c25870b24488a91009efcc2bfafab78033a7cc69c0296\"" Dec 16 09:40:52.570508 containerd[1495]: time="2024-12-16T09:40:52.569519267Z" level=info msg="StartContainer for \"6cd39bbf739b6f3bc87c25870b24488a91009efcc2bfafab78033a7cc69c0296\"" Dec 16 09:40:52.579545 containerd[1495]: time="2024-12-16T09:40:52.579500679Z" level=info msg="CreateContainer within sandbox \"db40c7822c2c7ce11b6389a6efca7d7d4b1c755162ef6331c74e9fb051d56c1c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6c645e828e1cb00421938e2a45533e32533e0314186ae22ff3fce80cb6e29d26\"" Dec 16 09:40:52.582321 containerd[1495]: time="2024-12-16T09:40:52.582285019Z" level=info msg="StartContainer for \"6c645e828e1cb00421938e2a45533e32533e0314186ae22ff3fce80cb6e29d26\"" Dec 16 09:40:52.601903 systemd[1]: Started cri-containerd-6cd39bbf739b6f3bc87c25870b24488a91009efcc2bfafab78033a7cc69c0296.scope - libcontainer container 6cd39bbf739b6f3bc87c25870b24488a91009efcc2bfafab78033a7cc69c0296. Dec 16 09:40:52.615880 systemd[1]: Started cri-containerd-6c645e828e1cb00421938e2a45533e32533e0314186ae22ff3fce80cb6e29d26.scope - libcontainer container 6c645e828e1cb00421938e2a45533e32533e0314186ae22ff3fce80cb6e29d26. 
Dec 16 09:40:52.675172 containerd[1495]: time="2024-12-16T09:40:52.674102080Z" level=info msg="StartContainer for \"6cd39bbf739b6f3bc87c25870b24488a91009efcc2bfafab78033a7cc69c0296\" returns successfully" Dec 16 09:40:52.682026 containerd[1495]: time="2024-12-16T09:40:52.681965623Z" level=info msg="StartContainer for \"6c645e828e1cb00421938e2a45533e32533e0314186ae22ff3fce80cb6e29d26\" returns successfully" Dec 16 09:40:53.142757 containerd[1495]: time="2024-12-16T09:40:53.142492901Z" level=info msg="StopPodSandbox for \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\"" Dec 16 09:40:53.175989 containerd[1495]: time="2024-12-16T09:40:53.175862203Z" level=error msg="StopPodSandbox for \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\" failed" error="failed to destroy network for sandbox \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:40:53.176426 kubelet[2715]: E1216 09:40:53.176267 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" Dec 16 09:40:53.176426 kubelet[2715]: E1216 09:40:53.176323 2715 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664"} Dec 16 09:40:53.176426 kubelet[2715]: E1216 09:40:53.176355 2715 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b184f5aa-f13a-4907-82b2-11f9a166985b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 16 09:40:53.176426 kubelet[2715]: E1216 09:40:53.176379 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b184f5aa-f13a-4907-82b2-11f9a166985b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s6lhq" podUID="b184f5aa-f13a-4907-82b2-11f9a166985b" Dec 16 09:40:53.437074 systemd[1]: cri-containerd-6c645e828e1cb00421938e2a45533e32533e0314186ae22ff3fce80cb6e29d26.scope: Deactivated successfully. Dec 16 09:40:53.463325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c645e828e1cb00421938e2a45533e32533e0314186ae22ff3fce80cb6e29d26-rootfs.mount: Deactivated successfully. 
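[Annotation] The StopPodSandbox failure for csi-node-driver-s6lhq is transient by design: while calico-node is being replaced, the Calico CNI plugin refuses to run a delete because /var/lib/calico/nodename has not yet been written by a running calico-node, and the kubelet's pod worker logs "Error syncing pod, skipping" and retries on the next sync. The precondition itself is just a stat; a hedged reconstruction that only mirrors the error text above (the real check lives in Calico's cni-plugin repository):

```go
package main

import (
	"fmt"
	"os"
)

const nodenameFile = "/var/lib/calico/nodename"

// checkNodename mirrors the precondition whose failure is logged above:
// CNI teardown cannot proceed until calico-node has mounted and
// populated /var/lib/calico/.
func checkNodename() error {
	if _, err := os.Stat(nodenameFile); err != nil {
		return fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
	}
	return nil
}

func main() {
	if err := checkNodename(); err != nil {
		fmt.Println("delete failed:", err)
	}
}
```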
Dec 16 09:40:53.469213 containerd[1495]: time="2024-12-16T09:40:53.469068688Z" level=info msg="shim disconnected" id=6c645e828e1cb00421938e2a45533e32533e0314186ae22ff3fce80cb6e29d26 namespace=k8s.io Dec 16 09:40:53.469213 containerd[1495]: time="2024-12-16T09:40:53.469140347Z" level=warning msg="cleaning up after shim disconnected" id=6c645e828e1cb00421938e2a45533e32533e0314186ae22ff3fce80cb6e29d26 namespace=k8s.io Dec 16 09:40:53.469213 containerd[1495]: time="2024-12-16T09:40:53.469153313Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 09:40:53.591538 kubelet[2715]: I1216 09:40:53.579791 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-784949988f-cf4qh" podStartSLOduration=3.572937084 podStartE2EDuration="3.572937084s" podCreationTimestamp="2024-12-16 09:40:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-16 09:40:53.572537531 +0000 UTC m=+77.573993167" watchObservedRunningTime="2024-12-16 09:40:53.572937084 +0000 UTC m=+77.574392741" Dec 16 09:40:53.593173 containerd[1495]: time="2024-12-16T09:40:53.592840315Z" level=info msg="CreateContainer within sandbox \"db40c7822c2c7ce11b6389a6efca7d7d4b1c755162ef6331c74e9fb051d56c1c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 09:40:53.612086 containerd[1495]: time="2024-12-16T09:40:53.611971202Z" level=info msg="CreateContainer within sandbox \"db40c7822c2c7ce11b6389a6efca7d7d4b1c755162ef6331c74e9fb051d56c1c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"feb965f41f3d8561d0bc53a6154fdcd5cdf7741a69e0cffbbd3b5382656b821f\"" Dec 16 09:40:53.613840 containerd[1495]: time="2024-12-16T09:40:53.613530031Z" level=info msg="StartContainer for \"feb965f41f3d8561d0bc53a6154fdcd5cdf7741a69e0cffbbd3b5382656b821f\"" Dec 16 09:40:53.671627 systemd[1]: Started cri-containerd-feb965f41f3d8561d0bc53a6154fdcd5cdf7741a69e0cffbbd3b5382656b821f.scope - libcontainer container feb965f41f3d8561d0bc53a6154fdcd5cdf7741a69e0cffbbd3b5382656b821f. Dec 16 09:40:53.718146 containerd[1495]: time="2024-12-16T09:40:53.717786096Z" level=info msg="StartContainer for \"feb965f41f3d8561d0bc53a6154fdcd5cdf7741a69e0cffbbd3b5382656b821f\" returns successfully" Dec 16 09:40:54.144413 containerd[1495]: time="2024-12-16T09:40:54.143713565Z" level=info msg="StopPodSandbox for \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\"" Dec 16 09:40:54.281671 containerd[1495]: 2024-12-16 09:40:54.232 [INFO][4835] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" Dec 16 09:40:54.281671 containerd[1495]: 2024-12-16 09:40:54.232 [INFO][4835] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" iface="eth0" netns="/var/run/netns/cni-276bf84a-a537-c04d-4df1-9628370b7ecc" Dec 16 09:40:54.281671 containerd[1495]: 2024-12-16 09:40:54.233 [INFO][4835] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" iface="eth0" netns="/var/run/netns/cni-276bf84a-a537-c04d-4df1-9628370b7ecc" Dec 16 09:40:54.281671 containerd[1495]: 2024-12-16 09:40:54.234 [INFO][4835] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" iface="eth0" netns="/var/run/netns/cni-276bf84a-a537-c04d-4df1-9628370b7ecc" Dec 16 09:40:54.281671 containerd[1495]: 2024-12-16 09:40:54.234 [INFO][4835] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" Dec 16 09:40:54.281671 containerd[1495]: 2024-12-16 09:40:54.234 [INFO][4835] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" Dec 16 09:40:54.281671 containerd[1495]: 2024-12-16 09:40:54.260 [INFO][4842] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" HandleID="k8s-pod-network.dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--vspcs-eth0" Dec 16 09:40:54.281671 containerd[1495]: 2024-12-16 09:40:54.261 [INFO][4842] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:40:54.281671 containerd[1495]: 2024-12-16 09:40:54.261 [INFO][4842] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:40:54.281671 containerd[1495]: 2024-12-16 09:40:54.272 [WARNING][4842] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" HandleID="k8s-pod-network.dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--vspcs-eth0" Dec 16 09:40:54.281671 containerd[1495]: 2024-12-16 09:40:54.272 [INFO][4842] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" HandleID="k8s-pod-network.dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--vspcs-eth0" Dec 16 09:40:54.281671 containerd[1495]: 2024-12-16 09:40:54.274 [INFO][4842] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:40:54.281671 containerd[1495]: 2024-12-16 09:40:54.279 [INFO][4835] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" Dec 16 09:40:54.282885 containerd[1495]: time="2024-12-16T09:40:54.282220849Z" level=info msg="TearDown network for sandbox \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\" successfully" Dec 16 09:40:54.282885 containerd[1495]: time="2024-12-16T09:40:54.282244394Z" level=info msg="StopPodSandbox for \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\" returns successfully" Dec 16 09:40:54.283219 containerd[1495]: time="2024-12-16T09:40:54.283168593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vspcs,Uid:2b1e1fb6-74af-4597-8a80-d045a1736cc2,Namespace:kube-system,Attempt:1,}" Dec 16 09:40:54.289326 systemd[1]: run-netns-cni\x2d276bf84a\x2da537\x2dc04d\x2d4df1\x2d9628370b7ecc.mount: Deactivated successfully. 
Dec 16 09:40:54.470247 systemd-networkd[1393]: calif65b6e7a71f: Link UP Dec 16 09:40:54.470497 systemd-networkd[1393]: calif65b6e7a71f: Gained carrier Dec 16 09:40:54.493154 containerd[1495]: 2024-12-16 09:40:54.344 [INFO][4850] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 09:40:54.493154 containerd[1495]: 2024-12-16 09:40:54.360 [INFO][4850] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--vspcs-eth0 coredns-6f6b679f8f- kube-system 2b1e1fb6-74af-4597-8a80-d045a1736cc2 1003 0 2024-12-16 09:39:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-1-4-1bd0c0376a coredns-6f6b679f8f-vspcs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif65b6e7a71f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ae6e22275410005d85ca6e05907e497210ef8001435c130b3ecb07136f56ef62" Namespace="kube-system" Pod="coredns-6f6b679f8f-vspcs" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--vspcs-" Dec 16 09:40:54.493154 containerd[1495]: 2024-12-16 09:40:54.360 [INFO][4850] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ae6e22275410005d85ca6e05907e497210ef8001435c130b3ecb07136f56ef62" Namespace="kube-system" Pod="coredns-6f6b679f8f-vspcs" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--vspcs-eth0" Dec 16 09:40:54.493154 containerd[1495]: 2024-12-16 09:40:54.407 [INFO][4861] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae6e22275410005d85ca6e05907e497210ef8001435c130b3ecb07136f56ef62" HandleID="k8s-pod-network.ae6e22275410005d85ca6e05907e497210ef8001435c130b3ecb07136f56ef62" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--vspcs-eth0" Dec 16 09:40:54.493154 containerd[1495]: 2024-12-16 09:40:54.419 [INFO][4861] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ae6e22275410005d85ca6e05907e497210ef8001435c130b3ecb07136f56ef62" HandleID="k8s-pod-network.ae6e22275410005d85ca6e05907e497210ef8001435c130b3ecb07136f56ef62" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--vspcs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000292ae0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-1-4-1bd0c0376a", "pod":"coredns-6f6b679f8f-vspcs", "timestamp":"2024-12-16 09:40:54.406952282 +0000 UTC"}, Hostname:"ci-4081-2-1-4-1bd0c0376a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 09:40:54.493154 containerd[1495]: 2024-12-16 09:40:54.419 [INFO][4861] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:40:54.493154 containerd[1495]: 2024-12-16 09:40:54.419 [INFO][4861] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 16 09:40:54.493154 containerd[1495]: 2024-12-16 09:40:54.419 [INFO][4861] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-4-1bd0c0376a' Dec 16 09:40:54.493154 containerd[1495]: 2024-12-16 09:40:54.421 [INFO][4861] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ae6e22275410005d85ca6e05907e497210ef8001435c130b3ecb07136f56ef62" host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:54.493154 containerd[1495]: 2024-12-16 09:40:54.425 [INFO][4861] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:54.493154 containerd[1495]: 2024-12-16 09:40:54.433 [INFO][4861] ipam/ipam.go 489: Trying affinity for 192.168.121.0/26 host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:54.493154 containerd[1495]: 2024-12-16 09:40:54.436 [INFO][4861] ipam/ipam.go 155: Attempting to load block cidr=192.168.121.0/26 host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:54.493154 containerd[1495]: 2024-12-16 09:40:54.439 [INFO][4861] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.121.0/26 host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:54.493154 containerd[1495]: 2024-12-16 09:40:54.439 [INFO][4861] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.121.0/26 handle="k8s-pod-network.ae6e22275410005d85ca6e05907e497210ef8001435c130b3ecb07136f56ef62" host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:54.493154 containerd[1495]: 2024-12-16 09:40:54.441 [INFO][4861] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ae6e22275410005d85ca6e05907e497210ef8001435c130b3ecb07136f56ef62 Dec 16 09:40:54.493154 containerd[1495]: 2024-12-16 09:40:54.446 [INFO][4861] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.121.0/26 handle="k8s-pod-network.ae6e22275410005d85ca6e05907e497210ef8001435c130b3ecb07136f56ef62" host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:54.493154 containerd[1495]: 2024-12-16 09:40:54.451 [INFO][4861] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.121.1/26] block=192.168.121.0/26 handle="k8s-pod-network.ae6e22275410005d85ca6e05907e497210ef8001435c130b3ecb07136f56ef62" host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:54.493154 containerd[1495]: 2024-12-16 09:40:54.451 [INFO][4861] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.121.1/26] handle="k8s-pod-network.ae6e22275410005d85ca6e05907e497210ef8001435c130b3ecb07136f56ef62" host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:54.493154 containerd[1495]: 2024-12-16 09:40:54.452 [INFO][4861] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 16 09:40:54.493154 containerd[1495]: 2024-12-16 09:40:54.452 [INFO][4861] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.121.1/26] IPv6=[] ContainerID="ae6e22275410005d85ca6e05907e497210ef8001435c130b3ecb07136f56ef62" HandleID="k8s-pod-network.ae6e22275410005d85ca6e05907e497210ef8001435c130b3ecb07136f56ef62" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--vspcs-eth0" Dec 16 09:40:54.494336 containerd[1495]: 2024-12-16 09:40:54.456 [INFO][4850] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ae6e22275410005d85ca6e05907e497210ef8001435c130b3ecb07136f56ef62" Namespace="kube-system" Pod="coredns-6f6b679f8f-vspcs" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--vspcs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--vspcs-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"2b1e1fb6-74af-4597-8a80-d045a1736cc2", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 39, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-1bd0c0376a", ContainerID:"", Pod:"coredns-6f6b679f8f-vspcs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif65b6e7a71f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:40:54.494336 containerd[1495]: 2024-12-16 09:40:54.456 [INFO][4850] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.121.1/32] ContainerID="ae6e22275410005d85ca6e05907e497210ef8001435c130b3ecb07136f56ef62" Namespace="kube-system" Pod="coredns-6f6b679f8f-vspcs" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--vspcs-eth0" Dec 16 09:40:54.494336 containerd[1495]: 2024-12-16 09:40:54.456 [INFO][4850] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif65b6e7a71f ContainerID="ae6e22275410005d85ca6e05907e497210ef8001435c130b3ecb07136f56ef62" Namespace="kube-system" Pod="coredns-6f6b679f8f-vspcs" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--vspcs-eth0" Dec 16 09:40:54.494336 containerd[1495]: 2024-12-16 09:40:54.470 [INFO][4850] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ae6e22275410005d85ca6e05907e497210ef8001435c130b3ecb07136f56ef62" Namespace="kube-system" Pod="coredns-6f6b679f8f-vspcs" 
WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--vspcs-eth0" Dec 16 09:40:54.494336 containerd[1495]: 2024-12-16 09:40:54.471 [INFO][4850] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ae6e22275410005d85ca6e05907e497210ef8001435c130b3ecb07136f56ef62" Namespace="kube-system" Pod="coredns-6f6b679f8f-vspcs" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--vspcs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--vspcs-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"2b1e1fb6-74af-4597-8a80-d045a1736cc2", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 39, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-1bd0c0376a", ContainerID:"ae6e22275410005d85ca6e05907e497210ef8001435c130b3ecb07136f56ef62", Pod:"coredns-6f6b679f8f-vspcs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif65b6e7a71f", MAC:"32:8a:b6:fa:31:6c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:40:54.494336 containerd[1495]: 2024-12-16 09:40:54.485 [INFO][4850] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ae6e22275410005d85ca6e05907e497210ef8001435c130b3ecb07136f56ef62" Namespace="kube-system" Pod="coredns-6f6b679f8f-vspcs" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--vspcs-eth0" Dec 16 09:40:54.541083 containerd[1495]: time="2024-12-16T09:40:54.539716146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 16 09:40:54.541083 containerd[1495]: time="2024-12-16T09:40:54.540349632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 16 09:40:54.541083 containerd[1495]: time="2024-12-16T09:40:54.540366705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:40:54.541083 containerd[1495]: time="2024-12-16T09:40:54.540450288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:40:54.568019 systemd[1]: Started cri-containerd-ae6e22275410005d85ca6e05907e497210ef8001435c130b3ecb07136f56ef62.scope - libcontainer container ae6e22275410005d85ca6e05907e497210ef8001435c130b3ecb07136f56ef62. Dec 16 09:40:54.656419 containerd[1495]: time="2024-12-16T09:40:54.656370701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vspcs,Uid:2b1e1fb6-74af-4597-8a80-d045a1736cc2,Namespace:kube-system,Attempt:1,} returns sandbox id \"ae6e22275410005d85ca6e05907e497210ef8001435c130b3ecb07136f56ef62\"" Dec 16 09:40:54.659826 containerd[1495]: time="2024-12-16T09:40:54.659681100Z" level=info msg="CreateContainer within sandbox \"ae6e22275410005d85ca6e05907e497210ef8001435c130b3ecb07136f56ef62\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 09:40:54.692942 containerd[1495]: time="2024-12-16T09:40:54.692857543Z" level=info msg="CreateContainer within sandbox \"ae6e22275410005d85ca6e05907e497210ef8001435c130b3ecb07136f56ef62\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bd46a05d5a78b67c8eb8edfcfcaaf51f6ea7512d2047c89c52a00c01b442a7fd\"" Dec 16 09:40:54.694809 containerd[1495]: time="2024-12-16T09:40:54.694204200Z" level=info msg="StartContainer for \"bd46a05d5a78b67c8eb8edfcfcaaf51f6ea7512d2047c89c52a00c01b442a7fd\"" Dec 16 09:40:54.723049 systemd[1]: Started cri-containerd-bd46a05d5a78b67c8eb8edfcfcaaf51f6ea7512d2047c89c52a00c01b442a7fd.scope - libcontainer container bd46a05d5a78b67c8eb8edfcfcaaf51f6ea7512d2047c89c52a00c01b442a7fd. Dec 16 09:40:54.756015 containerd[1495]: time="2024-12-16T09:40:54.755973131Z" level=info msg="StartContainer for \"bd46a05d5a78b67c8eb8edfcfcaaf51f6ea7512d2047c89c52a00c01b442a7fd\" returns successfully" Dec 16 09:40:55.144151 containerd[1495]: time="2024-12-16T09:40:55.144068692Z" level=info msg="StopPodSandbox for \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\"" Dec 16 09:40:55.246199 kubelet[2715]: I1216 09:40:55.245214 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8fm2x" podStartSLOduration=4.2451897259999996 podStartE2EDuration="4.245189726s" podCreationTimestamp="2024-12-16 09:40:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-16 09:40:54.601105051 +0000 UTC m=+78.602560678" watchObservedRunningTime="2024-12-16 09:40:55.245189726 +0000 UTC m=+79.246645363" Dec 16 09:40:55.309414 containerd[1495]: 2024-12-16 09:40:55.240 [INFO][5017] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" Dec 16 09:40:55.309414 containerd[1495]: 2024-12-16 09:40:55.240 [INFO][5017] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" iface="eth0" netns="/var/run/netns/cni-6ce65b72-d94b-c73f-614f-a7eef5d58dfc" Dec 16 09:40:55.309414 containerd[1495]: 2024-12-16 09:40:55.241 [INFO][5017] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" iface="eth0" netns="/var/run/netns/cni-6ce65b72-d94b-c73f-614f-a7eef5d58dfc" Dec 16 09:40:55.309414 containerd[1495]: 2024-12-16 09:40:55.241 [INFO][5017] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" iface="eth0" netns="/var/run/netns/cni-6ce65b72-d94b-c73f-614f-a7eef5d58dfc" Dec 16 09:40:55.309414 containerd[1495]: 2024-12-16 09:40:55.241 [INFO][5017] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" Dec 16 09:40:55.309414 containerd[1495]: 2024-12-16 09:40:55.241 [INFO][5017] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" Dec 16 09:40:55.309414 containerd[1495]: 2024-12-16 09:40:55.290 [INFO][5047] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" HandleID="k8s-pod-network.035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--kmszk-eth0" Dec 16 09:40:55.309414 containerd[1495]: 2024-12-16 09:40:55.290 [INFO][5047] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:40:55.309414 containerd[1495]: 2024-12-16 09:40:55.290 [INFO][5047] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:40:55.309414 containerd[1495]: 2024-12-16 09:40:55.298 [WARNING][5047] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" HandleID="k8s-pod-network.035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--kmszk-eth0" Dec 16 09:40:55.309414 containerd[1495]: 2024-12-16 09:40:55.298 [INFO][5047] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" HandleID="k8s-pod-network.035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--kmszk-eth0" Dec 16 09:40:55.309414 containerd[1495]: 2024-12-16 09:40:55.300 [INFO][5047] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:40:55.309414 containerd[1495]: 2024-12-16 09:40:55.306 [INFO][5017] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" Dec 16 09:40:55.311631 containerd[1495]: time="2024-12-16T09:40:55.310885048Z" level=info msg="TearDown network for sandbox \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\" successfully" Dec 16 09:40:55.311631 containerd[1495]: time="2024-12-16T09:40:55.310913431Z" level=info msg="StopPodSandbox for \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\" returns successfully" Dec 16 09:40:55.314025 containerd[1495]: time="2024-12-16T09:40:55.313990600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kmszk,Uid:a579e978-3399-4d3d-9c17-650bcb6672c6,Namespace:kube-system,Attempt:1,}" Dec 16 09:40:55.317536 systemd[1]: run-netns-cni\x2d6ce65b72\x2dd94b\x2dc73f\x2d614f\x2da7eef5d58dfc.mount: Deactivated successfully. 
Dec 16 09:40:55.508765 systemd-networkd[1393]: cali39619eaa86f: Link UP Dec 16 09:40:55.513082 systemd-networkd[1393]: cali39619eaa86f: Gained carrier Dec 16 09:40:55.533681 containerd[1495]: 2024-12-16 09:40:55.406 [INFO][5093] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 09:40:55.533681 containerd[1495]: 2024-12-16 09:40:55.423 [INFO][5093] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--kmszk-eth0 coredns-6f6b679f8f- kube-system a579e978-3399-4d3d-9c17-650bcb6672c6 1016 0 2024-12-16 09:39:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-1-4-1bd0c0376a coredns-6f6b679f8f-kmszk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali39619eaa86f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="986605d458042b9f4f91071298c4398dba6b511d7f59d45e470e2040874dfd7d" Namespace="kube-system" Pod="coredns-6f6b679f8f-kmszk" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--kmszk-" Dec 16 09:40:55.533681 containerd[1495]: 2024-12-16 09:40:55.423 [INFO][5093] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="986605d458042b9f4f91071298c4398dba6b511d7f59d45e470e2040874dfd7d" Namespace="kube-system" Pod="coredns-6f6b679f8f-kmszk" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--kmszk-eth0" Dec 16 09:40:55.533681 containerd[1495]: 2024-12-16 09:40:55.455 [INFO][5104] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="986605d458042b9f4f91071298c4398dba6b511d7f59d45e470e2040874dfd7d" HandleID="k8s-pod-network.986605d458042b9f4f91071298c4398dba6b511d7f59d45e470e2040874dfd7d" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--kmszk-eth0" Dec 16 09:40:55.533681 containerd[1495]: 2024-12-16 09:40:55.466 [INFO][5104] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="986605d458042b9f4f91071298c4398dba6b511d7f59d45e470e2040874dfd7d" HandleID="k8s-pod-network.986605d458042b9f4f91071298c4398dba6b511d7f59d45e470e2040874dfd7d" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--kmszk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319020), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-1-4-1bd0c0376a", "pod":"coredns-6f6b679f8f-kmszk", "timestamp":"2024-12-16 09:40:55.455922803 +0000 UTC"}, Hostname:"ci-4081-2-1-4-1bd0c0376a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 09:40:55.533681 containerd[1495]: 2024-12-16 09:40:55.466 [INFO][5104] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:40:55.533681 containerd[1495]: 2024-12-16 09:40:55.466 [INFO][5104] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 16 09:40:55.533681 containerd[1495]: 2024-12-16 09:40:55.466 [INFO][5104] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-4-1bd0c0376a' Dec 16 09:40:55.533681 containerd[1495]: 2024-12-16 09:40:55.469 [INFO][5104] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.986605d458042b9f4f91071298c4398dba6b511d7f59d45e470e2040874dfd7d" host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:55.533681 containerd[1495]: 2024-12-16 09:40:55.475 [INFO][5104] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:55.533681 containerd[1495]: 2024-12-16 09:40:55.479 [INFO][5104] ipam/ipam.go 489: Trying affinity for 192.168.121.0/26 host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:55.533681 containerd[1495]: 2024-12-16 09:40:55.481 [INFO][5104] ipam/ipam.go 155: Attempting to load block cidr=192.168.121.0/26 host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:55.533681 containerd[1495]: 2024-12-16 09:40:55.486 [INFO][5104] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.121.0/26 host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:55.533681 containerd[1495]: 2024-12-16 09:40:55.486 [INFO][5104] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.121.0/26 handle="k8s-pod-network.986605d458042b9f4f91071298c4398dba6b511d7f59d45e470e2040874dfd7d" host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:55.533681 containerd[1495]: 2024-12-16 09:40:55.487 [INFO][5104] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.986605d458042b9f4f91071298c4398dba6b511d7f59d45e470e2040874dfd7d Dec 16 09:40:55.533681 containerd[1495]: 2024-12-16 09:40:55.492 [INFO][5104] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.121.0/26 handle="k8s-pod-network.986605d458042b9f4f91071298c4398dba6b511d7f59d45e470e2040874dfd7d" host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:55.533681 containerd[1495]: 2024-12-16 09:40:55.498 [INFO][5104] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.121.2/26] block=192.168.121.0/26 handle="k8s-pod-network.986605d458042b9f4f91071298c4398dba6b511d7f59d45e470e2040874dfd7d" host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:55.533681 containerd[1495]: 2024-12-16 09:40:55.498 [INFO][5104] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.121.2/26] handle="k8s-pod-network.986605d458042b9f4f91071298c4398dba6b511d7f59d45e470e2040874dfd7d" host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:55.533681 containerd[1495]: 2024-12-16 09:40:55.498 [INFO][5104] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 16 09:40:55.533681 containerd[1495]: 2024-12-16 09:40:55.498 [INFO][5104] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.121.2/26] IPv6=[] ContainerID="986605d458042b9f4f91071298c4398dba6b511d7f59d45e470e2040874dfd7d" HandleID="k8s-pod-network.986605d458042b9f4f91071298c4398dba6b511d7f59d45e470e2040874dfd7d" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--kmszk-eth0" Dec 16 09:40:55.534915 containerd[1495]: 2024-12-16 09:40:55.504 [INFO][5093] cni-plugin/k8s.go 386: Populated endpoint ContainerID="986605d458042b9f4f91071298c4398dba6b511d7f59d45e470e2040874dfd7d" Namespace="kube-system" Pod="coredns-6f6b679f8f-kmszk" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--kmszk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--kmszk-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a579e978-3399-4d3d-9c17-650bcb6672c6", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 39, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-1bd0c0376a", ContainerID:"", Pod:"coredns-6f6b679f8f-kmszk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali39619eaa86f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:40:55.534915 containerd[1495]: 2024-12-16 09:40:55.504 [INFO][5093] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.121.2/32] ContainerID="986605d458042b9f4f91071298c4398dba6b511d7f59d45e470e2040874dfd7d" Namespace="kube-system" Pod="coredns-6f6b679f8f-kmszk" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--kmszk-eth0" Dec 16 09:40:55.534915 containerd[1495]: 2024-12-16 09:40:55.504 [INFO][5093] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali39619eaa86f ContainerID="986605d458042b9f4f91071298c4398dba6b511d7f59d45e470e2040874dfd7d" Namespace="kube-system" Pod="coredns-6f6b679f8f-kmszk" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--kmszk-eth0" Dec 16 09:40:55.534915 containerd[1495]: 2024-12-16 09:40:55.511 [INFO][5093] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="986605d458042b9f4f91071298c4398dba6b511d7f59d45e470e2040874dfd7d" Namespace="kube-system" Pod="coredns-6f6b679f8f-kmszk" 
WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--kmszk-eth0" Dec 16 09:40:55.534915 containerd[1495]: 2024-12-16 09:40:55.511 [INFO][5093] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="986605d458042b9f4f91071298c4398dba6b511d7f59d45e470e2040874dfd7d" Namespace="kube-system" Pod="coredns-6f6b679f8f-kmszk" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--kmszk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--kmszk-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a579e978-3399-4d3d-9c17-650bcb6672c6", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 39, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-1bd0c0376a", ContainerID:"986605d458042b9f4f91071298c4398dba6b511d7f59d45e470e2040874dfd7d", Pod:"coredns-6f6b679f8f-kmszk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali39619eaa86f", MAC:"42:1c:b5:f7:f8:c4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:40:55.534915 containerd[1495]: 2024-12-16 09:40:55.523 [INFO][5093] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="986605d458042b9f4f91071298c4398dba6b511d7f59d45e470e2040874dfd7d" Namespace="kube-system" Pod="coredns-6f6b679f8f-kmszk" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--kmszk-eth0" Dec 16 09:40:55.607523 kubelet[2715]: I1216 09:40:55.606965 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-vspcs" podStartSLOduration=74.606911954 podStartE2EDuration="1m14.606911954s" podCreationTimestamp="2024-12-16 09:39:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-16 09:40:55.604919877 +0000 UTC m=+79.606375515" watchObservedRunningTime="2024-12-16 09:40:55.606911954 +0000 UTC m=+79.608367592" Dec 16 09:40:55.645048 containerd[1495]: time="2024-12-16T09:40:55.644836304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 16 09:40:55.648992 containerd[1495]: time="2024-12-16T09:40:55.648925052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 16 09:40:55.652737 containerd[1495]: time="2024-12-16T09:40:55.649420160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:40:55.658187 containerd[1495]: time="2024-12-16T09:40:55.657836186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:40:55.714615 systemd[1]: Started cri-containerd-986605d458042b9f4f91071298c4398dba6b511d7f59d45e470e2040874dfd7d.scope - libcontainer container 986605d458042b9f4f91071298c4398dba6b511d7f59d45e470e2040874dfd7d. Dec 16 09:40:55.776642 containerd[1495]: time="2024-12-16T09:40:55.776537188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kmszk,Uid:a579e978-3399-4d3d-9c17-650bcb6672c6,Namespace:kube-system,Attempt:1,} returns sandbox id \"986605d458042b9f4f91071298c4398dba6b511d7f59d45e470e2040874dfd7d\"" Dec 16 09:40:55.784127 containerd[1495]: time="2024-12-16T09:40:55.784040296Z" level=info msg="CreateContainer within sandbox \"986605d458042b9f4f91071298c4398dba6b511d7f59d45e470e2040874dfd7d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 09:40:55.799484 containerd[1495]: time="2024-12-16T09:40:55.799411539Z" level=info msg="CreateContainer within sandbox \"986605d458042b9f4f91071298c4398dba6b511d7f59d45e470e2040874dfd7d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"12791e3bab0e1f0076ac710e19c80cb95aab2e55fae346effa48989005a7e041\"" Dec 16 09:40:55.801550 containerd[1495]: time="2024-12-16T09:40:55.800086496Z" level=info msg="StartContainer for \"12791e3bab0e1f0076ac710e19c80cb95aab2e55fae346effa48989005a7e041\"" Dec 16 09:40:55.878142 systemd[1]: Started cri-containerd-12791e3bab0e1f0076ac710e19c80cb95aab2e55fae346effa48989005a7e041.scope - libcontainer container 12791e3bab0e1f0076ac710e19c80cb95aab2e55fae346effa48989005a7e041. 
Dec 16 09:40:55.916522 containerd[1495]: time="2024-12-16T09:40:55.916480902Z" level=info msg="StartContainer for \"12791e3bab0e1f0076ac710e19c80cb95aab2e55fae346effa48989005a7e041\" returns successfully" Dec 16 09:40:55.990777 kernel: bpftool[5262]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 16 09:40:56.303494 systemd-networkd[1393]: vxlan.calico: Link UP Dec 16 09:40:56.303668 systemd-networkd[1393]: vxlan.calico: Gained carrier Dec 16 09:40:56.330895 systemd-networkd[1393]: calif65b6e7a71f: Gained IPv6LL Dec 16 09:40:56.596710 kubelet[2715]: I1216 09:40:56.596431 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-kmszk" podStartSLOduration=75.59641488 podStartE2EDuration="1m15.59641488s" podCreationTimestamp="2024-12-16 09:39:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-16 09:40:56.594658499 +0000 UTC m=+80.596114137" watchObservedRunningTime="2024-12-16 09:40:56.59641488 +0000 UTC m=+80.597870518" Dec 16 09:40:56.840442 systemd-networkd[1393]: cali39619eaa86f: Gained IPv6LL Dec 16 09:40:57.736117 systemd-networkd[1393]: vxlan.calico: Gained IPv6LL Dec 16 09:40:58.143602 containerd[1495]: time="2024-12-16T09:40:58.143394917Z" level=info msg="StopPodSandbox for \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\"" Dec 16 09:40:58.331195 containerd[1495]: 2024-12-16 09:40:58.263 [INFO][5353] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" Dec 16 09:40:58.331195 containerd[1495]: 2024-12-16 09:40:58.264 [INFO][5353] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" iface="eth0" netns="/var/run/netns/cni-95b4d2b4-01c2-b6ad-91d7-aaa2296dba85" Dec 16 09:40:58.331195 containerd[1495]: 2024-12-16 09:40:58.264 [INFO][5353] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" iface="eth0" netns="/var/run/netns/cni-95b4d2b4-01c2-b6ad-91d7-aaa2296dba85" Dec 16 09:40:58.331195 containerd[1495]: 2024-12-16 09:40:58.265 [INFO][5353] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" iface="eth0" netns="/var/run/netns/cni-95b4d2b4-01c2-b6ad-91d7-aaa2296dba85" Dec 16 09:40:58.331195 containerd[1495]: 2024-12-16 09:40:58.265 [INFO][5353] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" Dec 16 09:40:58.331195 containerd[1495]: 2024-12-16 09:40:58.265 [INFO][5353] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" Dec 16 09:40:58.331195 containerd[1495]: 2024-12-16 09:40:58.304 [INFO][5359] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" HandleID="k8s-pod-network.e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--dvxqf-eth0" Dec 16 09:40:58.331195 containerd[1495]: 2024-12-16 09:40:58.305 [INFO][5359] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 16 09:40:58.331195 containerd[1495]: 2024-12-16 09:40:58.305 [INFO][5359] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:40:58.331195 containerd[1495]: 2024-12-16 09:40:58.321 [WARNING][5359] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" HandleID="k8s-pod-network.e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--dvxqf-eth0" Dec 16 09:40:58.331195 containerd[1495]: 2024-12-16 09:40:58.321 [INFO][5359] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" HandleID="k8s-pod-network.e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--dvxqf-eth0" Dec 16 09:40:58.331195 containerd[1495]: 2024-12-16 09:40:58.323 [INFO][5359] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:40:58.331195 containerd[1495]: 2024-12-16 09:40:58.327 [INFO][5353] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" Dec 16 09:40:58.333881 containerd[1495]: time="2024-12-16T09:40:58.332826333Z" level=info msg="TearDown network for sandbox \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\" successfully" Dec 16 09:40:58.333881 containerd[1495]: time="2024-12-16T09:40:58.332851812Z" level=info msg="StopPodSandbox for \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\" returns successfully" Dec 16 09:40:58.333881 containerd[1495]: time="2024-12-16T09:40:58.333539393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58b65f65c5-dvxqf,Uid:e8eba98c-945c-4206-8249-239323f54417,Namespace:calico-apiserver,Attempt:1,}" Dec 16 09:40:58.336013 systemd[1]: run-netns-cni\x2d95b4d2b4\x2d01c2\x2db6ad\x2d91d7\x2daaa2296dba85.mount: Deactivated successfully. 
Dec 16 09:40:58.489087 systemd-networkd[1393]: calibaa482b5c3b: Link UP Dec 16 09:40:58.490446 systemd-networkd[1393]: calibaa482b5c3b: Gained carrier Dec 16 09:40:58.511855 containerd[1495]: 2024-12-16 09:40:58.400 [INFO][5366] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--dvxqf-eth0 calico-apiserver-58b65f65c5- calico-apiserver e8eba98c-945c-4206-8249-239323f54417 1048 0 2024-12-16 09:39:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58b65f65c5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-1-4-1bd0c0376a calico-apiserver-58b65f65c5-dvxqf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibaa482b5c3b [] []}} ContainerID="172de65c1b61767bb5176fb3f4d2a34549e98fc8c59a68e57bc913d3a0bf3f8d" Namespace="calico-apiserver" Pod="calico-apiserver-58b65f65c5-dvxqf" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--dvxqf-" Dec 16 09:40:58.511855 containerd[1495]: 2024-12-16 09:40:58.400 [INFO][5366] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="172de65c1b61767bb5176fb3f4d2a34549e98fc8c59a68e57bc913d3a0bf3f8d" Namespace="calico-apiserver" Pod="calico-apiserver-58b65f65c5-dvxqf" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--dvxqf-eth0" Dec 16 09:40:58.511855 containerd[1495]: 2024-12-16 09:40:58.438 [INFO][5377] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="172de65c1b61767bb5176fb3f4d2a34549e98fc8c59a68e57bc913d3a0bf3f8d" HandleID="k8s-pod-network.172de65c1b61767bb5176fb3f4d2a34549e98fc8c59a68e57bc913d3a0bf3f8d" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--dvxqf-eth0" Dec 16 09:40:58.511855 containerd[1495]: 2024-12-16 09:40:58.450 [INFO][5377] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="172de65c1b61767bb5176fb3f4d2a34549e98fc8c59a68e57bc913d3a0bf3f8d" HandleID="k8s-pod-network.172de65c1b61767bb5176fb3f4d2a34549e98fc8c59a68e57bc913d3a0bf3f8d" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--dvxqf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000338d30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-1-4-1bd0c0376a", "pod":"calico-apiserver-58b65f65c5-dvxqf", "timestamp":"2024-12-16 09:40:58.438415969 +0000 UTC"}, Hostname:"ci-4081-2-1-4-1bd0c0376a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 09:40:58.511855 containerd[1495]: 2024-12-16 09:40:58.450 [INFO][5377] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:40:58.511855 containerd[1495]: 2024-12-16 09:40:58.450 [INFO][5377] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 16 09:40:58.511855 containerd[1495]: 2024-12-16 09:40:58.450 [INFO][5377] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-4-1bd0c0376a' Dec 16 09:40:58.511855 containerd[1495]: 2024-12-16 09:40:58.452 [INFO][5377] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.172de65c1b61767bb5176fb3f4d2a34549e98fc8c59a68e57bc913d3a0bf3f8d" host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:58.511855 containerd[1495]: 2024-12-16 09:40:58.455 [INFO][5377] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:58.511855 containerd[1495]: 2024-12-16 09:40:58.461 [INFO][5377] ipam/ipam.go 489: Trying affinity for 192.168.121.0/26 host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:58.511855 containerd[1495]: 2024-12-16 09:40:58.463 [INFO][5377] ipam/ipam.go 155: Attempting to load block cidr=192.168.121.0/26 host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:58.511855 containerd[1495]: 2024-12-16 09:40:58.465 [INFO][5377] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.121.0/26 host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:58.511855 containerd[1495]: 2024-12-16 09:40:58.466 [INFO][5377] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.121.0/26 handle="k8s-pod-network.172de65c1b61767bb5176fb3f4d2a34549e98fc8c59a68e57bc913d3a0bf3f8d" host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:58.511855 containerd[1495]: 2024-12-16 09:40:58.468 [INFO][5377] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.172de65c1b61767bb5176fb3f4d2a34549e98fc8c59a68e57bc913d3a0bf3f8d Dec 16 09:40:58.511855 containerd[1495]: 2024-12-16 09:40:58.473 [INFO][5377] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.121.0/26 handle="k8s-pod-network.172de65c1b61767bb5176fb3f4d2a34549e98fc8c59a68e57bc913d3a0bf3f8d" host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:58.511855 containerd[1495]: 2024-12-16 09:40:58.479 [INFO][5377] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.121.3/26] block=192.168.121.0/26 handle="k8s-pod-network.172de65c1b61767bb5176fb3f4d2a34549e98fc8c59a68e57bc913d3a0bf3f8d" host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:58.511855 containerd[1495]: 2024-12-16 09:40:58.479 [INFO][5377] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.121.3/26] handle="k8s-pod-network.172de65c1b61767bb5176fb3f4d2a34549e98fc8c59a68e57bc913d3a0bf3f8d" host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:58.511855 containerd[1495]: 2024-12-16 09:40:58.479 [INFO][5377] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 16 09:40:58.511855 containerd[1495]: 2024-12-16 09:40:58.479 [INFO][5377] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.121.3/26] IPv6=[] ContainerID="172de65c1b61767bb5176fb3f4d2a34549e98fc8c59a68e57bc913d3a0bf3f8d" HandleID="k8s-pod-network.172de65c1b61767bb5176fb3f4d2a34549e98fc8c59a68e57bc913d3a0bf3f8d" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--dvxqf-eth0" Dec 16 09:40:58.513108 containerd[1495]: 2024-12-16 09:40:58.483 [INFO][5366] cni-plugin/k8s.go 386: Populated endpoint ContainerID="172de65c1b61767bb5176fb3f4d2a34549e98fc8c59a68e57bc913d3a0bf3f8d" Namespace="calico-apiserver" Pod="calico-apiserver-58b65f65c5-dvxqf" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--dvxqf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--dvxqf-eth0", GenerateName:"calico-apiserver-58b65f65c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"e8eba98c-945c-4206-8249-239323f54417", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58b65f65c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-1bd0c0376a", ContainerID:"", Pod:"calico-apiserver-58b65f65c5-dvxqf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibaa482b5c3b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:40:58.513108 containerd[1495]: 2024-12-16 09:40:58.484 [INFO][5366] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.121.3/32] ContainerID="172de65c1b61767bb5176fb3f4d2a34549e98fc8c59a68e57bc913d3a0bf3f8d" Namespace="calico-apiserver" Pod="calico-apiserver-58b65f65c5-dvxqf" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--dvxqf-eth0" Dec 16 09:40:58.513108 containerd[1495]: 2024-12-16 09:40:58.484 [INFO][5366] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibaa482b5c3b ContainerID="172de65c1b61767bb5176fb3f4d2a34549e98fc8c59a68e57bc913d3a0bf3f8d" Namespace="calico-apiserver" Pod="calico-apiserver-58b65f65c5-dvxqf" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--dvxqf-eth0" Dec 16 09:40:58.513108 containerd[1495]: 2024-12-16 09:40:58.490 [INFO][5366] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="172de65c1b61767bb5176fb3f4d2a34549e98fc8c59a68e57bc913d3a0bf3f8d" Namespace="calico-apiserver" Pod="calico-apiserver-58b65f65c5-dvxqf" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--dvxqf-eth0" Dec 16 09:40:58.513108 containerd[1495]: 2024-12-16 09:40:58.492 [INFO][5366] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="172de65c1b61767bb5176fb3f4d2a34549e98fc8c59a68e57bc913d3a0bf3f8d" Namespace="calico-apiserver" Pod="calico-apiserver-58b65f65c5-dvxqf" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--dvxqf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--dvxqf-eth0", GenerateName:"calico-apiserver-58b65f65c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"e8eba98c-945c-4206-8249-239323f54417", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58b65f65c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-1bd0c0376a", ContainerID:"172de65c1b61767bb5176fb3f4d2a34549e98fc8c59a68e57bc913d3a0bf3f8d", Pod:"calico-apiserver-58b65f65c5-dvxqf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibaa482b5c3b", MAC:"52:f7:cf:2a:46:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:40:58.513108 containerd[1495]: 2024-12-16 09:40:58.507 [INFO][5366] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="172de65c1b61767bb5176fb3f4d2a34549e98fc8c59a68e57bc913d3a0bf3f8d" Namespace="calico-apiserver" Pod="calico-apiserver-58b65f65c5-dvxqf" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--dvxqf-eth0" Dec 16 09:40:58.540533 containerd[1495]: time="2024-12-16T09:40:58.540243839Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 16 09:40:58.540533 containerd[1495]: time="2024-12-16T09:40:58.540306821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 16 09:40:58.540533 containerd[1495]: time="2024-12-16T09:40:58.540338012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:40:58.540533 containerd[1495]: time="2024-12-16T09:40:58.540461540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:40:58.572899 systemd[1]: Started cri-containerd-172de65c1b61767bb5176fb3f4d2a34549e98fc8c59a68e57bc913d3a0bf3f8d.scope - libcontainer container 172de65c1b61767bb5176fb3f4d2a34549e98fc8c59a68e57bc913d3a0bf3f8d. 
Dec 16 09:40:58.630225 containerd[1495]: time="2024-12-16T09:40:58.629580340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58b65f65c5-dvxqf,Uid:e8eba98c-945c-4206-8249-239323f54417,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"172de65c1b61767bb5176fb3f4d2a34549e98fc8c59a68e57bc913d3a0bf3f8d\"" Dec 16 09:40:58.638684 containerd[1495]: time="2024-12-16T09:40:58.638623032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 16 09:40:59.141869 containerd[1495]: time="2024-12-16T09:40:59.141783444Z" level=info msg="StopPodSandbox for \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\"" Dec 16 09:40:59.252502 containerd[1495]: 2024-12-16 09:40:59.191 [INFO][5449] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" Dec 16 09:40:59.252502 containerd[1495]: 2024-12-16 09:40:59.192 [INFO][5449] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" iface="eth0" netns="/var/run/netns/cni-90563714-8862-9169-3da7-e42a8453dac3" Dec 16 09:40:59.252502 containerd[1495]: 2024-12-16 09:40:59.192 [INFO][5449] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" iface="eth0" netns="/var/run/netns/cni-90563714-8862-9169-3da7-e42a8453dac3" Dec 16 09:40:59.252502 containerd[1495]: 2024-12-16 09:40:59.192 [INFO][5449] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" iface="eth0" netns="/var/run/netns/cni-90563714-8862-9169-3da7-e42a8453dac3" Dec 16 09:40:59.252502 containerd[1495]: 2024-12-16 09:40:59.192 [INFO][5449] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" Dec 16 09:40:59.252502 containerd[1495]: 2024-12-16 09:40:59.193 [INFO][5449] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" Dec 16 09:40:59.252502 containerd[1495]: 2024-12-16 09:40:59.234 [INFO][5455] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" HandleID="k8s-pod-network.4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--7tp4b-eth0" Dec 16 09:40:59.252502 containerd[1495]: 2024-12-16 09:40:59.234 [INFO][5455] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:40:59.252502 containerd[1495]: 2024-12-16 09:40:59.234 [INFO][5455] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:40:59.252502 containerd[1495]: 2024-12-16 09:40:59.242 [WARNING][5455] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" HandleID="k8s-pod-network.4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--7tp4b-eth0" Dec 16 09:40:59.252502 containerd[1495]: 2024-12-16 09:40:59.242 [INFO][5455] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" HandleID="k8s-pod-network.4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--7tp4b-eth0" Dec 16 09:40:59.252502 containerd[1495]: 2024-12-16 09:40:59.244 [INFO][5455] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:40:59.252502 containerd[1495]: 2024-12-16 09:40:59.248 [INFO][5449] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" Dec 16 09:40:59.255279 containerd[1495]: time="2024-12-16T09:40:59.253369115Z" level=info msg="TearDown network for sandbox \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\" successfully" Dec 16 09:40:59.255279 containerd[1495]: time="2024-12-16T09:40:59.253522703Z" level=info msg="StopPodSandbox for \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\" returns successfully" Dec 16 09:40:59.255385 containerd[1495]: time="2024-12-16T09:40:59.255193148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58b65f65c5-7tp4b,Uid:ccf88f28-2dde-42dd-bdda-69127b53bf8a,Namespace:calico-apiserver,Attempt:1,}" Dec 16 09:40:59.337675 systemd[1]: run-netns-cni\x2d90563714\x2d8862\x2d9169\x2d3da7\x2de42a8453dac3.mount: Deactivated successfully. 
Dec 16 09:40:59.413159 systemd-networkd[1393]: cali19f94bd291c: Link UP Dec 16 09:40:59.414493 systemd-networkd[1393]: cali19f94bd291c: Gained carrier Dec 16 09:40:59.436766 containerd[1495]: 2024-12-16 09:40:59.319 [INFO][5462] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--7tp4b-eth0 calico-apiserver-58b65f65c5- calico-apiserver ccf88f28-2dde-42dd-bdda-69127b53bf8a 1054 0 2024-12-16 09:39:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58b65f65c5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-1-4-1bd0c0376a calico-apiserver-58b65f65c5-7tp4b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali19f94bd291c [] []}} ContainerID="54df8b24e31e5ba438d989fa3e7a0932289d98f3875bf1682f18d57b67ef414e" Namespace="calico-apiserver" Pod="calico-apiserver-58b65f65c5-7tp4b" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--7tp4b-" Dec 16 09:40:59.436766 containerd[1495]: 2024-12-16 09:40:59.319 [INFO][5462] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="54df8b24e31e5ba438d989fa3e7a0932289d98f3875bf1682f18d57b67ef414e" Namespace="calico-apiserver" Pod="calico-apiserver-58b65f65c5-7tp4b" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--7tp4b-eth0" Dec 16 09:40:59.436766 containerd[1495]: 2024-12-16 09:40:59.367 [INFO][5472] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="54df8b24e31e5ba438d989fa3e7a0932289d98f3875bf1682f18d57b67ef414e" HandleID="k8s-pod-network.54df8b24e31e5ba438d989fa3e7a0932289d98f3875bf1682f18d57b67ef414e" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--7tp4b-eth0" Dec 16 09:40:59.436766 containerd[1495]: 2024-12-16 09:40:59.375 [INFO][5472] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="54df8b24e31e5ba438d989fa3e7a0932289d98f3875bf1682f18d57b67ef414e" HandleID="k8s-pod-network.54df8b24e31e5ba438d989fa3e7a0932289d98f3875bf1682f18d57b67ef414e" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--7tp4b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000292820), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-1-4-1bd0c0376a", "pod":"calico-apiserver-58b65f65c5-7tp4b", "timestamp":"2024-12-16 09:40:59.367529834 +0000 UTC"}, Hostname:"ci-4081-2-1-4-1bd0c0376a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 09:40:59.436766 containerd[1495]: 2024-12-16 09:40:59.376 [INFO][5472] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:40:59.436766 containerd[1495]: 2024-12-16 09:40:59.376 [INFO][5472] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 16 09:40:59.436766 containerd[1495]: 2024-12-16 09:40:59.376 [INFO][5472] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-4-1bd0c0376a' Dec 16 09:40:59.436766 containerd[1495]: 2024-12-16 09:40:59.378 [INFO][5472] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.54df8b24e31e5ba438d989fa3e7a0932289d98f3875bf1682f18d57b67ef414e" host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:59.436766 containerd[1495]: 2024-12-16 09:40:59.382 [INFO][5472] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:59.436766 containerd[1495]: 2024-12-16 09:40:59.386 [INFO][5472] ipam/ipam.go 489: Trying affinity for 192.168.121.0/26 host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:59.436766 containerd[1495]: 2024-12-16 09:40:59.388 [INFO][5472] ipam/ipam.go 155: Attempting to load block cidr=192.168.121.0/26 host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:59.436766 containerd[1495]: 2024-12-16 09:40:59.390 [INFO][5472] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.121.0/26 host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:59.436766 containerd[1495]: 2024-12-16 09:40:59.390 [INFO][5472] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.121.0/26 handle="k8s-pod-network.54df8b24e31e5ba438d989fa3e7a0932289d98f3875bf1682f18d57b67ef414e" host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:59.436766 containerd[1495]: 2024-12-16 09:40:59.392 [INFO][5472] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.54df8b24e31e5ba438d989fa3e7a0932289d98f3875bf1682f18d57b67ef414e Dec 16 09:40:59.436766 containerd[1495]: 2024-12-16 09:40:59.396 [INFO][5472] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.121.0/26 handle="k8s-pod-network.54df8b24e31e5ba438d989fa3e7a0932289d98f3875bf1682f18d57b67ef414e" host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:59.436766 containerd[1495]: 2024-12-16 09:40:59.404 [INFO][5472] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.121.4/26] block=192.168.121.0/26 handle="k8s-pod-network.54df8b24e31e5ba438d989fa3e7a0932289d98f3875bf1682f18d57b67ef414e" host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:59.436766 containerd[1495]: 2024-12-16 09:40:59.404 [INFO][5472] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.121.4/26] handle="k8s-pod-network.54df8b24e31e5ba438d989fa3e7a0932289d98f3875bf1682f18d57b67ef414e" host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:40:59.436766 containerd[1495]: 2024-12-16 09:40:59.404 [INFO][5472] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 16 09:40:59.436766 containerd[1495]: 2024-12-16 09:40:59.405 [INFO][5472] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.121.4/26] IPv6=[] ContainerID="54df8b24e31e5ba438d989fa3e7a0932289d98f3875bf1682f18d57b67ef414e" HandleID="k8s-pod-network.54df8b24e31e5ba438d989fa3e7a0932289d98f3875bf1682f18d57b67ef414e" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--7tp4b-eth0" Dec 16 09:40:59.439677 containerd[1495]: 2024-12-16 09:40:59.410 [INFO][5462] cni-plugin/k8s.go 386: Populated endpoint ContainerID="54df8b24e31e5ba438d989fa3e7a0932289d98f3875bf1682f18d57b67ef414e" Namespace="calico-apiserver" Pod="calico-apiserver-58b65f65c5-7tp4b" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--7tp4b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--7tp4b-eth0", GenerateName:"calico-apiserver-58b65f65c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"ccf88f28-2dde-42dd-bdda-69127b53bf8a", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58b65f65c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-1bd0c0376a", ContainerID:"", Pod:"calico-apiserver-58b65f65c5-7tp4b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali19f94bd291c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:40:59.439677 containerd[1495]: 2024-12-16 09:40:59.410 [INFO][5462] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.121.4/32] ContainerID="54df8b24e31e5ba438d989fa3e7a0932289d98f3875bf1682f18d57b67ef414e" Namespace="calico-apiserver" Pod="calico-apiserver-58b65f65c5-7tp4b" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--7tp4b-eth0" Dec 16 09:40:59.439677 containerd[1495]: 2024-12-16 09:40:59.410 [INFO][5462] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali19f94bd291c ContainerID="54df8b24e31e5ba438d989fa3e7a0932289d98f3875bf1682f18d57b67ef414e" Namespace="calico-apiserver" Pod="calico-apiserver-58b65f65c5-7tp4b" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--7tp4b-eth0" Dec 16 09:40:59.439677 containerd[1495]: 2024-12-16 09:40:59.414 [INFO][5462] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="54df8b24e31e5ba438d989fa3e7a0932289d98f3875bf1682f18d57b67ef414e" Namespace="calico-apiserver" Pod="calico-apiserver-58b65f65c5-7tp4b" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--7tp4b-eth0" Dec 16 09:40:59.439677 containerd[1495]: 2024-12-16 09:40:59.414 [INFO][5462] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="54df8b24e31e5ba438d989fa3e7a0932289d98f3875bf1682f18d57b67ef414e" Namespace="calico-apiserver" Pod="calico-apiserver-58b65f65c5-7tp4b" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--7tp4b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--7tp4b-eth0", GenerateName:"calico-apiserver-58b65f65c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"ccf88f28-2dde-42dd-bdda-69127b53bf8a", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58b65f65c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-1bd0c0376a", ContainerID:"54df8b24e31e5ba438d989fa3e7a0932289d98f3875bf1682f18d57b67ef414e", Pod:"calico-apiserver-58b65f65c5-7tp4b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali19f94bd291c", MAC:"ea:15:59:94:f0:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:40:59.439677 containerd[1495]: 2024-12-16 09:40:59.429 [INFO][5462] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="54df8b24e31e5ba438d989fa3e7a0932289d98f3875bf1682f18d57b67ef414e" Namespace="calico-apiserver" Pod="calico-apiserver-58b65f65c5-7tp4b" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--7tp4b-eth0" Dec 16 09:40:59.479172 containerd[1495]: time="2024-12-16T09:40:59.478895068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 16 09:40:59.479172 containerd[1495]: time="2024-12-16T09:40:59.478977617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 16 09:40:59.479172 containerd[1495]: time="2024-12-16T09:40:59.478990783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:40:59.479836 containerd[1495]: time="2024-12-16T09:40:59.479705207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:40:59.508953 systemd[1]: Started cri-containerd-54df8b24e31e5ba438d989fa3e7a0932289d98f3875bf1682f18d57b67ef414e.scope - libcontainer container 54df8b24e31e5ba438d989fa3e7a0932289d98f3875bf1682f18d57b67ef414e. 
Dec 16 09:40:59.560946 containerd[1495]: time="2024-12-16T09:40:59.560886163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58b65f65c5-7tp4b,Uid:ccf88f28-2dde-42dd-bdda-69127b53bf8a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"54df8b24e31e5ba438d989fa3e7a0932289d98f3875bf1682f18d57b67ef414e\"" Dec 16 09:40:59.656281 systemd-networkd[1393]: calibaa482b5c3b: Gained IPv6LL Dec 16 09:41:01.448440 systemd-networkd[1393]: cali19f94bd291c: Gained IPv6LL Dec 16 09:41:01.760919 containerd[1495]: time="2024-12-16T09:41:01.760788863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:41:01.762888 containerd[1495]: time="2024-12-16T09:41:01.762842410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Dec 16 09:41:01.764924 containerd[1495]: time="2024-12-16T09:41:01.764885618Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:41:01.775503 containerd[1495]: time="2024-12-16T09:41:01.774980161Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:41:01.781609 containerd[1495]: time="2024-12-16T09:41:01.780190633Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.141509469s" Dec 16 09:41:01.781609 containerd[1495]: time="2024-12-16T09:41:01.780244497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 16 09:41:01.791959 containerd[1495]: time="2024-12-16T09:41:01.791901577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 16 09:41:01.795297 containerd[1495]: time="2024-12-16T09:41:01.795247448Z" level=info msg="CreateContainer within sandbox \"172de65c1b61767bb5176fb3f4d2a34549e98fc8c59a68e57bc913d3a0bf3f8d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 16 09:41:01.821348 containerd[1495]: time="2024-12-16T09:41:01.820654189Z" level=info msg="CreateContainer within sandbox \"172de65c1b61767bb5176fb3f4d2a34549e98fc8c59a68e57bc913d3a0bf3f8d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9f82a4c864fa641a06593892e2cf67127094a74d57267fbc43982bbe99b4ab2d\"" Dec 16 09:41:01.822644 containerd[1495]: time="2024-12-16T09:41:01.822559279Z" level=info msg="StartContainer for \"9f82a4c864fa641a06593892e2cf67127094a74d57267fbc43982bbe99b4ab2d\"" Dec 16 09:41:01.870893 systemd[1]: Started cri-containerd-9f82a4c864fa641a06593892e2cf67127094a74d57267fbc43982bbe99b4ab2d.scope - libcontainer container 9f82a4c864fa641a06593892e2cf67127094a74d57267fbc43982bbe99b4ab2d. 
Dec 16 09:41:01.915310 containerd[1495]: time="2024-12-16T09:41:01.915271579Z" level=info msg="StartContainer for \"9f82a4c864fa641a06593892e2cf67127094a74d57267fbc43982bbe99b4ab2d\" returns successfully" Dec 16 09:41:02.318245 containerd[1495]: time="2024-12-16T09:41:02.318167113Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:41:02.320795 containerd[1495]: time="2024-12-16T09:41:02.320708406Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 16 09:41:02.322494 containerd[1495]: time="2024-12-16T09:41:02.322455400Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 530.492403ms" Dec 16 09:41:02.322560 containerd[1495]: time="2024-12-16T09:41:02.322486931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 16 09:41:02.326534 containerd[1495]: time="2024-12-16T09:41:02.326496668Z" level=info msg="CreateContainer within sandbox \"54df8b24e31e5ba438d989fa3e7a0932289d98f3875bf1682f18d57b67ef414e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 16 09:41:02.351953 containerd[1495]: time="2024-12-16T09:41:02.351668858Z" level=info msg="CreateContainer within sandbox \"54df8b24e31e5ba438d989fa3e7a0932289d98f3875bf1682f18d57b67ef414e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e3c9e591e8e53f47d91a92ae068004820aa5e25210ad45d4cc0542720a99020a\"" Dec 16 09:41:02.352768 containerd[1495]: time="2024-12-16T09:41:02.352676650Z" level=info msg="StartContainer for \"e3c9e591e8e53f47d91a92ae068004820aa5e25210ad45d4cc0542720a99020a\"" Dec 16 09:41:02.392898 systemd[1]: Started cri-containerd-e3c9e591e8e53f47d91a92ae068004820aa5e25210ad45d4cc0542720a99020a.scope - libcontainer container e3c9e591e8e53f47d91a92ae068004820aa5e25210ad45d4cc0542720a99020a. 
Dec 16 09:41:02.457460 containerd[1495]: time="2024-12-16T09:41:02.457080427Z" level=info msg="StartContainer for \"e3c9e591e8e53f47d91a92ae068004820aa5e25210ad45d4cc0542720a99020a\" returns successfully" Dec 16 09:41:02.650434 kubelet[2715]: I1216 09:41:02.650079 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-58b65f65c5-7tp4b" podStartSLOduration=69.89075633 podStartE2EDuration="1m12.650060561s" podCreationTimestamp="2024-12-16 09:39:50 +0000 UTC" firstStartedPulling="2024-12-16 09:40:59.564027897 +0000 UTC m=+83.565483534" lastFinishedPulling="2024-12-16 09:41:02.323332128 +0000 UTC m=+86.324787765" observedRunningTime="2024-12-16 09:41:02.642035026 +0000 UTC m=+86.643490662" watchObservedRunningTime="2024-12-16 09:41:02.650060561 +0000 UTC m=+86.651516198" Dec 16 09:41:02.989884 kubelet[2715]: I1216 09:41:02.989065 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-58b65f65c5-dvxqf" podStartSLOduration=69.830197386 podStartE2EDuration="1m12.989042741s" podCreationTimestamp="2024-12-16 09:39:50 +0000 UTC" firstStartedPulling="2024-12-16 09:40:58.632207007 +0000 UTC m=+82.633662644" lastFinishedPulling="2024-12-16 09:41:01.791052363 +0000 UTC m=+85.792507999" observedRunningTime="2024-12-16 09:41:02.684213087 +0000 UTC m=+86.685668723" watchObservedRunningTime="2024-12-16 09:41:02.989042741 +0000 UTC m=+86.990498378" Dec 16 09:41:05.156794 containerd[1495]: time="2024-12-16T09:41:05.156574330Z" level=info msg="StopPodSandbox for \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\"" Dec 16 09:41:05.289760 containerd[1495]: 2024-12-16 09:41:05.234 [INFO][5646] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" Dec 16 09:41:05.289760 containerd[1495]: 2024-12-16 09:41:05.235 [INFO][5646] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" iface="eth0" netns="/var/run/netns/cni-ca0be596-c4d3-ec6f-f165-bb949b9cbc90" Dec 16 09:41:05.289760 containerd[1495]: 2024-12-16 09:41:05.236 [INFO][5646] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" iface="eth0" netns="/var/run/netns/cni-ca0be596-c4d3-ec6f-f165-bb949b9cbc90" Dec 16 09:41:05.289760 containerd[1495]: 2024-12-16 09:41:05.236 [INFO][5646] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" iface="eth0" netns="/var/run/netns/cni-ca0be596-c4d3-ec6f-f165-bb949b9cbc90" Dec 16 09:41:05.289760 containerd[1495]: 2024-12-16 09:41:05.237 [INFO][5646] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" Dec 16 09:41:05.289760 containerd[1495]: 2024-12-16 09:41:05.237 [INFO][5646] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" Dec 16 09:41:05.289760 containerd[1495]: 2024-12-16 09:41:05.264 [INFO][5652] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" HandleID="k8s-pod-network.52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--kube--controllers--74864695c7--klzq2-eth0" Dec 16 09:41:05.289760 containerd[1495]: 2024-12-16 09:41:05.264 [INFO][5652] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:41:05.289760 containerd[1495]: 2024-12-16 09:41:05.264 [INFO][5652] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:41:05.289760 containerd[1495]: 2024-12-16 09:41:05.275 [WARNING][5652] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" HandleID="k8s-pod-network.52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--kube--controllers--74864695c7--klzq2-eth0" Dec 16 09:41:05.289760 containerd[1495]: 2024-12-16 09:41:05.275 [INFO][5652] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" HandleID="k8s-pod-network.52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--kube--controllers--74864695c7--klzq2-eth0" Dec 16 09:41:05.289760 containerd[1495]: 2024-12-16 09:41:05.277 [INFO][5652] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:41:05.289760 containerd[1495]: 2024-12-16 09:41:05.281 [INFO][5646] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" Dec 16 09:41:05.291625 containerd[1495]: time="2024-12-16T09:41:05.290261931Z" level=info msg="TearDown network for sandbox \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\" successfully" Dec 16 09:41:05.291625 containerd[1495]: time="2024-12-16T09:41:05.290309403Z" level=info msg="StopPodSandbox for \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\" returns successfully" Dec 16 09:41:05.292389 systemd[1]: run-netns-cni\x2dca0be596\x2dc4d3\x2dec6f\x2df165\x2dbb949b9cbc90.mount: Deactivated successfully. Dec 16 09:41:05.483237 kubelet[2715]: I1216 09:41:05.482612 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e2e45b5-d5ad-4866-a437-2038f6559801-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "9e2e45b5-d5ad-4866-a437-2038f6559801" (UID: "9e2e45b5-d5ad-4866-a437-2038f6559801"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 09:41:05.484420 kubelet[2715]: I1216 09:41:05.484366 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e2e45b5-d5ad-4866-a437-2038f6559801-tigera-ca-bundle\") pod \"9e2e45b5-d5ad-4866-a437-2038f6559801\" (UID: \"9e2e45b5-d5ad-4866-a437-2038f6559801\") " Dec 16 09:41:05.484508 kubelet[2715]: I1216 09:41:05.484473 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l9tz\" (UniqueName: \"kubernetes.io/projected/9e2e45b5-d5ad-4866-a437-2038f6559801-kube-api-access-2l9tz\") pod \"9e2e45b5-d5ad-4866-a437-2038f6559801\" (UID: \"9e2e45b5-d5ad-4866-a437-2038f6559801\") " Dec 16 09:41:05.487993 kubelet[2715]: I1216 09:41:05.487952 2715 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e2e45b5-d5ad-4866-a437-2038f6559801-tigera-ca-bundle\") on node \"ci-4081-2-1-4-1bd0c0376a\" DevicePath \"\"" Dec 16 09:41:05.495566 kubelet[2715]: I1216 09:41:05.495512 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e2e45b5-d5ad-4866-a437-2038f6559801-kube-api-access-2l9tz" (OuterVolumeSpecName: "kube-api-access-2l9tz") pod "9e2e45b5-d5ad-4866-a437-2038f6559801" (UID: "9e2e45b5-d5ad-4866-a437-2038f6559801"). InnerVolumeSpecName "kube-api-access-2l9tz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 09:41:05.497062 systemd[1]: var-lib-kubelet-pods-9e2e45b5\x2dd5ad\x2d4866\x2da437\x2d2038f6559801-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2l9tz.mount: Deactivated successfully. Dec 16 09:41:05.589147 kubelet[2715]: I1216 09:41:05.589069 2715 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-2l9tz\" (UniqueName: \"kubernetes.io/projected/9e2e45b5-d5ad-4866-a437-2038f6559801-kube-api-access-2l9tz\") on node \"ci-4081-2-1-4-1bd0c0376a\" DevicePath \"\"" Dec 16 09:41:05.652892 systemd[1]: Removed slice kubepods-besteffort-pod9e2e45b5_d5ad_4866_a437_2038f6559801.slice - libcontainer container kubepods-besteffort-pod9e2e45b5_d5ad_4866_a437_2038f6559801.slice. Dec 16 09:41:05.767056 systemd[1]: Created slice kubepods-besteffort-podd74a112b_42b2_48be_90ff_e13bbb7b2fa0.slice - libcontainer container kubepods-besteffort-podd74a112b_42b2_48be_90ff_e13bbb7b2fa0.slice. 
Dec 16 09:41:05.790969 kubelet[2715]: I1216 09:41:05.790915 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kzl8\" (UniqueName: \"kubernetes.io/projected/d74a112b-42b2-48be-90ff-e13bbb7b2fa0-kube-api-access-2kzl8\") pod \"calico-kube-controllers-64d8998b6d-gpvt8\" (UID: \"d74a112b-42b2-48be-90ff-e13bbb7b2fa0\") " pod="calico-system/calico-kube-controllers-64d8998b6d-gpvt8" Dec 16 09:41:05.790969 kubelet[2715]: I1216 09:41:05.790969 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d74a112b-42b2-48be-90ff-e13bbb7b2fa0-tigera-ca-bundle\") pod \"calico-kube-controllers-64d8998b6d-gpvt8\" (UID: \"d74a112b-42b2-48be-90ff-e13bbb7b2fa0\") " pod="calico-system/calico-kube-controllers-64d8998b6d-gpvt8" Dec 16 09:41:06.103789 containerd[1495]: time="2024-12-16T09:41:06.102015533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64d8998b6d-gpvt8,Uid:d74a112b-42b2-48be-90ff-e13bbb7b2fa0,Namespace:calico-system,Attempt:0,}" Dec 16 09:41:06.149954 kubelet[2715]: I1216 09:41:06.149901 2715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e2e45b5-d5ad-4866-a437-2038f6559801" path="/var/lib/kubelet/pods/9e2e45b5-d5ad-4866-a437-2038f6559801/volumes" Dec 16 09:41:06.274584 systemd-networkd[1393]: caliab2fa5ba313: Link UP Dec 16 09:41:06.276640 systemd-networkd[1393]: caliab2fa5ba313: Gained carrier Dec 16 09:41:06.290592 containerd[1495]: 2024-12-16 09:41:06.190 [INFO][5661] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--4--1bd0c0376a-k8s-calico--kube--controllers--64d8998b6d--gpvt8-eth0 calico-kube-controllers-64d8998b6d- calico-system d74a112b-42b2-48be-90ff-e13bbb7b2fa0 1120 0 2024-12-16 09:41:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64d8998b6d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-2-1-4-1bd0c0376a calico-kube-controllers-64d8998b6d-gpvt8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliab2fa5ba313 [] []}} ContainerID="62635971fdf3a70955b0e24f97b91e5386808655515a388203e38b8b77a848e2" Namespace="calico-system" Pod="calico-kube-controllers-64d8998b6d-gpvt8" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-calico--kube--controllers--64d8998b6d--gpvt8-" Dec 16 09:41:06.290592 containerd[1495]: 2024-12-16 09:41:06.191 [INFO][5661] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="62635971fdf3a70955b0e24f97b91e5386808655515a388203e38b8b77a848e2" Namespace="calico-system" Pod="calico-kube-controllers-64d8998b6d-gpvt8" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-calico--kube--controllers--64d8998b6d--gpvt8-eth0" Dec 16 09:41:06.290592 containerd[1495]: 2024-12-16 09:41:06.226 [INFO][5672] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62635971fdf3a70955b0e24f97b91e5386808655515a388203e38b8b77a848e2" HandleID="k8s-pod-network.62635971fdf3a70955b0e24f97b91e5386808655515a388203e38b8b77a848e2" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--kube--controllers--64d8998b6d--gpvt8-eth0" Dec 16 09:41:06.290592 containerd[1495]: 2024-12-16 09:41:06.235 [INFO][5672] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="62635971fdf3a70955b0e24f97b91e5386808655515a388203e38b8b77a848e2" HandleID="k8s-pod-network.62635971fdf3a70955b0e24f97b91e5386808655515a388203e38b8b77a848e2" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--kube--controllers--64d8998b6d--gpvt8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318ab0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-1-4-1bd0c0376a", "pod":"calico-kube-controllers-64d8998b6d-gpvt8", "timestamp":"2024-12-16 09:41:06.226540704 +0000 UTC"}, Hostname:"ci-4081-2-1-4-1bd0c0376a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 09:41:06.290592 containerd[1495]: 2024-12-16 09:41:06.235 [INFO][5672] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:41:06.290592 containerd[1495]: 2024-12-16 09:41:06.235 [INFO][5672] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:41:06.290592 containerd[1495]: 2024-12-16 09:41:06.235 [INFO][5672] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-4-1bd0c0376a' Dec 16 09:41:06.290592 containerd[1495]: 2024-12-16 09:41:06.237 [INFO][5672] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.62635971fdf3a70955b0e24f97b91e5386808655515a388203e38b8b77a848e2" host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:41:06.290592 containerd[1495]: 2024-12-16 09:41:06.241 [INFO][5672] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:41:06.290592 containerd[1495]: 2024-12-16 09:41:06.245 [INFO][5672] ipam/ipam.go 489: Trying affinity for 192.168.121.0/26 host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:41:06.290592 containerd[1495]: 2024-12-16 09:41:06.246 [INFO][5672] ipam/ipam.go 155: Attempting to load block cidr=192.168.121.0/26 host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:41:06.290592 containerd[1495]: 2024-12-16 09:41:06.249 [INFO][5672] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.121.0/26 host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:41:06.290592 containerd[1495]: 2024-12-16 09:41:06.249 [INFO][5672] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.121.0/26 handle="k8s-pod-network.62635971fdf3a70955b0e24f97b91e5386808655515a388203e38b8b77a848e2" host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:41:06.290592 containerd[1495]: 2024-12-16 09:41:06.251 [INFO][5672] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.62635971fdf3a70955b0e24f97b91e5386808655515a388203e38b8b77a848e2 Dec 16 09:41:06.290592 containerd[1495]: 2024-12-16 09:41:06.256 [INFO][5672] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.121.0/26 handle="k8s-pod-network.62635971fdf3a70955b0e24f97b91e5386808655515a388203e38b8b77a848e2" host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:41:06.290592 containerd[1495]: 2024-12-16 09:41:06.265 [INFO][5672] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.121.5/26] block=192.168.121.0/26 handle="k8s-pod-network.62635971fdf3a70955b0e24f97b91e5386808655515a388203e38b8b77a848e2" host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:41:06.290592 containerd[1495]: 2024-12-16 09:41:06.265 [INFO][5672] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.121.5/26] handle="k8s-pod-network.62635971fdf3a70955b0e24f97b91e5386808655515a388203e38b8b77a848e2" host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:41:06.290592 containerd[1495]: 
2024-12-16 09:41:06.265 [INFO][5672] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:41:06.290592 containerd[1495]: 2024-12-16 09:41:06.265 [INFO][5672] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.121.5/26] IPv6=[] ContainerID="62635971fdf3a70955b0e24f97b91e5386808655515a388203e38b8b77a848e2" HandleID="k8s-pod-network.62635971fdf3a70955b0e24f97b91e5386808655515a388203e38b8b77a848e2" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--kube--controllers--64d8998b6d--gpvt8-eth0" Dec 16 09:41:06.296925 containerd[1495]: 2024-12-16 09:41:06.271 [INFO][5661] cni-plugin/k8s.go 386: Populated endpoint ContainerID="62635971fdf3a70955b0e24f97b91e5386808655515a388203e38b8b77a848e2" Namespace="calico-system" Pod="calico-kube-controllers-64d8998b6d-gpvt8" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-calico--kube--controllers--64d8998b6d--gpvt8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--1bd0c0376a-k8s-calico--kube--controllers--64d8998b6d--gpvt8-eth0", GenerateName:"calico-kube-controllers-64d8998b6d-", Namespace:"calico-system", SelfLink:"", UID:"d74a112b-42b2-48be-90ff-e13bbb7b2fa0", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 41, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64d8998b6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-1bd0c0376a", ContainerID:"", Pod:"calico-kube-controllers-64d8998b6d-gpvt8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliab2fa5ba313", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:41:06.296925 containerd[1495]: 2024-12-16 09:41:06.271 [INFO][5661] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.121.5/32] ContainerID="62635971fdf3a70955b0e24f97b91e5386808655515a388203e38b8b77a848e2" Namespace="calico-system" Pod="calico-kube-controllers-64d8998b6d-gpvt8" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-calico--kube--controllers--64d8998b6d--gpvt8-eth0" Dec 16 09:41:06.296925 containerd[1495]: 2024-12-16 09:41:06.271 [INFO][5661] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab2fa5ba313 ContainerID="62635971fdf3a70955b0e24f97b91e5386808655515a388203e38b8b77a848e2" Namespace="calico-system" Pod="calico-kube-controllers-64d8998b6d-gpvt8" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-calico--kube--controllers--64d8998b6d--gpvt8-eth0" Dec 16 09:41:06.296925 containerd[1495]: 2024-12-16 09:41:06.273 [INFO][5661] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="62635971fdf3a70955b0e24f97b91e5386808655515a388203e38b8b77a848e2" Namespace="calico-system" Pod="calico-kube-controllers-64d8998b6d-gpvt8" 
WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-calico--kube--controllers--64d8998b6d--gpvt8-eth0" Dec 16 09:41:06.296925 containerd[1495]: 2024-12-16 09:41:06.273 [INFO][5661] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="62635971fdf3a70955b0e24f97b91e5386808655515a388203e38b8b77a848e2" Namespace="calico-system" Pod="calico-kube-controllers-64d8998b6d-gpvt8" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-calico--kube--controllers--64d8998b6d--gpvt8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--1bd0c0376a-k8s-calico--kube--controllers--64d8998b6d--gpvt8-eth0", GenerateName:"calico-kube-controllers-64d8998b6d-", Namespace:"calico-system", SelfLink:"", UID:"d74a112b-42b2-48be-90ff-e13bbb7b2fa0", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 41, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64d8998b6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-1bd0c0376a", ContainerID:"62635971fdf3a70955b0e24f97b91e5386808655515a388203e38b8b77a848e2", Pod:"calico-kube-controllers-64d8998b6d-gpvt8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliab2fa5ba313", MAC:"be:1d:d6:06:8d:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:41:06.296925 containerd[1495]: 2024-12-16 09:41:06.285 [INFO][5661] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="62635971fdf3a70955b0e24f97b91e5386808655515a388203e38b8b77a848e2" Namespace="calico-system" Pod="calico-kube-controllers-64d8998b6d-gpvt8" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-calico--kube--controllers--64d8998b6d--gpvt8-eth0" Dec 16 09:41:06.333920 containerd[1495]: time="2024-12-16T09:41:06.333588215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 16 09:41:06.333920 containerd[1495]: time="2024-12-16T09:41:06.333652769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 16 09:41:06.333920 containerd[1495]: time="2024-12-16T09:41:06.333670344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:41:06.333920 containerd[1495]: time="2024-12-16T09:41:06.333784656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:41:06.389875 systemd[1]: Started cri-containerd-62635971fdf3a70955b0e24f97b91e5386808655515a388203e38b8b77a848e2.scope - libcontainer container 62635971fdf3a70955b0e24f97b91e5386808655515a388203e38b8b77a848e2. Dec 16 09:41:06.438692 containerd[1495]: time="2024-12-16T09:41:06.438652192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64d8998b6d-gpvt8,Uid:d74a112b-42b2-48be-90ff-e13bbb7b2fa0,Namespace:calico-system,Attempt:0,} returns sandbox id \"62635971fdf3a70955b0e24f97b91e5386808655515a388203e38b8b77a848e2\"" Dec 16 09:41:06.440687 containerd[1495]: time="2024-12-16T09:41:06.440422301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 16 09:41:07.143146 containerd[1495]: time="2024-12-16T09:41:07.142543695Z" level=info msg="StopPodSandbox for \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\"" Dec 16 09:41:07.265786 containerd[1495]: 2024-12-16 09:41:07.218 [INFO][5758] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" Dec 16 09:41:07.265786 containerd[1495]: 2024-12-16 09:41:07.218 [INFO][5758] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" iface="eth0" netns="/var/run/netns/cni-3d32ca73-5a86-f189-c58b-b941e8b44a1a" Dec 16 09:41:07.265786 containerd[1495]: 2024-12-16 09:41:07.219 [INFO][5758] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" iface="eth0" netns="/var/run/netns/cni-3d32ca73-5a86-f189-c58b-b941e8b44a1a" Dec 16 09:41:07.265786 containerd[1495]: 2024-12-16 09:41:07.219 [INFO][5758] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" iface="eth0" netns="/var/run/netns/cni-3d32ca73-5a86-f189-c58b-b941e8b44a1a" Dec 16 09:41:07.265786 containerd[1495]: 2024-12-16 09:41:07.219 [INFO][5758] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" Dec 16 09:41:07.265786 containerd[1495]: 2024-12-16 09:41:07.219 [INFO][5758] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" Dec 16 09:41:07.265786 containerd[1495]: 2024-12-16 09:41:07.246 [INFO][5765] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" HandleID="k8s-pod-network.f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-csi--node--driver--s6lhq-eth0" Dec 16 09:41:07.265786 containerd[1495]: 2024-12-16 09:41:07.246 [INFO][5765] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:41:07.265786 containerd[1495]: 2024-12-16 09:41:07.246 [INFO][5765] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:41:07.265786 containerd[1495]: 2024-12-16 09:41:07.255 [WARNING][5765] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" HandleID="k8s-pod-network.f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-csi--node--driver--s6lhq-eth0" Dec 16 09:41:07.265786 containerd[1495]: 2024-12-16 09:41:07.255 [INFO][5765] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" HandleID="k8s-pod-network.f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-csi--node--driver--s6lhq-eth0" Dec 16 09:41:07.265786 containerd[1495]: 2024-12-16 09:41:07.257 [INFO][5765] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:41:07.265786 containerd[1495]: 2024-12-16 09:41:07.260 [INFO][5758] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" Dec 16 09:41:07.265786 containerd[1495]: time="2024-12-16T09:41:07.264917550Z" level=info msg="TearDown network for sandbox \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\" successfully" Dec 16 09:41:07.265786 containerd[1495]: time="2024-12-16T09:41:07.264956165Z" level=info msg="StopPodSandbox for \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\" returns successfully" Dec 16 09:41:07.268048 containerd[1495]: time="2024-12-16T09:41:07.268018769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s6lhq,Uid:b184f5aa-f13a-4907-82b2-11f9a166985b,Namespace:calico-system,Attempt:1,}" Dec 16 09:41:07.275366 systemd[1]: run-netns-cni\x2d3d32ca73\x2d5a86\x2df189\x2dc58b\x2db941e8b44a1a.mount: Deactivated successfully. Dec 16 09:41:07.434703 systemd-networkd[1393]: cali7724454515d: Link UP Dec 16 09:41:07.436921 systemd-networkd[1393]: cali7724454515d: Gained carrier Dec 16 09:41:07.461143 containerd[1495]: 2024-12-16 09:41:07.355 [INFO][5772] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--4--1bd0c0376a-k8s-csi--node--driver--s6lhq-eth0 csi-node-driver- calico-system b184f5aa-f13a-4907-82b2-11f9a166985b 1127 0 2024-12-16 09:39:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-2-1-4-1bd0c0376a csi-node-driver-s6lhq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7724454515d [] []}} ContainerID="5e597f68ef5cf9bc677a24a59c137bd39bb20ab245510cc793989a8dccd73836" Namespace="calico-system" Pod="csi-node-driver-s6lhq" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-csi--node--driver--s6lhq-" Dec 16 09:41:07.461143 containerd[1495]: 2024-12-16 09:41:07.355 [INFO][5772] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5e597f68ef5cf9bc677a24a59c137bd39bb20ab245510cc793989a8dccd73836" Namespace="calico-system" Pod="csi-node-driver-s6lhq" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-csi--node--driver--s6lhq-eth0" Dec 16 09:41:07.461143 containerd[1495]: 2024-12-16 09:41:07.393 [INFO][5782] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e597f68ef5cf9bc677a24a59c137bd39bb20ab245510cc793989a8dccd73836" 
HandleID="k8s-pod-network.5e597f68ef5cf9bc677a24a59c137bd39bb20ab245510cc793989a8dccd73836" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-csi--node--driver--s6lhq-eth0" Dec 16 09:41:07.461143 containerd[1495]: 2024-12-16 09:41:07.402 [INFO][5782] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5e597f68ef5cf9bc677a24a59c137bd39bb20ab245510cc793989a8dccd73836" HandleID="k8s-pod-network.5e597f68ef5cf9bc677a24a59c137bd39bb20ab245510cc793989a8dccd73836" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-csi--node--driver--s6lhq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-1-4-1bd0c0376a", "pod":"csi-node-driver-s6lhq", "timestamp":"2024-12-16 09:41:07.393496418 +0000 UTC"}, Hostname:"ci-4081-2-1-4-1bd0c0376a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 09:41:07.461143 containerd[1495]: 2024-12-16 09:41:07.402 [INFO][5782] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:41:07.461143 containerd[1495]: 2024-12-16 09:41:07.402 [INFO][5782] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:41:07.461143 containerd[1495]: 2024-12-16 09:41:07.402 [INFO][5782] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-4-1bd0c0376a' Dec 16 09:41:07.461143 containerd[1495]: 2024-12-16 09:41:07.404 [INFO][5782] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5e597f68ef5cf9bc677a24a59c137bd39bb20ab245510cc793989a8dccd73836" host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:41:07.461143 containerd[1495]: 2024-12-16 09:41:07.408 [INFO][5782] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:41:07.461143 containerd[1495]: 2024-12-16 09:41:07.412 [INFO][5782] ipam/ipam.go 489: Trying affinity for 192.168.121.0/26 host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:41:07.461143 containerd[1495]: 2024-12-16 09:41:07.413 [INFO][5782] ipam/ipam.go 155: Attempting to load block cidr=192.168.121.0/26 host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:41:07.461143 containerd[1495]: 2024-12-16 09:41:07.415 [INFO][5782] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.121.0/26 host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:41:07.461143 containerd[1495]: 2024-12-16 09:41:07.415 [INFO][5782] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.121.0/26 handle="k8s-pod-network.5e597f68ef5cf9bc677a24a59c137bd39bb20ab245510cc793989a8dccd73836" host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:41:07.461143 containerd[1495]: 2024-12-16 09:41:07.417 [INFO][5782] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5e597f68ef5cf9bc677a24a59c137bd39bb20ab245510cc793989a8dccd73836 Dec 16 09:41:07.461143 containerd[1495]: 2024-12-16 09:41:07.421 [INFO][5782] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.121.0/26 handle="k8s-pod-network.5e597f68ef5cf9bc677a24a59c137bd39bb20ab245510cc793989a8dccd73836" host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:41:07.461143 containerd[1495]: 2024-12-16 09:41:07.427 [INFO][5782] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.121.6/26] block=192.168.121.0/26 handle="k8s-pod-network.5e597f68ef5cf9bc677a24a59c137bd39bb20ab245510cc793989a8dccd73836" host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:41:07.461143 containerd[1495]: 2024-12-16 09:41:07.427 
[INFO][5782] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.121.6/26] handle="k8s-pod-network.5e597f68ef5cf9bc677a24a59c137bd39bb20ab245510cc793989a8dccd73836" host="ci-4081-2-1-4-1bd0c0376a" Dec 16 09:41:07.461143 containerd[1495]: 2024-12-16 09:41:07.427 [INFO][5782] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:41:07.461143 containerd[1495]: 2024-12-16 09:41:07.427 [INFO][5782] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.121.6/26] IPv6=[] ContainerID="5e597f68ef5cf9bc677a24a59c137bd39bb20ab245510cc793989a8dccd73836" HandleID="k8s-pod-network.5e597f68ef5cf9bc677a24a59c137bd39bb20ab245510cc793989a8dccd73836" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-csi--node--driver--s6lhq-eth0" Dec 16 09:41:07.463715 containerd[1495]: 2024-12-16 09:41:07.431 [INFO][5772] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5e597f68ef5cf9bc677a24a59c137bd39bb20ab245510cc793989a8dccd73836" Namespace="calico-system" Pod="csi-node-driver-s6lhq" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-csi--node--driver--s6lhq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--1bd0c0376a-k8s-csi--node--driver--s6lhq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b184f5aa-f13a-4907-82b2-11f9a166985b", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-1bd0c0376a", ContainerID:"", Pod:"csi-node-driver-s6lhq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.121.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7724454515d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:41:07.463715 containerd[1495]: 2024-12-16 09:41:07.431 [INFO][5772] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.121.6/32] ContainerID="5e597f68ef5cf9bc677a24a59c137bd39bb20ab245510cc793989a8dccd73836" Namespace="calico-system" Pod="csi-node-driver-s6lhq" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-csi--node--driver--s6lhq-eth0" Dec 16 09:41:07.463715 containerd[1495]: 2024-12-16 09:41:07.432 [INFO][5772] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7724454515d ContainerID="5e597f68ef5cf9bc677a24a59c137bd39bb20ab245510cc793989a8dccd73836" Namespace="calico-system" Pod="csi-node-driver-s6lhq" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-csi--node--driver--s6lhq-eth0" Dec 16 09:41:07.463715 containerd[1495]: 2024-12-16 09:41:07.437 [INFO][5772] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e597f68ef5cf9bc677a24a59c137bd39bb20ab245510cc793989a8dccd73836" Namespace="calico-system" 
Pod="csi-node-driver-s6lhq" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-csi--node--driver--s6lhq-eth0" Dec 16 09:41:07.463715 containerd[1495]: 2024-12-16 09:41:07.438 [INFO][5772] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5e597f68ef5cf9bc677a24a59c137bd39bb20ab245510cc793989a8dccd73836" Namespace="calico-system" Pod="csi-node-driver-s6lhq" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-csi--node--driver--s6lhq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--1bd0c0376a-k8s-csi--node--driver--s6lhq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b184f5aa-f13a-4907-82b2-11f9a166985b", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-1bd0c0376a", ContainerID:"5e597f68ef5cf9bc677a24a59c137bd39bb20ab245510cc793989a8dccd73836", Pod:"csi-node-driver-s6lhq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.121.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7724454515d", MAC:"ba:85:a2:05:11:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:41:07.463715 containerd[1495]: 2024-12-16 09:41:07.454 [INFO][5772] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5e597f68ef5cf9bc677a24a59c137bd39bb20ab245510cc793989a8dccd73836" Namespace="calico-system" Pod="csi-node-driver-s6lhq" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-csi--node--driver--s6lhq-eth0" Dec 16 09:41:07.507094 containerd[1495]: time="2024-12-16T09:41:07.506962033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 16 09:41:07.507094 containerd[1495]: time="2024-12-16T09:41:07.507025988Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 16 09:41:07.507094 containerd[1495]: time="2024-12-16T09:41:07.507042770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:41:07.508777 containerd[1495]: time="2024-12-16T09:41:07.507621241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:41:07.535892 systemd[1]: Started cri-containerd-5e597f68ef5cf9bc677a24a59c137bd39bb20ab245510cc793989a8dccd73836.scope - libcontainer container 5e597f68ef5cf9bc677a24a59c137bd39bb20ab245510cc793989a8dccd73836. 
Dec 16 09:41:07.569066 containerd[1495]: time="2024-12-16T09:41:07.568992000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s6lhq,Uid:b184f5aa-f13a-4907-82b2-11f9a166985b,Namespace:calico-system,Attempt:1,} returns sandbox id \"5e597f68ef5cf9bc677a24a59c137bd39bb20ab245510cc793989a8dccd73836\"" Dec 16 09:41:07.720194 systemd-networkd[1393]: caliab2fa5ba313: Gained IPv6LL Dec 16 09:41:07.924004 systemd[1]: Started sshd@7-5.75.242.71:22-41.211.137.55:52610.service - OpenSSH per-connection server daemon (41.211.137.55:52610). Dec 16 09:41:08.488068 systemd-networkd[1393]: cali7724454515d: Gained IPv6LL Dec 16 09:41:08.734134 sshd[5846]: Connection closed by authenticating user root 41.211.137.55 port 52610 [preauth] Dec 16 09:41:08.738673 systemd[1]: sshd@7-5.75.242.71:22-41.211.137.55:52610.service: Deactivated successfully. Dec 16 09:41:10.336471 containerd[1495]: time="2024-12-16T09:41:10.336387857Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:41:10.337545 containerd[1495]: time="2024-12-16T09:41:10.337478180Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Dec 16 09:41:10.338810 containerd[1495]: time="2024-12-16T09:41:10.338675791Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:41:10.341097 containerd[1495]: time="2024-12-16T09:41:10.341067196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:41:10.343000 containerd[1495]: time="2024-12-16T09:41:10.342166655Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.901707572s" Dec 16 09:41:10.343000 containerd[1495]: time="2024-12-16T09:41:10.342209609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 16 09:41:10.343912 containerd[1495]: time="2024-12-16T09:41:10.343882620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 16 09:41:10.363645 containerd[1495]: time="2024-12-16T09:41:10.363580071Z" level=info msg="CreateContainer within sandbox \"62635971fdf3a70955b0e24f97b91e5386808655515a388203e38b8b77a848e2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 16 09:41:10.383053 containerd[1495]: time="2024-12-16T09:41:10.382996146Z" level=info msg="CreateContainer within sandbox \"62635971fdf3a70955b0e24f97b91e5386808655515a388203e38b8b77a848e2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ad4a13d7cbbd678a89a584aedbfa2cecdb95fc1878a8e408d72c1ac68cd68495\"" Dec 16 09:41:10.385279 containerd[1495]: time="2024-12-16T09:41:10.384241759Z" level=info msg="StartContainer for \"ad4a13d7cbbd678a89a584aedbfa2cecdb95fc1878a8e408d72c1ac68cd68495\"" Dec 16 09:41:10.422016 
systemd[1]: Started cri-containerd-ad4a13d7cbbd678a89a584aedbfa2cecdb95fc1878a8e408d72c1ac68cd68495.scope - libcontainer container ad4a13d7cbbd678a89a584aedbfa2cecdb95fc1878a8e408d72c1ac68cd68495. Dec 16 09:41:10.486347 containerd[1495]: time="2024-12-16T09:41:10.486311431Z" level=info msg="StartContainer for \"ad4a13d7cbbd678a89a584aedbfa2cecdb95fc1878a8e408d72c1ac68cd68495\" returns successfully" Dec 16 09:41:10.778668 kubelet[2715]: I1216 09:41:10.778270 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-64d8998b6d-gpvt8" podStartSLOduration=1.875195398 podStartE2EDuration="5.778250903s" podCreationTimestamp="2024-12-16 09:41:05 +0000 UTC" firstStartedPulling="2024-12-16 09:41:06.440099796 +0000 UTC m=+90.441555433" lastFinishedPulling="2024-12-16 09:41:10.343155311 +0000 UTC m=+94.344610938" observedRunningTime="2024-12-16 09:41:10.707364469 +0000 UTC m=+94.708820106" watchObservedRunningTime="2024-12-16 09:41:10.778250903 +0000 UTC m=+94.779706540" Dec 16 09:41:12.202951 containerd[1495]: time="2024-12-16T09:41:12.202858812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:41:12.204175 containerd[1495]: time="2024-12-16T09:41:12.204125016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 16 09:41:12.205082 containerd[1495]: time="2024-12-16T09:41:12.205040089Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:41:12.208002 containerd[1495]: time="2024-12-16T09:41:12.207234842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:41:12.208002 containerd[1495]: time="2024-12-16T09:41:12.207879591Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.86396594s" Dec 16 09:41:12.208002 containerd[1495]: time="2024-12-16T09:41:12.207905422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 16 09:41:12.211875 containerd[1495]: time="2024-12-16T09:41:12.211836180Z" level=info msg="CreateContainer within sandbox \"5e597f68ef5cf9bc677a24a59c137bd39bb20ab245510cc793989a8dccd73836\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 16 09:41:12.312350 containerd[1495]: time="2024-12-16T09:41:12.312219849Z" level=info msg="CreateContainer within sandbox \"5e597f68ef5cf9bc677a24a59c137bd39bb20ab245510cc793989a8dccd73836\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"060926cfc63ee237cd5df3d1cdb88ca657135660aa3aa618686d749e4335049c\"" Dec 16 09:41:12.323768 containerd[1495]: time="2024-12-16T09:41:12.323488527Z" level=info msg="StartContainer for \"060926cfc63ee237cd5df3d1cdb88ca657135660aa3aa618686d749e4335049c\"" Dec 16 09:41:12.361131 systemd[1]: Started 
cri-containerd-060926cfc63ee237cd5df3d1cdb88ca657135660aa3aa618686d749e4335049c.scope - libcontainer container 060926cfc63ee237cd5df3d1cdb88ca657135660aa3aa618686d749e4335049c. Dec 16 09:41:12.414143 containerd[1495]: time="2024-12-16T09:41:12.413662682Z" level=info msg="StartContainer for \"060926cfc63ee237cd5df3d1cdb88ca657135660aa3aa618686d749e4335049c\" returns successfully" Dec 16 09:41:12.416191 containerd[1495]: time="2024-12-16T09:41:12.415997817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 16 09:41:14.415105 containerd[1495]: time="2024-12-16T09:41:14.415037926Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:41:14.417116 containerd[1495]: time="2024-12-16T09:41:14.417052330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 16 09:41:14.418834 containerd[1495]: time="2024-12-16T09:41:14.418783075Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:41:14.421678 containerd[1495]: time="2024-12-16T09:41:14.421628360Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:41:14.422688 containerd[1495]: time="2024-12-16T09:41:14.422453669Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.006423709s" Dec 16 09:41:14.422688 containerd[1495]: time="2024-12-16T09:41:14.422510258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 16 09:41:14.426272 containerd[1495]: time="2024-12-16T09:41:14.426085728Z" level=info msg="CreateContainer within sandbox \"5e597f68ef5cf9bc677a24a59c137bd39bb20ab245510cc793989a8dccd73836\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 16 09:41:14.454241 containerd[1495]: time="2024-12-16T09:41:14.454184156Z" level=info msg="CreateContainer within sandbox \"5e597f68ef5cf9bc677a24a59c137bd39bb20ab245510cc793989a8dccd73836\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3a3dc6b61006ec25d7685dc7ec622e2a46c532a608818fb416221ac0f15f4772\"" Dec 16 09:41:14.456485 containerd[1495]: time="2024-12-16T09:41:14.456451901Z" level=info msg="StartContainer for \"3a3dc6b61006ec25d7685dc7ec622e2a46c532a608818fb416221ac0f15f4772\"" Dec 16 09:41:14.506197 systemd[1]: Started cri-containerd-3a3dc6b61006ec25d7685dc7ec622e2a46c532a608818fb416221ac0f15f4772.scope - libcontainer container 3a3dc6b61006ec25d7685dc7ec622e2a46c532a608818fb416221ac0f15f4772. 
Dec 16 09:41:14.547086 containerd[1495]: time="2024-12-16T09:41:14.547006537Z" level=info msg="StartContainer for \"3a3dc6b61006ec25d7685dc7ec622e2a46c532a608818fb416221ac0f15f4772\" returns successfully" Dec 16 09:41:15.688550 kubelet[2715]: I1216 09:41:15.688475 2715 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 16 09:41:15.699200 kubelet[2715]: I1216 09:41:15.699145 2715 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 16 09:41:21.420760 systemd[1]: run-containerd-runc-k8s.io-feb965f41f3d8561d0bc53a6154fdcd5cdf7741a69e0cffbbd3b5382656b821f-runc.b0Mdpb.mount: Deactivated successfully. Dec 16 09:41:21.556698 kubelet[2715]: I1216 09:41:21.555915 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-s6lhq" podStartSLOduration=84.71021734199999 podStartE2EDuration="1m31.555891753s" podCreationTimestamp="2024-12-16 09:39:50 +0000 UTC" firstStartedPulling="2024-12-16 09:41:07.577797661 +0000 UTC m=+91.579253298" lastFinishedPulling="2024-12-16 09:41:14.423472072 +0000 UTC m=+98.424927709" observedRunningTime="2024-12-16 09:41:14.72433504 +0000 UTC m=+98.725790687" watchObservedRunningTime="2024-12-16 09:41:21.555891753 +0000 UTC m=+105.557347390" Dec 16 09:41:36.184421 containerd[1495]: time="2024-12-16T09:41:36.184171547Z" level=info msg="StopPodSandbox for \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\"" Dec 16 09:41:36.437089 containerd[1495]: 2024-12-16 09:41:36.377 [WARNING][6061] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--kmszk-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a579e978-3399-4d3d-9c17-650bcb6672c6", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 39, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-1bd0c0376a", ContainerID:"986605d458042b9f4f91071298c4398dba6b511d7f59d45e470e2040874dfd7d", Pod:"coredns-6f6b679f8f-kmszk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali39619eaa86f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:41:36.437089 containerd[1495]: 2024-12-16 09:41:36.378 [INFO][6061] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" Dec 16 09:41:36.437089 containerd[1495]: 2024-12-16 09:41:36.378 [INFO][6061] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" iface="eth0" netns="" Dec 16 09:41:36.437089 containerd[1495]: 2024-12-16 09:41:36.379 [INFO][6061] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" Dec 16 09:41:36.437089 containerd[1495]: 2024-12-16 09:41:36.379 [INFO][6061] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" Dec 16 09:41:36.437089 containerd[1495]: 2024-12-16 09:41:36.411 [INFO][6067] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" HandleID="k8s-pod-network.035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--kmszk-eth0" Dec 16 09:41:36.437089 containerd[1495]: 2024-12-16 09:41:36.412 [INFO][6067] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:41:36.437089 containerd[1495]: 2024-12-16 09:41:36.413 [INFO][6067] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 16 09:41:36.437089 containerd[1495]: 2024-12-16 09:41:36.426 [WARNING][6067] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" HandleID="k8s-pod-network.035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--kmszk-eth0" Dec 16 09:41:36.437089 containerd[1495]: 2024-12-16 09:41:36.426 [INFO][6067] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" HandleID="k8s-pod-network.035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--kmszk-eth0" Dec 16 09:41:36.437089 containerd[1495]: 2024-12-16 09:41:36.428 [INFO][6067] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:41:36.437089 containerd[1495]: 2024-12-16 09:41:36.432 [INFO][6061] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" Dec 16 09:41:36.437089 containerd[1495]: time="2024-12-16T09:41:36.437031206Z" level=info msg="TearDown network for sandbox \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\" successfully" Dec 16 09:41:36.437089 containerd[1495]: time="2024-12-16T09:41:36.437065912Z" level=info msg="StopPodSandbox for \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\" returns successfully" Dec 16 09:41:36.506269 containerd[1495]: time="2024-12-16T09:41:36.506184255Z" level=info msg="RemovePodSandbox for \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\"" Dec 16 09:41:36.506269 containerd[1495]: time="2024-12-16T09:41:36.506274093Z" level=info msg="Forcibly stopping sandbox \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\"" Dec 16 09:41:36.606709 containerd[1495]: 2024-12-16 09:41:36.546 [WARNING][6086] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--kmszk-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a579e978-3399-4d3d-9c17-650bcb6672c6", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 39, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-1bd0c0376a", ContainerID:"986605d458042b9f4f91071298c4398dba6b511d7f59d45e470e2040874dfd7d", Pod:"coredns-6f6b679f8f-kmszk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali39619eaa86f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:41:36.606709 containerd[1495]: 2024-12-16 09:41:36.546 [INFO][6086] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" Dec 16 09:41:36.606709 containerd[1495]: 2024-12-16 09:41:36.546 [INFO][6086] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" iface="eth0" netns="" Dec 16 09:41:36.606709 containerd[1495]: 2024-12-16 09:41:36.546 [INFO][6086] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" Dec 16 09:41:36.606709 containerd[1495]: 2024-12-16 09:41:36.546 [INFO][6086] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" Dec 16 09:41:36.606709 containerd[1495]: 2024-12-16 09:41:36.586 [INFO][6092] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" HandleID="k8s-pod-network.035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--kmszk-eth0" Dec 16 09:41:36.606709 containerd[1495]: 2024-12-16 09:41:36.586 [INFO][6092] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:41:36.606709 containerd[1495]: 2024-12-16 09:41:36.586 [INFO][6092] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 16 09:41:36.606709 containerd[1495]: 2024-12-16 09:41:36.596 [WARNING][6092] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" HandleID="k8s-pod-network.035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--kmszk-eth0" Dec 16 09:41:36.606709 containerd[1495]: 2024-12-16 09:41:36.597 [INFO][6092] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" HandleID="k8s-pod-network.035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--kmszk-eth0" Dec 16 09:41:36.606709 containerd[1495]: 2024-12-16 09:41:36.599 [INFO][6092] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:41:36.606709 containerd[1495]: 2024-12-16 09:41:36.602 [INFO][6086] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7" Dec 16 09:41:36.609334 containerd[1495]: time="2024-12-16T09:41:36.607345831Z" level=info msg="TearDown network for sandbox \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\" successfully" Dec 16 09:41:36.632259 containerd[1495]: time="2024-12-16T09:41:36.632176489Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 16 09:41:36.674533 containerd[1495]: time="2024-12-16T09:41:36.674434830Z" level=info msg="RemovePodSandbox \"035e3a0d7eb34263a8a8d51f49b5bcdc0acbf214ef349e7286e75d2868c6f9e7\" returns successfully" Dec 16 09:41:36.702419 containerd[1495]: time="2024-12-16T09:41:36.701960565Z" level=info msg="StopPodSandbox for \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\"" Dec 16 09:41:36.836247 containerd[1495]: 2024-12-16 09:41:36.762 [WARNING][6110] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--dvxqf-eth0", GenerateName:"calico-apiserver-58b65f65c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"e8eba98c-945c-4206-8249-239323f54417", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58b65f65c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-1bd0c0376a", ContainerID:"172de65c1b61767bb5176fb3f4d2a34549e98fc8c59a68e57bc913d3a0bf3f8d", Pod:"calico-apiserver-58b65f65c5-dvxqf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibaa482b5c3b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:41:36.836247 containerd[1495]: 2024-12-16 09:41:36.762 [INFO][6110] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" Dec 16 09:41:36.836247 containerd[1495]: 2024-12-16 09:41:36.762 [INFO][6110] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" iface="eth0" netns="" Dec 16 09:41:36.836247 containerd[1495]: 2024-12-16 09:41:36.762 [INFO][6110] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" Dec 16 09:41:36.836247 containerd[1495]: 2024-12-16 09:41:36.762 [INFO][6110] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" Dec 16 09:41:36.836247 containerd[1495]: 2024-12-16 09:41:36.799 [INFO][6116] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" HandleID="k8s-pod-network.e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--dvxqf-eth0" Dec 16 09:41:36.836247 containerd[1495]: 2024-12-16 09:41:36.799 [INFO][6116] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:41:36.836247 containerd[1495]: 2024-12-16 09:41:36.799 [INFO][6116] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:41:36.836247 containerd[1495]: 2024-12-16 09:41:36.819 [WARNING][6116] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" HandleID="k8s-pod-network.e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--dvxqf-eth0" Dec 16 09:41:36.836247 containerd[1495]: 2024-12-16 09:41:36.819 [INFO][6116] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" HandleID="k8s-pod-network.e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--dvxqf-eth0" Dec 16 09:41:36.836247 containerd[1495]: 2024-12-16 09:41:36.823 [INFO][6116] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:41:36.836247 containerd[1495]: 2024-12-16 09:41:36.829 [INFO][6110] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" Dec 16 09:41:36.838064 containerd[1495]: time="2024-12-16T09:41:36.836447296Z" level=info msg="TearDown network for sandbox \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\" successfully" Dec 16 09:41:36.838064 containerd[1495]: time="2024-12-16T09:41:36.836480668Z" level=info msg="StopPodSandbox for \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\" returns successfully" Dec 16 09:41:36.840489 containerd[1495]: time="2024-12-16T09:41:36.839868378Z" level=info msg="RemovePodSandbox for \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\"" Dec 16 09:41:36.840489 containerd[1495]: time="2024-12-16T09:41:36.839951333Z" level=info msg="Forcibly stopping sandbox \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\"" Dec 16 09:41:36.949321 containerd[1495]: 2024-12-16 09:41:36.903 [WARNING][6141] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--dvxqf-eth0", GenerateName:"calico-apiserver-58b65f65c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"e8eba98c-945c-4206-8249-239323f54417", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58b65f65c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-1bd0c0376a", ContainerID:"172de65c1b61767bb5176fb3f4d2a34549e98fc8c59a68e57bc913d3a0bf3f8d", Pod:"calico-apiserver-58b65f65c5-dvxqf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibaa482b5c3b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:41:36.949321 containerd[1495]: 2024-12-16 09:41:36.903 [INFO][6141] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" Dec 16 09:41:36.949321 containerd[1495]: 2024-12-16 09:41:36.903 [INFO][6141] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" iface="eth0" netns="" Dec 16 09:41:36.949321 containerd[1495]: 2024-12-16 09:41:36.904 [INFO][6141] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" Dec 16 09:41:36.949321 containerd[1495]: 2024-12-16 09:41:36.904 [INFO][6141] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" Dec 16 09:41:36.949321 containerd[1495]: 2024-12-16 09:41:36.932 [INFO][6147] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" HandleID="k8s-pod-network.e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--dvxqf-eth0" Dec 16 09:41:36.949321 containerd[1495]: 2024-12-16 09:41:36.932 [INFO][6147] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:41:36.949321 containerd[1495]: 2024-12-16 09:41:36.932 [INFO][6147] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:41:36.949321 containerd[1495]: 2024-12-16 09:41:36.939 [WARNING][6147] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" HandleID="k8s-pod-network.e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--dvxqf-eth0" Dec 16 09:41:36.949321 containerd[1495]: 2024-12-16 09:41:36.939 [INFO][6147] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" HandleID="k8s-pod-network.e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--dvxqf-eth0" Dec 16 09:41:36.949321 containerd[1495]: 2024-12-16 09:41:36.941 [INFO][6147] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:41:36.949321 containerd[1495]: 2024-12-16 09:41:36.945 [INFO][6141] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8" Dec 16 09:41:36.949321 containerd[1495]: time="2024-12-16T09:41:36.949315675Z" level=info msg="TearDown network for sandbox \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\" successfully" Dec 16 09:41:36.953914 containerd[1495]: time="2024-12-16T09:41:36.953712086Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 16 09:41:36.953914 containerd[1495]: time="2024-12-16T09:41:36.953813887Z" level=info msg="RemovePodSandbox \"e104e82724ccd02d4d1c2805d4253c075f853b467b524ddf126bea46b74abcd8\" returns successfully" Dec 16 09:41:36.954515 containerd[1495]: time="2024-12-16T09:41:36.954486932Z" level=info msg="StopPodSandbox for \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\"" Dec 16 09:41:37.036803 containerd[1495]: 2024-12-16 09:41:36.999 [WARNING][6165] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-calico--kube--controllers--74864695c7--klzq2-eth0" Dec 16 09:41:37.036803 containerd[1495]: 2024-12-16 09:41:36.999 [INFO][6165] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" Dec 16 09:41:37.036803 containerd[1495]: 2024-12-16 09:41:36.999 [INFO][6165] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" iface="eth0" netns="" Dec 16 09:41:37.036803 containerd[1495]: 2024-12-16 09:41:36.999 [INFO][6165] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" Dec 16 09:41:37.036803 containerd[1495]: 2024-12-16 09:41:36.999 [INFO][6165] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" Dec 16 09:41:37.036803 containerd[1495]: 2024-12-16 09:41:37.021 [INFO][6171] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" HandleID="k8s-pod-network.52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--kube--controllers--74864695c7--klzq2-eth0" Dec 16 09:41:37.036803 containerd[1495]: 2024-12-16 09:41:37.021 [INFO][6171] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:41:37.036803 containerd[1495]: 2024-12-16 09:41:37.021 [INFO][6171] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:41:37.036803 containerd[1495]: 2024-12-16 09:41:37.028 [WARNING][6171] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" HandleID="k8s-pod-network.52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--kube--controllers--74864695c7--klzq2-eth0" Dec 16 09:41:37.036803 containerd[1495]: 2024-12-16 09:41:37.028 [INFO][6171] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" HandleID="k8s-pod-network.52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--kube--controllers--74864695c7--klzq2-eth0" Dec 16 09:41:37.036803 containerd[1495]: 2024-12-16 09:41:37.030 [INFO][6171] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:41:37.036803 containerd[1495]: 2024-12-16 09:41:37.033 [INFO][6165] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" Dec 16 09:41:37.038662 containerd[1495]: time="2024-12-16T09:41:37.036826228Z" level=info msg="TearDown network for sandbox \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\" successfully" Dec 16 09:41:37.038662 containerd[1495]: time="2024-12-16T09:41:37.036859629Z" level=info msg="StopPodSandbox for \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\" returns successfully" Dec 16 09:41:37.038662 containerd[1495]: time="2024-12-16T09:41:37.037392164Z" level=info msg="RemovePodSandbox for \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\"" Dec 16 09:41:37.038662 containerd[1495]: time="2024-12-16T09:41:37.037423753Z" level=info msg="Forcibly stopping sandbox \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\"" Dec 16 09:41:37.129576 containerd[1495]: 2024-12-16 09:41:37.078 [WARNING][6189] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" WorkloadEndpoint="ci--4081--2--1--4--1bd0c0376a-k8s-calico--kube--controllers--74864695c7--klzq2-eth0" Dec 16 09:41:37.129576 containerd[1495]: 2024-12-16 09:41:37.078 [INFO][6189] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" Dec 16 09:41:37.129576 containerd[1495]: 2024-12-16 09:41:37.078 [INFO][6189] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" iface="eth0" netns="" Dec 16 09:41:37.129576 containerd[1495]: 2024-12-16 09:41:37.078 [INFO][6189] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" Dec 16 09:41:37.129576 containerd[1495]: 2024-12-16 09:41:37.078 [INFO][6189] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" Dec 16 09:41:37.129576 containerd[1495]: 2024-12-16 09:41:37.107 [INFO][6196] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" HandleID="k8s-pod-network.52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--kube--controllers--74864695c7--klzq2-eth0" Dec 16 09:41:37.129576 containerd[1495]: 2024-12-16 09:41:37.108 [INFO][6196] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:41:37.129576 containerd[1495]: 2024-12-16 09:41:37.108 [INFO][6196] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:41:37.129576 containerd[1495]: 2024-12-16 09:41:37.118 [WARNING][6196] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" HandleID="k8s-pod-network.52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--kube--controllers--74864695c7--klzq2-eth0" Dec 16 09:41:37.129576 containerd[1495]: 2024-12-16 09:41:37.118 [INFO][6196] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" HandleID="k8s-pod-network.52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--kube--controllers--74864695c7--klzq2-eth0" Dec 16 09:41:37.129576 containerd[1495]: 2024-12-16 09:41:37.121 [INFO][6196] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:41:37.129576 containerd[1495]: 2024-12-16 09:41:37.125 [INFO][6189] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab" Dec 16 09:41:37.132621 containerd[1495]: time="2024-12-16T09:41:37.129632388Z" level=info msg="TearDown network for sandbox \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\" successfully" Dec 16 09:41:37.135642 containerd[1495]: time="2024-12-16T09:41:37.135551307Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 16 09:41:37.135642 containerd[1495]: time="2024-12-16T09:41:37.135633291Z" level=info msg="RemovePodSandbox \"52f2e4a0400682b907a25d3da710b219871566d86eeae88895462f6233d2eaab\" returns successfully" Dec 16 09:41:37.136458 containerd[1495]: time="2024-12-16T09:41:37.136221668Z" level=info msg="StopPodSandbox for \"fefada17c75ea827ebec98683116e7187ff5d359b06b8fc7d9d155263ca7cf81\"" Dec 16 09:41:37.136458 containerd[1495]: time="2024-12-16T09:41:37.136309744Z" level=info msg="TearDown network for sandbox \"fefada17c75ea827ebec98683116e7187ff5d359b06b8fc7d9d155263ca7cf81\" successfully" Dec 16 09:41:37.136458 containerd[1495]: time="2024-12-16T09:41:37.136326054Z" level=info msg="StopPodSandbox for \"fefada17c75ea827ebec98683116e7187ff5d359b06b8fc7d9d155263ca7cf81\" returns successfully" Dec 16 09:41:37.137400 containerd[1495]: time="2024-12-16T09:41:37.136750507Z" level=info msg="RemovePodSandbox for \"fefada17c75ea827ebec98683116e7187ff5d359b06b8fc7d9d155263ca7cf81\"" Dec 16 09:41:37.137400 containerd[1495]: time="2024-12-16T09:41:37.136773018Z" level=info msg="Forcibly stopping sandbox \"fefada17c75ea827ebec98683116e7187ff5d359b06b8fc7d9d155263ca7cf81\"" Dec 16 09:41:37.137400 containerd[1495]: time="2024-12-16T09:41:37.136831498Z" level=info msg="TearDown network for sandbox \"fefada17c75ea827ebec98683116e7187ff5d359b06b8fc7d9d155263ca7cf81\" successfully" Dec 16 09:41:37.143258 containerd[1495]: time="2024-12-16T09:41:37.143191630Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fefada17c75ea827ebec98683116e7187ff5d359b06b8fc7d9d155263ca7cf81\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 16 09:41:37.143378 containerd[1495]: time="2024-12-16T09:41:37.143273623Z" level=info msg="RemovePodSandbox \"fefada17c75ea827ebec98683116e7187ff5d359b06b8fc7d9d155263ca7cf81\" returns successfully" Dec 16 09:41:37.143798 containerd[1495]: time="2024-12-16T09:41:37.143768867Z" level=info msg="StopPodSandbox for \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\"" Dec 16 09:41:37.251093 containerd[1495]: 2024-12-16 09:41:37.188 [WARNING][6214] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--1bd0c0376a-k8s-csi--node--driver--s6lhq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b184f5aa-f13a-4907-82b2-11f9a166985b", ResourceVersion:"1163", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-1bd0c0376a", ContainerID:"5e597f68ef5cf9bc677a24a59c137bd39bb20ab245510cc793989a8dccd73836", Pod:"csi-node-driver-s6lhq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.121.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7724454515d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:41:37.251093 containerd[1495]: 2024-12-16 09:41:37.188 [INFO][6214] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" Dec 16 09:41:37.251093 containerd[1495]: 2024-12-16 09:41:37.188 [INFO][6214] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" iface="eth0" netns="" Dec 16 09:41:37.251093 containerd[1495]: 2024-12-16 09:41:37.188 [INFO][6214] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" Dec 16 09:41:37.251093 containerd[1495]: 2024-12-16 09:41:37.188 [INFO][6214] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" Dec 16 09:41:37.251093 containerd[1495]: 2024-12-16 09:41:37.228 [INFO][6220] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" HandleID="k8s-pod-network.f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-csi--node--driver--s6lhq-eth0" Dec 16 09:41:37.251093 containerd[1495]: 2024-12-16 09:41:37.229 [INFO][6220] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:41:37.251093 containerd[1495]: 2024-12-16 09:41:37.229 [INFO][6220] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:41:37.251093 containerd[1495]: 2024-12-16 09:41:37.242 [WARNING][6220] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" HandleID="k8s-pod-network.f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-csi--node--driver--s6lhq-eth0" Dec 16 09:41:37.251093 containerd[1495]: 2024-12-16 09:41:37.243 [INFO][6220] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" HandleID="k8s-pod-network.f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-csi--node--driver--s6lhq-eth0" Dec 16 09:41:37.251093 containerd[1495]: 2024-12-16 09:41:37.244 [INFO][6220] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:41:37.251093 containerd[1495]: 2024-12-16 09:41:37.247 [INFO][6214] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" Dec 16 09:41:37.251093 containerd[1495]: time="2024-12-16T09:41:37.250860325Z" level=info msg="TearDown network for sandbox \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\" successfully" Dec 16 09:41:37.251093 containerd[1495]: time="2024-12-16T09:41:37.250887426Z" level=info msg="StopPodSandbox for \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\" returns successfully" Dec 16 09:41:37.254378 containerd[1495]: time="2024-12-16T09:41:37.251424368Z" level=info msg="RemovePodSandbox for \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\"" Dec 16 09:41:37.254378 containerd[1495]: time="2024-12-16T09:41:37.251453432Z" level=info msg="Forcibly stopping sandbox \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\"" Dec 16 09:41:37.344790 containerd[1495]: 2024-12-16 09:41:37.295 [WARNING][6238] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--1bd0c0376a-k8s-csi--node--driver--s6lhq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b184f5aa-f13a-4907-82b2-11f9a166985b", ResourceVersion:"1163", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-1bd0c0376a", ContainerID:"5e597f68ef5cf9bc677a24a59c137bd39bb20ab245510cc793989a8dccd73836", Pod:"csi-node-driver-s6lhq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.121.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7724454515d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:41:37.344790 containerd[1495]: 2024-12-16 09:41:37.296 [INFO][6238] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" Dec 16 09:41:37.344790 containerd[1495]: 2024-12-16 09:41:37.296 [INFO][6238] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" iface="eth0" netns="" Dec 16 09:41:37.344790 containerd[1495]: 2024-12-16 09:41:37.296 [INFO][6238] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" Dec 16 09:41:37.344790 containerd[1495]: 2024-12-16 09:41:37.296 [INFO][6238] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" Dec 16 09:41:37.344790 containerd[1495]: 2024-12-16 09:41:37.321 [INFO][6244] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" HandleID="k8s-pod-network.f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-csi--node--driver--s6lhq-eth0" Dec 16 09:41:37.344790 containerd[1495]: 2024-12-16 09:41:37.321 [INFO][6244] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:41:37.344790 containerd[1495]: 2024-12-16 09:41:37.321 [INFO][6244] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:41:37.344790 containerd[1495]: 2024-12-16 09:41:37.329 [WARNING][6244] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" HandleID="k8s-pod-network.f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-csi--node--driver--s6lhq-eth0" Dec 16 09:41:37.344790 containerd[1495]: 2024-12-16 09:41:37.330 [INFO][6244] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" HandleID="k8s-pod-network.f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-csi--node--driver--s6lhq-eth0" Dec 16 09:41:37.344790 containerd[1495]: 2024-12-16 09:41:37.332 [INFO][6244] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:41:37.344790 containerd[1495]: 2024-12-16 09:41:37.338 [INFO][6238] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664" Dec 16 09:41:37.344790 containerd[1495]: time="2024-12-16T09:41:37.343452427Z" level=info msg="TearDown network for sandbox \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\" successfully" Dec 16 09:41:37.348767 containerd[1495]: time="2024-12-16T09:41:37.348689253Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 16 09:41:37.348866 containerd[1495]: time="2024-12-16T09:41:37.348789419Z" level=info msg="RemovePodSandbox \"f3f672978cccb89dcf4e890a49ad8409d95617a67d1b1843422728eac3660664\" returns successfully" Dec 16 09:41:37.349496 containerd[1495]: time="2024-12-16T09:41:37.349441706Z" level=info msg="StopPodSandbox for \"4daea6f0886b720c97cf2fc782f7331ac86b8609f203dad88da7cf848b1c46bf\"" Dec 16 09:41:37.349584 containerd[1495]: time="2024-12-16T09:41:37.349545430Z" level=info msg="TearDown network for sandbox \"4daea6f0886b720c97cf2fc782f7331ac86b8609f203dad88da7cf848b1c46bf\" successfully" Dec 16 09:41:37.349584 containerd[1495]: time="2024-12-16T09:41:37.349562142Z" level=info msg="StopPodSandbox for \"4daea6f0886b720c97cf2fc782f7331ac86b8609f203dad88da7cf848b1c46bf\" returns successfully" Dec 16 09:41:37.351001 containerd[1495]: time="2024-12-16T09:41:37.350009156Z" level=info msg="RemovePodSandbox for \"4daea6f0886b720c97cf2fc782f7331ac86b8609f203dad88da7cf848b1c46bf\"" Dec 16 09:41:37.351001 containerd[1495]: time="2024-12-16T09:41:37.350051796Z" level=info msg="Forcibly stopping sandbox \"4daea6f0886b720c97cf2fc782f7331ac86b8609f203dad88da7cf848b1c46bf\"" Dec 16 09:41:37.351001 containerd[1495]: time="2024-12-16T09:41:37.350149147Z" level=info msg="TearDown network for sandbox \"4daea6f0886b720c97cf2fc782f7331ac86b8609f203dad88da7cf848b1c46bf\" successfully" Dec 16 09:41:37.357343 containerd[1495]: time="2024-12-16T09:41:37.357270751Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4daea6f0886b720c97cf2fc782f7331ac86b8609f203dad88da7cf848b1c46bf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 16 09:41:37.357516 containerd[1495]: time="2024-12-16T09:41:37.357352254Z" level=info msg="RemovePodSandbox \"4daea6f0886b720c97cf2fc782f7331ac86b8609f203dad88da7cf848b1c46bf\" returns successfully" Dec 16 09:41:37.358047 containerd[1495]: time="2024-12-16T09:41:37.358000443Z" level=info msg="StopPodSandbox for \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\"" Dec 16 09:41:37.454382 containerd[1495]: 2024-12-16 09:41:37.415 [WARNING][6262] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--vspcs-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"2b1e1fb6-74af-4597-8a80-d045a1736cc2", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 39, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-1bd0c0376a", ContainerID:"ae6e22275410005d85ca6e05907e497210ef8001435c130b3ecb07136f56ef62", Pod:"coredns-6f6b679f8f-vspcs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif65b6e7a71f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:41:37.454382 containerd[1495]: 2024-12-16 09:41:37.415 [INFO][6262] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" Dec 16 09:41:37.454382 containerd[1495]: 2024-12-16 09:41:37.415 [INFO][6262] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" iface="eth0" netns="" Dec 16 09:41:37.454382 containerd[1495]: 2024-12-16 09:41:37.415 [INFO][6262] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" Dec 16 09:41:37.454382 containerd[1495]: 2024-12-16 09:41:37.415 [INFO][6262] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" Dec 16 09:41:37.454382 containerd[1495]: 2024-12-16 09:41:37.438 [INFO][6268] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" HandleID="k8s-pod-network.dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--vspcs-eth0" Dec 16 09:41:37.454382 containerd[1495]: 2024-12-16 09:41:37.438 [INFO][6268] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:41:37.454382 containerd[1495]: 2024-12-16 09:41:37.438 [INFO][6268] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:41:37.454382 containerd[1495]: 2024-12-16 09:41:37.444 [WARNING][6268] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" HandleID="k8s-pod-network.dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--vspcs-eth0" Dec 16 09:41:37.454382 containerd[1495]: 2024-12-16 09:41:37.444 [INFO][6268] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" HandleID="k8s-pod-network.dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--vspcs-eth0" Dec 16 09:41:37.454382 containerd[1495]: 2024-12-16 09:41:37.447 [INFO][6268] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:41:37.454382 containerd[1495]: 2024-12-16 09:41:37.450 [INFO][6262] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" Dec 16 09:41:37.455225 containerd[1495]: time="2024-12-16T09:41:37.454410141Z" level=info msg="TearDown network for sandbox \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\" successfully" Dec 16 09:41:37.455225 containerd[1495]: time="2024-12-16T09:41:37.454444426Z" level=info msg="StopPodSandbox for \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\" returns successfully" Dec 16 09:41:37.455225 containerd[1495]: time="2024-12-16T09:41:37.455019850Z" level=info msg="RemovePodSandbox for \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\"" Dec 16 09:41:37.455225 containerd[1495]: time="2024-12-16T09:41:37.455052010Z" level=info msg="Forcibly stopping sandbox \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\"" Dec 16 09:41:37.547022 containerd[1495]: 2024-12-16 09:41:37.499 [WARNING][6286] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--vspcs-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"2b1e1fb6-74af-4597-8a80-d045a1736cc2", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 39, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-1bd0c0376a", ContainerID:"ae6e22275410005d85ca6e05907e497210ef8001435c130b3ecb07136f56ef62", Pod:"coredns-6f6b679f8f-vspcs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif65b6e7a71f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:41:37.547022 containerd[1495]: 2024-12-16 09:41:37.499 [INFO][6286] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" Dec 16 09:41:37.547022 containerd[1495]: 2024-12-16 09:41:37.499 [INFO][6286] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" iface="eth0" netns="" Dec 16 09:41:37.547022 containerd[1495]: 2024-12-16 09:41:37.499 [INFO][6286] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" Dec 16 09:41:37.547022 containerd[1495]: 2024-12-16 09:41:37.499 [INFO][6286] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" Dec 16 09:41:37.547022 containerd[1495]: 2024-12-16 09:41:37.526 [INFO][6292] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" HandleID="k8s-pod-network.dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--vspcs-eth0" Dec 16 09:41:37.547022 containerd[1495]: 2024-12-16 09:41:37.526 [INFO][6292] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:41:37.547022 containerd[1495]: 2024-12-16 09:41:37.526 [INFO][6292] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 16 09:41:37.547022 containerd[1495]: 2024-12-16 09:41:37.533 [WARNING][6292] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" HandleID="k8s-pod-network.dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--vspcs-eth0" Dec 16 09:41:37.547022 containerd[1495]: 2024-12-16 09:41:37.533 [INFO][6292] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" HandleID="k8s-pod-network.dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-coredns--6f6b679f8f--vspcs-eth0" Dec 16 09:41:37.547022 containerd[1495]: 2024-12-16 09:41:37.535 [INFO][6292] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:41:37.547022 containerd[1495]: 2024-12-16 09:41:37.542 [INFO][6286] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3" Dec 16 09:41:37.547022 containerd[1495]: time="2024-12-16T09:41:37.546992976Z" level=info msg="TearDown network for sandbox \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\" successfully" Dec 16 09:41:37.551747 containerd[1495]: time="2024-12-16T09:41:37.551647223Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 16 09:41:37.551747 containerd[1495]: time="2024-12-16T09:41:37.551722885Z" level=info msg="RemovePodSandbox \"dcc7fa9270c77d22053f0206b698c00086b7e4b6bc4bb87b3686954561d728e3\" returns successfully" Dec 16 09:41:37.552540 containerd[1495]: time="2024-12-16T09:41:37.552191129Z" level=info msg="StopPodSandbox for \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\"" Dec 16 09:41:37.645823 containerd[1495]: 2024-12-16 09:41:37.590 [WARNING][6311] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--7tp4b-eth0", GenerateName:"calico-apiserver-58b65f65c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"ccf88f28-2dde-42dd-bdda-69127b53bf8a", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58b65f65c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-1bd0c0376a", ContainerID:"54df8b24e31e5ba438d989fa3e7a0932289d98f3875bf1682f18d57b67ef414e", Pod:"calico-apiserver-58b65f65c5-7tp4b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali19f94bd291c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:41:37.645823 containerd[1495]: 2024-12-16 09:41:37.591 [INFO][6311] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" Dec 16 09:41:37.645823 containerd[1495]: 2024-12-16 09:41:37.591 [INFO][6311] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" iface="eth0" netns="" Dec 16 09:41:37.645823 containerd[1495]: 2024-12-16 09:41:37.591 [INFO][6311] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" Dec 16 09:41:37.645823 containerd[1495]: 2024-12-16 09:41:37.591 [INFO][6311] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" Dec 16 09:41:37.645823 containerd[1495]: 2024-12-16 09:41:37.621 [INFO][6317] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" HandleID="k8s-pod-network.4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--7tp4b-eth0" Dec 16 09:41:37.645823 containerd[1495]: 2024-12-16 09:41:37.621 [INFO][6317] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:41:37.645823 containerd[1495]: 2024-12-16 09:41:37.621 [INFO][6317] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:41:37.645823 containerd[1495]: 2024-12-16 09:41:37.629 [WARNING][6317] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" HandleID="k8s-pod-network.4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--7tp4b-eth0" Dec 16 09:41:37.645823 containerd[1495]: 2024-12-16 09:41:37.630 [INFO][6317] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" HandleID="k8s-pod-network.4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--7tp4b-eth0" Dec 16 09:41:37.645823 containerd[1495]: 2024-12-16 09:41:37.633 [INFO][6317] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:41:37.645823 containerd[1495]: 2024-12-16 09:41:37.638 [INFO][6311] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" Dec 16 09:41:37.645823 containerd[1495]: time="2024-12-16T09:41:37.645597952Z" level=info msg="TearDown network for sandbox \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\" successfully" Dec 16 09:41:37.645823 containerd[1495]: time="2024-12-16T09:41:37.645622858Z" level=info msg="StopPodSandbox for \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\" returns successfully" Dec 16 09:41:37.646944 containerd[1495]: time="2024-12-16T09:41:37.646882299Z" level=info msg="RemovePodSandbox for \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\"" Dec 16 09:41:37.647001 containerd[1495]: time="2024-12-16T09:41:37.646950767Z" level=info msg="Forcibly stopping sandbox \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\"" Dec 16 09:41:37.744066 containerd[1495]: 2024-12-16 09:41:37.690 [WARNING][6335] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--7tp4b-eth0", GenerateName:"calico-apiserver-58b65f65c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"ccf88f28-2dde-42dd-bdda-69127b53bf8a", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58b65f65c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-4-1bd0c0376a", ContainerID:"54df8b24e31e5ba438d989fa3e7a0932289d98f3875bf1682f18d57b67ef414e", Pod:"calico-apiserver-58b65f65c5-7tp4b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali19f94bd291c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:41:37.744066 containerd[1495]: 2024-12-16 09:41:37.691 [INFO][6335] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" Dec 16 09:41:37.744066 containerd[1495]: 2024-12-16 09:41:37.691 [INFO][6335] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" iface="eth0" netns="" Dec 16 09:41:37.744066 containerd[1495]: 2024-12-16 09:41:37.691 [INFO][6335] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" Dec 16 09:41:37.744066 containerd[1495]: 2024-12-16 09:41:37.691 [INFO][6335] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" Dec 16 09:41:37.744066 containerd[1495]: 2024-12-16 09:41:37.728 [INFO][6341] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" HandleID="k8s-pod-network.4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--7tp4b-eth0" Dec 16 09:41:37.744066 containerd[1495]: 2024-12-16 09:41:37.728 [INFO][6341] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:41:37.744066 containerd[1495]: 2024-12-16 09:41:37.728 [INFO][6341] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:41:37.744066 containerd[1495]: 2024-12-16 09:41:37.734 [WARNING][6341] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" HandleID="k8s-pod-network.4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--7tp4b-eth0" Dec 16 09:41:37.744066 containerd[1495]: 2024-12-16 09:41:37.734 [INFO][6341] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" HandleID="k8s-pod-network.4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" Workload="ci--4081--2--1--4--1bd0c0376a-k8s-calico--apiserver--58b65f65c5--7tp4b-eth0" Dec 16 09:41:37.744066 containerd[1495]: 2024-12-16 09:41:37.736 [INFO][6341] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:41:37.744066 containerd[1495]: 2024-12-16 09:41:37.740 [INFO][6335] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436" Dec 16 09:41:37.744066 containerd[1495]: time="2024-12-16T09:41:37.743985682Z" level=info msg="TearDown network for sandbox \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\" successfully" Dec 16 09:41:37.748947 containerd[1495]: time="2024-12-16T09:41:37.748765295Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 16 09:41:37.748947 containerd[1495]: time="2024-12-16T09:41:37.748833632Z" level=info msg="RemovePodSandbox \"4ed4256908a3dba69d716a2e56f361e29070c57799a8a6cd81408fc41f62f436\" returns successfully" Dec 16 09:42:57.286676 update_engine[1476]: I20241216 09:42:57.286559 1476 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 16 09:42:57.286676 update_engine[1476]: I20241216 09:42:57.286646 1476 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 16 09:42:57.317941 update_engine[1476]: I20241216 09:42:57.289091 1476 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 16 09:42:57.317941 update_engine[1476]: I20241216 09:42:57.292091 1476 omaha_request_params.cc:62] Current group set to stable Dec 16 09:42:57.317941 update_engine[1476]: I20241216 09:42:57.292304 1476 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 16 09:42:57.317941 update_engine[1476]: I20241216 09:42:57.292320 1476 update_attempter.cc:643] Scheduling an action processor start. 
Dec 16 09:42:57.317941 update_engine[1476]: I20241216 09:42:57.292344 1476 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 16 09:42:57.317941 update_engine[1476]: I20241216 09:42:57.292403 1476 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 16 09:42:57.317941 update_engine[1476]: I20241216 09:42:57.292505 1476 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 16 09:42:57.317941 update_engine[1476]: I20241216 09:42:57.292517 1476 omaha_request_action.cc:272] Request: Dec 16 09:42:57.317941 update_engine[1476]: Dec 16 09:42:57.317941 update_engine[1476]: Dec 16 09:42:57.317941 update_engine[1476]: Dec 16 09:42:57.317941 update_engine[1476]: Dec 16 09:42:57.317941 update_engine[1476]: Dec 16 09:42:57.317941 update_engine[1476]: Dec 16 09:42:57.317941 update_engine[1476]: Dec 16 09:42:57.317941 update_engine[1476]: Dec 16 09:42:57.317941 update_engine[1476]: I20241216 09:42:57.292527 1476 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 09:42:57.317941 update_engine[1476]: I20241216 09:42:57.308579 1476 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 09:42:57.317941 update_engine[1476]: I20241216 09:42:57.308987 1476 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 16 09:42:57.317941 update_engine[1476]: E20241216 09:42:57.312938 1476 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 16 09:42:57.317941 update_engine[1476]: I20241216 09:42:57.313012 1476 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 16 09:42:57.320212 locksmithd[1512]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 16 09:43:06.139664 systemd[1]: run-containerd-runc-k8s.io-ad4a13d7cbbd678a89a584aedbfa2cecdb95fc1878a8e408d72c1ac68cd68495-runc.Z489U3.mount: Deactivated successfully. Dec 16 09:43:07.156971 update_engine[1476]: I20241216 09:43:07.156818 1476 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 09:43:07.157917 update_engine[1476]: I20241216 09:43:07.157356 1476 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 09:43:07.158626 update_engine[1476]: I20241216 09:43:07.157897 1476 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 16 09:43:07.158781 update_engine[1476]: E20241216 09:43:07.158687 1476 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 16 09:43:07.159272 update_engine[1476]: I20241216 09:43:07.158830 1476 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 16 09:43:17.162056 update_engine[1476]: I20241216 09:43:17.161928 1476 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 09:43:17.162974 update_engine[1476]: I20241216 09:43:17.162226 1476 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 09:43:17.162974 update_engine[1476]: I20241216 09:43:17.162513 1476 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 16 09:43:17.163999 update_engine[1476]: E20241216 09:43:17.163208 1476 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 16 09:43:17.163999 update_engine[1476]: I20241216 09:43:17.163250 1476 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 16 09:43:27.161383 update_engine[1476]: I20241216 09:43:27.161180 1476 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 09:43:27.162231 update_engine[1476]: I20241216 09:43:27.161581 1476 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 09:43:27.162231 update_engine[1476]: I20241216 09:43:27.161974 1476 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 16 09:43:27.162861 update_engine[1476]: E20241216 09:43:27.162798 1476 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 16 09:43:27.162959 update_engine[1476]: I20241216 09:43:27.162909 1476 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 16 09:43:27.162959 update_engine[1476]: I20241216 09:43:27.162928 1476 omaha_request_action.cc:617] Omaha request response: Dec 16 09:43:27.163189 update_engine[1476]: E20241216 09:43:27.163038 1476 omaha_request_action.cc:636] Omaha request network transfer failed. Dec 16 09:43:27.163189 update_engine[1476]: I20241216 09:43:27.163066 1476 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Dec 16 09:43:27.163189 update_engine[1476]: I20241216 09:43:27.163079 1476 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 16 09:43:27.163189 update_engine[1476]: I20241216 09:43:27.163089 1476 update_attempter.cc:306] Processing Done. Dec 16 09:43:27.166571 update_engine[1476]: E20241216 09:43:27.166378 1476 update_attempter.cc:619] Update failed. Dec 16 09:43:27.166571 update_engine[1476]: I20241216 09:43:27.166442 1476 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Dec 16 09:43:27.166571 update_engine[1476]: I20241216 09:43:27.166458 1476 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Dec 16 09:43:27.166571 update_engine[1476]: I20241216 09:43:27.166473 1476 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Dec 16 09:43:27.167465 update_engine[1476]: I20241216 09:43:27.166615 1476 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 16 09:43:27.167465 update_engine[1476]: I20241216 09:43:27.166656 1476 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 16 09:43:27.167465 update_engine[1476]: I20241216 09:43:27.166671 1476 omaha_request_action.cc:272] Request: Dec 16 09:43:27.167465 update_engine[1476]: Dec 16 09:43:27.167465 update_engine[1476]: Dec 16 09:43:27.167465 update_engine[1476]: Dec 16 09:43:27.167465 update_engine[1476]: Dec 16 09:43:27.167465 update_engine[1476]: Dec 16 09:43:27.167465 update_engine[1476]: Dec 16 09:43:27.167465 update_engine[1476]: I20241216 09:43:27.166687 1476 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 09:43:27.167465 update_engine[1476]: I20241216 09:43:27.167124 1476 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 09:43:27.168691 update_engine[1476]: I20241216 09:43:27.167492 1476 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
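The update_engine failures above follow from the Omaha endpoint being the literal string "disabled" ("Posting an Omaha request to disabled"); on Flatcar that is the usual sign that automatic updates were switched off via SERVER=disabled in /etc/flatcar/update.conf, so DNS for the host "disabled" can never resolve and every check fails by design. The client retries the transfer a few times roughly ten seconds apart, then gives up, reports the error event, and schedules the next check. A rough Go sketch of that control flow (illustrative only; update_engine itself is C++):

    package main

    import (
        "errors"
        "log"
        "time"
    )

    // checkOnce models one Omaha request: an initial transfer plus a few
    // retries spaced out as in the log, then a failure report and a
    // rescheduled check.
    func checkOnce(fetch func() error, schedule func(time.Duration)) {
        const retries = 3 // "No HTTP response, retry 1..3"
        err := fetch()
        for i := 1; err != nil && i <= retries; i++ {
            log.Printf("No HTTP response, retry %d", i)
            time.Sleep(10 * time.Second) // attempts land ~10 s apart above
            err = fetch()
        }
        if err != nil {
            log.Printf("Omaha request network transfer failed.")
            schedule(41*time.Minute + 6*time.Second) // "Next update check in 41m6s"
        }
    }

    func main() {
        checkOnce(
            func() error { return errors.New("Could not resolve host: disabled") },
            func(d time.Duration) { log.Printf("next update check in %s", d) },
        )
    }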
Dec 16 09:43:27.168691 update_engine[1476]: E20241216 09:43:27.168278 1476 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 16 09:43:27.168691 update_engine[1476]: I20241216 09:43:27.168380 1476 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 16 09:43:27.168691 update_engine[1476]: I20241216 09:43:27.168408 1476 omaha_request_action.cc:617] Omaha request response: Dec 16 09:43:27.168691 update_engine[1476]: I20241216 09:43:27.168428 1476 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 16 09:43:27.168691 update_engine[1476]: I20241216 09:43:27.168445 1476 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 16 09:43:27.168691 update_engine[1476]: I20241216 09:43:27.168463 1476 update_attempter.cc:306] Processing Done. Dec 16 09:43:27.168691 update_engine[1476]: I20241216 09:43:27.168483 1476 update_attempter.cc:310] Error event sent. Dec 16 09:43:27.168691 update_engine[1476]: I20241216 09:43:27.168503 1476 update_check_scheduler.cc:74] Next update check in 41m6s Dec 16 09:43:27.169553 locksmithd[1512]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Dec 16 09:43:27.169553 locksmithd[1512]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Dec 16 09:43:53.464161 systemd[1]: Started sshd@8-5.75.242.71:22-45.148.10.203:45210.service - OpenSSH per-connection server daemon (45.148.10.203:45210). Dec 16 09:43:53.605353 sshd[6630]: Connection closed by authenticating user root 45.148.10.203 port 45210 [preauth] Dec 16 09:43:53.609234 systemd[1]: sshd@8-5.75.242.71:22-45.148.10.203:45210.service: Deactivated successfully. Dec 16 09:43:57.636453 systemd[1]: Started sshd@9-5.75.242.71:22-45.148.10.203:45898.service - OpenSSH per-connection server daemon (45.148.10.203:45898). Dec 16 09:43:57.738850 sshd[6636]: Connection closed by authenticating user root 45.148.10.203 port 45898 [preauth] Dec 16 09:43:57.744428 systemd[1]: sshd@9-5.75.242.71:22-45.148.10.203:45898.service: Deactivated successfully. Dec 16 09:44:01.774268 systemd[1]: Started sshd@10-5.75.242.71:22-45.148.10.203:45904.service - OpenSSH per-connection server daemon (45.148.10.203:45904). Dec 16 09:44:01.876025 sshd[6641]: Connection closed by authenticating user root 45.148.10.203 port 45904 [preauth] Dec 16 09:44:01.880460 systemd[1]: sshd@10-5.75.242.71:22-45.148.10.203:45904.service: Deactivated successfully. Dec 16 09:44:27.325295 systemd[1]: Started sshd@11-5.75.242.71:22-147.75.109.163:44224.service - OpenSSH per-connection server daemon (147.75.109.163:44224). Dec 16 09:44:28.367344 sshd[6730]: Accepted publickey for core from 147.75.109.163 port 44224 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk Dec 16 09:44:28.374235 sshd[6730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 09:44:28.395859 systemd-logind[1475]: New session 8 of user core. Dec 16 09:44:28.402219 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 09:44:29.678461 sshd[6730]: pam_unix(sshd:session): session closed for user core Dec 16 09:44:29.686123 systemd[1]: sshd@11-5.75.242.71:22-147.75.109.163:44224.service: Deactivated successfully. Dec 16 09:44:29.690072 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 09:44:29.691052 systemd-logind[1475]: Session 8 logged out. 
Waiting for processes to exit. Dec 16 09:44:29.694719 systemd-logind[1475]: Removed session 8. Dec 16 09:44:34.864165 systemd[1]: Started sshd@12-5.75.242.71:22-147.75.109.163:44240.service - OpenSSH per-connection server daemon (147.75.109.163:44240). Dec 16 09:44:35.892116 sshd[6745]: Accepted publickey for core from 147.75.109.163 port 44240 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk Dec 16 09:44:35.896795 sshd[6745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 09:44:35.906714 systemd-logind[1475]: New session 9 of user core. Dec 16 09:44:35.917168 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 09:44:36.737468 sshd[6745]: pam_unix(sshd:session): session closed for user core Dec 16 09:44:36.744011 systemd[1]: sshd@12-5.75.242.71:22-147.75.109.163:44240.service: Deactivated successfully. Dec 16 09:44:36.749462 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 09:44:36.755743 systemd-logind[1475]: Session 9 logged out. Waiting for processes to exit. Dec 16 09:44:36.757847 systemd-logind[1475]: Removed session 9. Dec 16 09:44:41.918170 systemd[1]: Started sshd@13-5.75.242.71:22-147.75.109.163:39896.service - OpenSSH per-connection server daemon (147.75.109.163:39896). Dec 16 09:44:42.939760 sshd[6781]: Accepted publickey for core from 147.75.109.163 port 39896 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk Dec 16 09:44:42.942573 sshd[6781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 09:44:42.949901 systemd-logind[1475]: New session 10 of user core. Dec 16 09:44:42.957103 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 09:44:43.746107 sshd[6781]: pam_unix(sshd:session): session closed for user core Dec 16 09:44:43.750724 systemd[1]: sshd@13-5.75.242.71:22-147.75.109.163:39896.service: Deactivated successfully. Dec 16 09:44:43.753422 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 09:44:43.755657 systemd-logind[1475]: Session 10 logged out. Waiting for processes to exit. Dec 16 09:44:43.758045 systemd-logind[1475]: Removed session 10. Dec 16 09:44:43.926371 systemd[1]: Started sshd@14-5.75.242.71:22-147.75.109.163:39912.service - OpenSSH per-connection server daemon (147.75.109.163:39912). Dec 16 09:44:44.949441 sshd[6797]: Accepted publickey for core from 147.75.109.163 port 39912 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk Dec 16 09:44:44.952022 sshd[6797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 09:44:44.958176 systemd-logind[1475]: New session 11 of user core. Dec 16 09:44:44.965963 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 09:44:45.771542 sshd[6797]: pam_unix(sshd:session): session closed for user core Dec 16 09:44:45.776097 systemd[1]: sshd@14-5.75.242.71:22-147.75.109.163:39912.service: Deactivated successfully. Dec 16 09:44:45.779584 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 09:44:45.782616 systemd-logind[1475]: Session 11 logged out. Waiting for processes to exit. Dec 16 09:44:45.786118 systemd-logind[1475]: Removed session 11. Dec 16 09:44:45.949136 systemd[1]: Started sshd@15-5.75.242.71:22-147.75.109.163:39926.service - OpenSSH per-connection server daemon (147.75.109.163:39926). 
Dec 16 09:44:46.943067 sshd[6812]: Accepted publickey for core from 147.75.109.163 port 39926 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:44:46.946899 sshd[6812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:44:46.957558 systemd-logind[1475]: New session 12 of user core.
Dec 16 09:44:46.966166 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 16 09:44:47.692668 sshd[6812]: pam_unix(sshd:session): session closed for user core
Dec 16 09:44:47.696853 systemd[1]: sshd@15-5.75.242.71:22-147.75.109.163:39926.service: Deactivated successfully.
Dec 16 09:44:47.698867 systemd[1]: session-12.scope: Deactivated successfully.
Dec 16 09:44:47.699456 systemd-logind[1475]: Session 12 logged out. Waiting for processes to exit.
Dec 16 09:44:47.700377 systemd-logind[1475]: Removed session 12.
Dec 16 09:44:52.877608 systemd[1]: Started sshd@16-5.75.242.71:22-147.75.109.163:44148.service - OpenSSH per-connection server daemon (147.75.109.163:44148).
Dec 16 09:44:53.877305 sshd[6848]: Accepted publickey for core from 147.75.109.163 port 44148 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:44:53.880478 sshd[6848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:44:53.888255 systemd-logind[1475]: New session 13 of user core.
Dec 16 09:44:53.897977 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 16 09:44:54.682133 sshd[6848]: pam_unix(sshd:session): session closed for user core
Dec 16 09:44:54.688550 systemd[1]: sshd@16-5.75.242.71:22-147.75.109.163:44148.service: Deactivated successfully.
Dec 16 09:44:54.693389 systemd[1]: session-13.scope: Deactivated successfully.
Dec 16 09:44:54.697200 systemd-logind[1475]: Session 13 logged out. Waiting for processes to exit.
Dec 16 09:44:54.699648 systemd-logind[1475]: Removed session 13.
Dec 16 09:44:54.865125 systemd[1]: Started sshd@17-5.75.242.71:22-147.75.109.163:44152.service - OpenSSH per-connection server daemon (147.75.109.163:44152).
Dec 16 09:44:55.892935 sshd[6860]: Accepted publickey for core from 147.75.109.163 port 44152 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:44:55.896142 sshd[6860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:44:55.904065 systemd-logind[1475]: New session 14 of user core.
Dec 16 09:44:55.912999 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 16 09:44:56.883829 sshd[6860]: pam_unix(sshd:session): session closed for user core
Dec 16 09:44:56.890440 systemd-logind[1475]: Session 14 logged out. Waiting for processes to exit.
Dec 16 09:44:56.890695 systemd[1]: sshd@17-5.75.242.71:22-147.75.109.163:44152.service: Deactivated successfully.
Dec 16 09:44:56.893000 systemd[1]: session-14.scope: Deactivated successfully.
Dec 16 09:44:56.893976 systemd-logind[1475]: Removed session 14.
Dec 16 09:44:57.057820 systemd[1]: Started sshd@18-5.75.242.71:22-147.75.109.163:47056.service - OpenSSH per-connection server daemon (147.75.109.163:47056).
Dec 16 09:44:58.072589 sshd[6871]: Accepted publickey for core from 147.75.109.163 port 47056 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:44:58.076437 sshd[6871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:44:58.086359 systemd-logind[1475]: New session 15 of user core.
Dec 16 09:44:58.091972 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 16 09:45:00.918374 sshd[6871]: pam_unix(sshd:session): session closed for user core
Dec 16 09:45:00.932344 systemd[1]: sshd@18-5.75.242.71:22-147.75.109.163:47056.service: Deactivated successfully.
Dec 16 09:45:00.938018 systemd[1]: session-15.scope: Deactivated successfully.
Dec 16 09:45:00.940469 systemd-logind[1475]: Session 15 logged out. Waiting for processes to exit.
Dec 16 09:45:00.943654 systemd-logind[1475]: Removed session 15.
Dec 16 09:45:01.093268 systemd[1]: Started sshd@19-5.75.242.71:22-147.75.109.163:47066.service - OpenSSH per-connection server daemon (147.75.109.163:47066).
Dec 16 09:45:02.130849 sshd[6890]: Accepted publickey for core from 147.75.109.163 port 47066 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:45:02.135320 sshd[6890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:45:02.147359 systemd-logind[1475]: New session 16 of user core.
Dec 16 09:45:02.157061 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 16 09:45:03.282132 sshd[6890]: pam_unix(sshd:session): session closed for user core
Dec 16 09:45:03.287417 systemd[1]: sshd@19-5.75.242.71:22-147.75.109.163:47066.service: Deactivated successfully.
Dec 16 09:45:03.290542 systemd[1]: session-16.scope: Deactivated successfully.
Dec 16 09:45:03.294705 systemd-logind[1475]: Session 16 logged out. Waiting for processes to exit.
Dec 16 09:45:03.296595 systemd-logind[1475]: Removed session 16.
Dec 16 09:45:03.464448 systemd[1]: Started sshd@20-5.75.242.71:22-147.75.109.163:47076.service - OpenSSH per-connection server daemon (147.75.109.163:47076).
Dec 16 09:45:04.473835 sshd[6901]: Accepted publickey for core from 147.75.109.163 port 47076 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:45:04.476284 sshd[6901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:45:04.482508 systemd-logind[1475]: New session 17 of user core.
Dec 16 09:45:04.496053 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 16 09:45:05.258448 sshd[6901]: pam_unix(sshd:session): session closed for user core
Dec 16 09:45:05.262624 systemd[1]: sshd@20-5.75.242.71:22-147.75.109.163:47076.service: Deactivated successfully.
Dec 16 09:45:05.265136 systemd[1]: session-17.scope: Deactivated successfully.
Dec 16 09:45:05.267054 systemd-logind[1475]: Session 17 logged out. Waiting for processes to exit.
Dec 16 09:45:05.268386 systemd-logind[1475]: Removed session 17.
Dec 16 09:45:10.445044 systemd[1]: Started sshd@21-5.75.242.71:22-147.75.109.163:56386.service - OpenSSH per-connection server daemon (147.75.109.163:56386).
Dec 16 09:45:11.495450 sshd[6958]: Accepted publickey for core from 147.75.109.163 port 56386 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:45:11.499032 sshd[6958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:45:11.506551 systemd-logind[1475]: New session 18 of user core.
Dec 16 09:45:11.517136 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 16 09:45:12.295905 sshd[6958]: pam_unix(sshd:session): session closed for user core
Dec 16 09:45:12.299791 systemd[1]: sshd@21-5.75.242.71:22-147.75.109.163:56386.service: Deactivated successfully.
Dec 16 09:45:12.302625 systemd[1]: session-18.scope: Deactivated successfully.
Dec 16 09:45:12.304850 systemd-logind[1475]: Session 18 logged out. Waiting for processes to exit.
Dec 16 09:45:12.306034 systemd-logind[1475]: Removed session 18.
Dec 16 09:45:17.472398 systemd[1]: Started sshd@22-5.75.242.71:22-147.75.109.163:50390.service - OpenSSH per-connection server daemon (147.75.109.163:50390).
Dec 16 09:45:18.461405 sshd[6973]: Accepted publickey for core from 147.75.109.163 port 50390 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:45:18.464844 sshd[6973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:45:18.476337 systemd-logind[1475]: New session 19 of user core.
Dec 16 09:45:18.486039 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 16 09:45:19.263790 sshd[6973]: pam_unix(sshd:session): session closed for user core
Dec 16 09:45:19.268561 systemd-logind[1475]: Session 19 logged out. Waiting for processes to exit.
Dec 16 09:45:19.271465 systemd[1]: sshd@22-5.75.242.71:22-147.75.109.163:50390.service: Deactivated successfully.
Dec 16 09:45:19.276129 systemd[1]: session-19.scope: Deactivated successfully.
Dec 16 09:45:19.277227 systemd-logind[1475]: Removed session 19.
Dec 16 09:45:34.676213 systemd[1]: cri-containerd-887d7ac891664e807dec2c7525fa34570be91e5958ddd03a734315bdea099053.scope: Deactivated successfully.
Dec 16 09:45:34.677366 systemd[1]: cri-containerd-887d7ac891664e807dec2c7525fa34570be91e5958ddd03a734315bdea099053.scope: Consumed 6.425s CPU time.
Dec 16 09:45:34.703622 systemd[1]: cri-containerd-f5499dc3a217a3de29dfbf87f1ee00b1b5412c86385543d45adb35c9c8c875ca.scope: Deactivated successfully.
Dec 16 09:45:34.704828 systemd[1]: cri-containerd-f5499dc3a217a3de29dfbf87f1ee00b1b5412c86385543d45adb35c9c8c875ca.scope: Consumed 8.361s CPU time, 18.0M memory peak, 0B memory swap peak.
Dec 16 09:45:34.808667 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-887d7ac891664e807dec2c7525fa34570be91e5958ddd03a734315bdea099053-rootfs.mount: Deactivated successfully.
Dec 16 09:45:34.814083 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5499dc3a217a3de29dfbf87f1ee00b1b5412c86385543d45adb35c9c8c875ca-rootfs.mount: Deactivated successfully.
Dec 16 09:45:34.842501 containerd[1495]: time="2024-12-16T09:45:34.823821034Z" level=info msg="shim disconnected" id=887d7ac891664e807dec2c7525fa34570be91e5958ddd03a734315bdea099053 namespace=k8s.io
Dec 16 09:45:34.843097 containerd[1495]: time="2024-12-16T09:45:34.817003768Z" level=info msg="shim disconnected" id=f5499dc3a217a3de29dfbf87f1ee00b1b5412c86385543d45adb35c9c8c875ca namespace=k8s.io
Dec 16 09:45:34.847957 containerd[1495]: time="2024-12-16T09:45:34.847906799Z" level=warning msg="cleaning up after shim disconnected" id=f5499dc3a217a3de29dfbf87f1ee00b1b5412c86385543d45adb35c9c8c875ca namespace=k8s.io
Dec 16 09:45:34.847957 containerd[1495]: time="2024-12-16T09:45:34.847947237Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 16 09:45:34.849640 containerd[1495]: time="2024-12-16T09:45:34.849567762Z" level=warning msg="cleaning up after shim disconnected" id=887d7ac891664e807dec2c7525fa34570be91e5958ddd03a734315bdea099053 namespace=k8s.io
Dec 16 09:45:34.849640 containerd[1495]: time="2024-12-16T09:45:34.849594213Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 16 09:45:34.959628 kubelet[2715]: E1216 09:45:34.948801 2715 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:52338->10.0.0.2:2379: read: connection timed out"
Dec 16 09:45:35.045682 containerd[1495]: time="2024-12-16T09:45:35.045599670Z" level=warning msg="cleanup warnings time=\"2024-12-16T09:45:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 16 09:45:35.843723 kubelet[2715]: I1216 09:45:35.843280 2715 scope.go:117] "RemoveContainer" containerID="f5499dc3a217a3de29dfbf87f1ee00b1b5412c86385543d45adb35c9c8c875ca"
Dec 16 09:45:35.882283 kubelet[2715]: I1216 09:45:35.881786 2715 scope.go:117] "RemoveContainer" containerID="887d7ac891664e807dec2c7525fa34570be91e5958ddd03a734315bdea099053"
Dec 16 09:45:35.888006 containerd[1495]: time="2024-12-16T09:45:35.887956832Z" level=info msg="CreateContainer within sandbox \"3bcd7816072843d36764698bbd8deb6385e6c8ef420d6bdc67c034a0158de0dc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 16 09:45:35.894792 containerd[1495]: time="2024-12-16T09:45:35.893703104Z" level=info msg="CreateContainer within sandbox \"41742806cb1ef29e93d7c2b533c2ad8ca4030f0505e49358bafe9f371dcd2fc6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Dec 16 09:45:35.956475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3698559244.mount: Deactivated successfully.
Dec 16 09:45:35.963745 containerd[1495]: time="2024-12-16T09:45:35.963036719Z" level=info msg="CreateContainer within sandbox \"41742806cb1ef29e93d7c2b533c2ad8ca4030f0505e49358bafe9f371dcd2fc6\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"748401f6fbc4925d19ac0e90fbfeff50317e321d93b9e5144c48e62b372dfe5e\""
Dec 16 09:45:35.963269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1568772956.mount: Deactivated successfully.
Dec 16 09:45:35.964775 containerd[1495]: time="2024-12-16T09:45:35.964675437Z" level=info msg="CreateContainer within sandbox \"3bcd7816072843d36764698bbd8deb6385e6c8ef420d6bdc67c034a0158de0dc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"c359a71778385060b96d7b5cc1584b849141475fe3f0026154d229d3e890def7\""
Dec 16 09:45:35.965044 containerd[1495]: time="2024-12-16T09:45:35.964996659Z" level=info msg="StartContainer for \"748401f6fbc4925d19ac0e90fbfeff50317e321d93b9e5144c48e62b372dfe5e\""
Dec 16 09:45:35.971086 containerd[1495]: time="2024-12-16T09:45:35.971041591Z" level=info msg="StartContainer for \"c359a71778385060b96d7b5cc1584b849141475fe3f0026154d229d3e890def7\""
Dec 16 09:45:36.030955 systemd[1]: Started cri-containerd-748401f6fbc4925d19ac0e90fbfeff50317e321d93b9e5144c48e62b372dfe5e.scope - libcontainer container 748401f6fbc4925d19ac0e90fbfeff50317e321d93b9e5144c48e62b372dfe5e.
Dec 16 09:45:36.049111 systemd[1]: Started cri-containerd-c359a71778385060b96d7b5cc1584b849141475fe3f0026154d229d3e890def7.scope - libcontainer container c359a71778385060b96d7b5cc1584b849141475fe3f0026154d229d3e890def7.
Dec 16 09:45:36.118573 containerd[1495]: time="2024-12-16T09:45:36.118436604Z" level=info msg="StartContainer for \"748401f6fbc4925d19ac0e90fbfeff50317e321d93b9e5144c48e62b372dfe5e\" returns successfully"
Dec 16 09:45:36.151826 containerd[1495]: time="2024-12-16T09:45:36.151771881Z" level=info msg="StartContainer for \"c359a71778385060b96d7b5cc1584b849141475fe3f0026154d229d3e890def7\" returns successfully"
Dec 16 09:45:37.936284 systemd[1]: cri-containerd-748401f6fbc4925d19ac0e90fbfeff50317e321d93b9e5144c48e62b372dfe5e.scope: Deactivated successfully.
Dec 16 09:45:37.976040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-748401f6fbc4925d19ac0e90fbfeff50317e321d93b9e5144c48e62b372dfe5e-rootfs.mount: Deactivated successfully.
Dec 16 09:45:37.985160 containerd[1495]: time="2024-12-16T09:45:37.985064588Z" level=info msg="shim disconnected" id=748401f6fbc4925d19ac0e90fbfeff50317e321d93b9e5144c48e62b372dfe5e namespace=k8s.io
Dec 16 09:45:37.987567 containerd[1495]: time="2024-12-16T09:45:37.986142688Z" level=warning msg="cleaning up after shim disconnected" id=748401f6fbc4925d19ac0e90fbfeff50317e321d93b9e5144c48e62b372dfe5e namespace=k8s.io
Dec 16 09:45:37.987567 containerd[1495]: time="2024-12-16T09:45:37.987098589Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 16 09:45:38.869162 kubelet[2715]: E1216 09:45:38.831754 2715 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:52166->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-2-1-4-1bd0c0376a.18119f289efc30cb kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-2-1-4-1bd0c0376a,UID:20180db9af20cd9bb9cbea26130f55c7,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-4-1bd0c0376a,},FirstTimestamp:2024-12-16 09:45:28.329187531 +0000 UTC m=+352.330643207,LastTimestamp:2024-12-16 09:45:28.329187531 +0000 UTC m=+352.330643207,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-4-1bd0c0376a,}"
Dec 16 09:45:38.904957 kubelet[2715]: I1216 09:45:38.904853 2715 scope.go:117] "RemoveContainer" containerID="887d7ac891664e807dec2c7525fa34570be91e5958ddd03a734315bdea099053"
Dec 16 09:45:38.906378 kubelet[2715]: I1216 09:45:38.905905 2715 scope.go:117] "RemoveContainer" containerID="748401f6fbc4925d19ac0e90fbfeff50317e321d93b9e5144c48e62b372dfe5e"
Dec 16 09:45:38.907713 kubelet[2715]: E1216 09:45:38.907583 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-76c4976dd7-898kn_tigera-operator(f961a2b5-dee8-4f4e-a623-80df9c16db21)\"" pod="tigera-operator/tigera-operator-76c4976dd7-898kn" podUID="f961a2b5-dee8-4f4e-a623-80df9c16db21"
Dec 16 09:45:38.960628 containerd[1495]: time="2024-12-16T09:45:38.960547784Z" level=info msg="RemoveContainer for \"887d7ac891664e807dec2c7525fa34570be91e5958ddd03a734315bdea099053\""
Dec 16 09:45:38.981603 containerd[1495]: time="2024-12-16T09:45:38.981381488Z" level=info msg="RemoveContainer for \"887d7ac891664e807dec2c7525fa34570be91e5958ddd03a734315bdea099053\" returns successfully"