Dec 13 02:02:42.123808 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 02:02:42.123857 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 02:02:42.123873 kernel: BIOS-provided physical RAM map:
Dec 13 02:02:42.123884 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 02:02:42.123895 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 02:02:42.123906 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 02:02:42.123918 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Dec 13 02:02:42.123929 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Dec 13 02:02:42.123944 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 02:02:42.123955 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 02:02:42.123966 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 02:02:42.123977 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 02:02:42.123988 kernel: NX (Execute Disable) protection: active
Dec 13 02:02:42.123999 kernel: APIC: Static calls initialized
Dec 13 02:02:42.124040 kernel: SMBIOS 2.8 present.
Dec 13 02:02:42.124052 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Dec 13 02:02:42.124064 kernel: Hypervisor detected: KVM
Dec 13 02:02:42.124076 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 02:02:42.124088 kernel: kvm-clock: using sched offset of 3500757594 cycles
Dec 13 02:02:42.124101 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 02:02:42.124113 kernel: tsc: Detected 2495.312 MHz processor
Dec 13 02:02:42.124139 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 02:02:42.124153 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 02:02:42.124243 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Dec 13 02:02:42.124262 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 02:02:42.124278 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 02:02:42.124294 kernel: Using GB pages for direct mapping
Dec 13 02:02:42.124306 kernel: ACPI: Early table checksum verification disabled
Dec 13 02:02:42.124318 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS )
Dec 13 02:02:42.124330 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:02:42.124342 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:02:42.124354 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:02:42.124371 kernel: ACPI: FACS 0x000000007CFE0000 000040
Dec 13 02:02:42.124383 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:02:42.124395 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:02:42.124406 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:02:42.124418 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:02:42.124430 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Dec 13 02:02:42.124442 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Dec 13 02:02:42.124454 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Dec 13 02:02:42.124476 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Dec 13 02:02:42.124488 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Dec 13 02:02:42.124500 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Dec 13 02:02:42.124513 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Dec 13 02:02:42.124525 kernel: No NUMA configuration found
Dec 13 02:02:42.124537 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Dec 13 02:02:42.124553 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Dec 13 02:02:42.124566 kernel: Zone ranges:
Dec 13 02:02:42.124578 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 02:02:42.124591 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Dec 13 02:02:42.124603 kernel: Normal empty
Dec 13 02:02:42.124615 kernel: Movable zone start for each node
Dec 13 02:02:42.124628 kernel: Early memory node ranges
Dec 13 02:02:42.124640 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 02:02:42.124652 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Dec 13 02:02:42.124668 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Dec 13 02:02:42.124681 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 02:02:42.124693 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 02:02:42.124705 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Dec 13 02:02:42.124718 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 02:02:42.124731 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 02:02:42.124764 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 02:02:42.124782 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 02:02:42.124799 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 02:02:42.124815 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 02:02:42.124836 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 02:02:42.124848 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 02:02:42.124861 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 02:02:42.124873 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 02:02:42.124885 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 02:02:42.124899 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 02:02:42.124916 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 02:02:42.124933 kernel: Booting paravirtualized kernel on KVM
Dec 13 02:02:42.124951 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 02:02:42.124969 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 13 02:02:42.124981 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 13 02:02:42.124994 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 13 02:02:42.125006 kernel: pcpu-alloc: [0] 0 1
Dec 13 02:02:42.125046 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 13 02:02:42.125061 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 02:02:42.125075 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 02:02:42.125087 kernel: random: crng init done
Dec 13 02:02:42.125104 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 02:02:42.125117 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 02:02:42.125129 kernel: Fallback order for Node 0: 0
Dec 13 02:02:42.125141 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Dec 13 02:02:42.125153 kernel: Policy zone: DMA32
Dec 13 02:02:42.125165 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 02:02:42.125183 kernel: Memory: 1922056K/2047464K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved)
Dec 13 02:02:42.125200 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 02:02:42.125213 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 02:02:42.125230 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 02:02:42.125242 kernel: Dynamic Preempt: voluntary
Dec 13 02:02:42.125255 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 02:02:42.125273 kernel: rcu: RCU event tracing is enabled.
Dec 13 02:02:42.125291 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 02:02:42.125308 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 02:02:42.125322 kernel: Rude variant of Tasks RCU enabled.
Dec 13 02:02:42.125334 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 02:02:42.125347 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 02:02:42.125364 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 02:02:42.125376 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 02:02:42.125389 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 02:02:42.125401 kernel: Console: colour VGA+ 80x25
Dec 13 02:02:42.125413 kernel: printk: console [tty0] enabled
Dec 13 02:02:42.125425 kernel: printk: console [ttyS0] enabled
Dec 13 02:02:42.125437 kernel: ACPI: Core revision 20230628
Dec 13 02:02:42.125450 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 02:02:42.125463 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 02:02:42.125479 kernel: x2apic enabled
Dec 13 02:02:42.125491 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 02:02:42.125503 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 02:02:42.125516 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 02:02:42.125528 kernel: Calibrating delay loop (skipped) preset value.. 4990.62 BogoMIPS (lpj=2495312)
Dec 13 02:02:42.125541 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 02:02:42.125553 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 02:02:42.125565 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 02:02:42.125578 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 02:02:42.125607 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 02:02:42.125620 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 02:02:42.125633 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 02:02:42.125649 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 02:02:42.125662 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 02:02:42.125676 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 02:02:42.125689 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 02:02:42.125703 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 02:02:42.125717 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 02:02:42.125730 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 02:02:42.125769 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 02:02:42.125787 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 02:02:42.125800 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 02:02:42.125813 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 02:02:42.125827 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 02:02:42.125840 kernel: Freeing SMP alternatives memory: 32K
Dec 13 02:02:42.125863 kernel: pid_max: default: 32768 minimum: 301
Dec 13 02:02:42.125880 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 02:02:42.125903 kernel: landlock: Up and running.
Dec 13 02:02:42.125981 kernel: SELinux: Initializing.
Dec 13 02:02:42.126069 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 02:02:42.126084 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 02:02:42.126101 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 02:02:42.126119 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 02:02:42.126132 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 02:02:42.126151 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 02:02:42.126164 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 02:02:42.126177 kernel: ... version: 0
Dec 13 02:02:42.126190 kernel: ... bit width: 48
Dec 13 02:02:42.126208 kernel: ... generic registers: 6
Dec 13 02:02:42.126225 kernel: ... value mask: 0000ffffffffffff
Dec 13 02:02:42.126238 kernel: ... max period: 00007fffffffffff
Dec 13 02:02:42.126251 kernel: ... fixed-purpose events: 0
Dec 13 02:02:42.126264 kernel: ... event mask: 000000000000003f
Dec 13 02:02:42.126298 kernel: signal: max sigframe size: 1776
Dec 13 02:02:42.126333 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 02:02:42.126347 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 02:02:42.126360 kernel: smp: Bringing up secondary CPUs ...
Dec 13 02:02:42.126373 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 02:02:42.126385 kernel: .... node #0, CPUs: #1
Dec 13 02:02:42.126398 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 02:02:42.126411 kernel: smpboot: Max logical packages: 1
Dec 13 02:02:42.126424 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS)
Dec 13 02:02:42.126441 kernel: devtmpfs: initialized
Dec 13 02:02:42.126454 kernel: x86/mm: Memory block size: 128MB
Dec 13 02:02:42.126467 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 02:02:42.126480 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 02:02:42.126495 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 02:02:42.126513 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 02:02:42.126529 kernel: audit: initializing netlink subsys (disabled)
Dec 13 02:02:42.126542 kernel: audit: type=2000 audit(1734055360.290:1): state=initialized audit_enabled=0 res=1
Dec 13 02:02:42.126555 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 02:02:42.126572 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 02:02:42.126584 kernel: cpuidle: using governor menu
Dec 13 02:02:42.126597 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 02:02:42.126610 kernel: dca service started, version 1.12.1
Dec 13 02:02:42.126623 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 02:02:42.126636 kernel: PCI: Using configuration type 1 for base access
Dec 13 02:02:42.126649 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 02:02:42.126662 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 02:02:42.126676 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 02:02:42.126692 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 02:02:42.126705 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 02:02:42.126718 kernel: ACPI: Added _OSI(Module Device)
Dec 13 02:02:42.126731 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 02:02:42.126743 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 02:02:42.126778 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 02:02:42.126796 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 02:02:42.126809 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 02:02:42.126821 kernel: ACPI: Interpreter enabled
Dec 13 02:02:42.126839 kernel: ACPI: PM: (supports S0 S5)
Dec 13 02:02:42.126852 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 02:02:42.126870 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 02:02:42.126888 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 02:02:42.126905 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 02:02:42.126918 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 02:02:42.128193 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 02:02:42.128424 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 02:02:42.128642 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 02:02:42.128661 kernel: PCI host bridge to bus 0000:00
Dec 13 02:02:42.128889 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 02:02:42.129124 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 02:02:42.129315 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 02:02:42.129501 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Dec 13 02:02:42.129687 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 02:02:42.129897 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Dec 13 02:02:42.130127 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 02:02:42.130357 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 02:02:42.130627 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Dec 13 02:02:42.130863 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Dec 13 02:02:42.131102 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Dec 13 02:02:42.131321 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Dec 13 02:02:42.131526 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Dec 13 02:02:42.131731 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 02:02:42.132777 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 02:02:42.133103 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Dec 13 02:02:42.133335 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 02:02:42.133540 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Dec 13 02:02:42.133785 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 02:02:42.133991 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Dec 13 02:02:42.134288 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 02:02:42.134513 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Dec 13 02:02:42.134776 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 02:02:42.135083 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Dec 13 02:02:42.135309 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 02:02:42.135516 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Dec 13 02:02:42.135731 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 02:02:42.135985 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Dec 13 02:02:42.137637 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 02:02:42.139282 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Dec 13 02:02:42.139529 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Dec 13 02:02:42.139776 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Dec 13 02:02:42.140002 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 02:02:42.141248 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 02:02:42.141397 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 02:02:42.141537 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Dec 13 02:02:42.141662 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Dec 13 02:02:42.141814 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 02:02:42.141934 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 02:02:42.144105 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Dec 13 02:02:42.144267 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Dec 13 02:02:42.144430 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 13 02:02:42.144581 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Dec 13 02:02:42.144712 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 13 02:02:42.144852 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 02:02:42.144972 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 13 02:02:42.145135 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 02:02:42.145284 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Dec 13 02:02:42.145416 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 13 02:02:42.145551 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 02:02:42.145672 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 02:02:42.145828 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Dec 13 02:02:42.145956 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Dec 13 02:02:42.147504 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Dec 13 02:02:42.147641 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 13 02:02:42.147780 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 02:02:42.147917 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 02:02:42.149187 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Dec 13 02:02:42.149332 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 13 02:02:42.149455 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 13 02:02:42.149577 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 02:02:42.149696 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 02:02:42.149850 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 02:02:42.151110 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Dec 13 02:02:42.151249 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 13 02:02:42.151371 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 02:02:42.151492 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 02:02:42.151628 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Dec 13 02:02:42.151776 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Dec 13 02:02:42.151905 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Dec 13 02:02:42.153059 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 13 02:02:42.153193 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 02:02:42.153311 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 02:02:42.153322 kernel: acpiphp: Slot [0] registered
Dec 13 02:02:42.153455 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Dec 13 02:02:42.153580 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Dec 13 02:02:42.153726 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Dec 13 02:02:42.153884 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Dec 13 02:02:42.155082 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 13 02:02:42.155229 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 02:02:42.155350 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 02:02:42.155465 kernel: acpiphp: Slot [0-2] registered
Dec 13 02:02:42.155585 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 13 02:02:42.155719 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Dec 13 02:02:42.155860 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 02:02:42.155871 kernel: acpiphp: Slot [0-3] registered
Dec 13 02:02:42.155998 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 13 02:02:42.156523 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 02:02:42.156643 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 02:02:42.156654 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 02:02:42.156662 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 02:02:42.156671 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 02:02:42.156678 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 02:02:42.156686 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 02:02:42.156694 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 02:02:42.156706 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 02:02:42.156715 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 02:02:42.156723 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 02:02:42.156730 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 02:02:42.156738 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 02:02:42.156759 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 02:02:42.156767 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 02:02:42.156777 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 02:02:42.156788 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 02:02:42.156802 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 02:02:42.156813 kernel: iommu: Default domain type: Translated
Dec 13 02:02:42.156823 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 02:02:42.156834 kernel: PCI: Using ACPI for IRQ routing
Dec 13 02:02:42.156844 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 02:02:42.156855 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 02:02:42.156864 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Dec 13 02:02:42.158062 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 02:02:42.158220 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 02:02:42.158370 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 02:02:42.158385 kernel: vgaarb: loaded
Dec 13 02:02:42.158393 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 02:02:42.158401 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 02:02:42.158409 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 02:02:42.158418 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 02:02:42.158428 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 02:02:42.158439 kernel: pnp: PnP ACPI init
Dec 13 02:02:42.158586 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 02:02:42.158605 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 02:02:42.158616 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 02:02:42.158627 kernel: NET: Registered PF_INET protocol family
Dec 13 02:02:42.158638 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 02:02:42.158648 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 02:02:42.158656 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 02:02:42.158663 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 02:02:42.158671 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 02:02:42.158682 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 02:02:42.158689 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 02:02:42.158697 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 02:02:42.158706 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 02:02:42.158717 kernel: NET: Registered PF_XDP protocol family
Dec 13 02:02:42.158911 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 02:02:42.160138 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 02:02:42.160296 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 02:02:42.160433 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 02:02:42.160573 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 02:02:42.160692 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 02:02:42.160836 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 13 02:02:42.160956 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 02:02:42.161114 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 13 02:02:42.161243 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 13 02:02:42.161383 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 02:02:42.161516 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 02:02:42.161641 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 13 02:02:42.161776 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 02:02:42.161898 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 02:02:42.164111 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 13 02:02:42.164383 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 02:02:42.164543 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 02:02:42.164694 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 13 02:02:42.164846 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 02:02:42.164965 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 02:02:42.167119 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 13 02:02:42.167246 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 02:02:42.167366 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 02:02:42.167483 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 13 02:02:42.167599 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Dec 13 02:02:42.167717 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 02:02:42.167881 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 02:02:42.168006 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 13 02:02:42.168215 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Dec 13 02:02:42.168336 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Dec 13 02:02:42.168469 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 02:02:42.168599 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 13 02:02:42.168728 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Dec 13 02:02:42.168876 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 02:02:42.171035 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 02:02:42.171176 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 02:02:42.171287 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 02:02:42.171399 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 02:02:42.171506 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Dec 13 02:02:42.171615 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 02:02:42.171723 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Dec 13 02:02:42.171898 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 13 02:02:42.172052 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 13 02:02:42.172178 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 13 02:02:42.172302 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 02:02:42.172426 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 13 02:02:42.172540 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 02:02:42.172670 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 13 02:02:42.172806 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 02:02:42.172930 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 13 02:02:42.174109 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 02:02:42.174270 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Dec 13 02:02:42.174397 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 02:02:42.174524 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Dec 13 02:02:42.174639 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Dec 13 02:02:42.174771 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 02:02:42.174924 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Dec 13 02:02:42.175086 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Dec 13 02:02:42.175211 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 02:02:42.175365 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Dec 13 02:02:42.175484 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 13 02:02:42.175605 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 02:02:42.175617 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 02:02:42.175630 kernel: PCI: CLS 0 bytes, default 64
Dec 13 02:02:42.175641 kernel: Initialise system trusted keyrings
Dec 13 02:02:42.175652 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 02:02:42.175661 kernel: Key type asymmetric registered
Dec 13 02:02:42.175669 kernel: Asymmetric key parser 'x509' registered
Dec 13 02:02:42.175678 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 02:02:42.175687 kernel: io scheduler mq-deadline registered
Dec 13 02:02:42.175695 kernel: io scheduler kyber registered
Dec 13 02:02:42.175704 kernel: io scheduler bfq registered
Dec 13 02:02:42.175864 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Dec 13 02:02:42.176024 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Dec 13 02:02:42.176168 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Dec 13 02:02:42.176290 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Dec 13 02:02:42.176413 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Dec 13 02:02:42.176555 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Dec 13 02:02:42.176685 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Dec 13 02:02:42.176828 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Dec 13 02:02:42.176950 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Dec 13 02:02:42.177137 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Dec 13 02:02:42.177283 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Dec 13 02:02:42.177429 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Dec 13 02:02:42.177552 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Dec 13 02:02:42.177683 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Dec 13 02:02:42.177856 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Dec 13 02:02:42.177978 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Dec 13 02:02:42.177990 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 02:02:42.178133 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Dec 13 02:02:42.178253 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Dec 13 02:02:42.178267 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 02:02:42.178280 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Dec 13 02:02:42.178291 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 02:02:42.178303 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 02:02:42.178315 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 02:02:42.178326 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 02:02:42.178334 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 02:02:42.178478 kernel: rtc_cmos 00:03: RTC can wake from S4
Dec 13 02:02:42.178492 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 02:02:42.178603 kernel: rtc_cmos 00:03: registered as rtc0
Dec 13 02:02:42.178715 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T02:02:41 UTC (1734055361)
Dec 13 02:02:42.178853 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 13 02:02:42.178869 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 02:02:42.178881 kernel: NET: Registered PF_INET6 protocol family
Dec 13 02:02:42.178893 kernel: Segment Routing with IPv6
Dec 13 02:02:42.178906 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 02:02:42.178916 kernel: NET: Registered PF_PACKET protocol family
Dec 13 02:02:42.178927 kernel: Key type dns_resolver registered
Dec 13 02:02:42.178936 kernel: IPI shorthand broadcast: enabled
Dec 13 02:02:42.178944 kernel: sched_clock: Marking stable (1468014822, 154140976)->(1713075789, -90919991)
Dec 13 02:02:42.178952 kernel: registered taskstats version 1
Dec 13 02:02:42.178960 kernel: Loading compiled-in X.509 certificates
Dec 13 02:02:42.178968 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 02:02:42.178976 kernel: Key type .fscrypt registered
Dec 13 02:02:42.178990 kernel: Key type fscrypt-provisioning registered
Dec 13 02:02:42.179001 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 02:02:42.179060 kernel: ima: Allocated hash algorithm: sha1
Dec 13 02:02:42.179069 kernel: ima: No architecture policies found
Dec 13 02:02:42.179077 kernel: clk: Disabling unused clocks
Dec 13 02:02:42.179085 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 02:02:42.179093 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 02:02:42.179102 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 02:02:42.179113 kernel: Run /init as init process
Dec 13 02:02:42.179121 kernel: with arguments:
Dec 13 02:02:42.179131 kernel: /init
Dec 13 02:02:42.179139 kernel: with environment:
Dec 13 02:02:42.179147 kernel: HOME=/
Dec 13 02:02:42.179155 kernel: TERM=linux
Dec 13 02:02:42.179163 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 02:02:42.179173 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 02:02:42.179184 systemd[1]: Detected virtualization kvm.
Dec 13 02:02:42.179195 systemd[1]: Detected architecture x86-64.
Dec 13 02:02:42.179203 systemd[1]: Running in initrd.
Dec 13 02:02:42.179211 systemd[1]: No hostname configured, using default hostname.
Dec 13 02:02:42.179219 systemd[1]: Hostname set to .
Dec 13 02:02:42.179228 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 02:02:42.179236 systemd[1]: Queued start job for default target initrd.target.
Dec 13 02:02:42.179246 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 02:02:42.179258 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 02:02:42.179274 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 02:02:42.179285 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 02:02:42.179296 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 02:02:42.179305 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 02:02:42.179315 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 02:02:42.179324 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 02:02:42.179335 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 02:02:42.179344 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 02:02:42.179353 systemd[1]: Reached target paths.target - Path Units.
Dec 13 02:02:42.179361 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 02:02:42.179371 systemd[1]: Reached target swap.target - Swaps.
Dec 13 02:02:42.179382 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 02:02:42.179394 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 02:02:42.179406 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 02:02:42.179419 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 02:02:42.179434 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 02:02:42.179443 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 02:02:42.179454 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 02:02:42.179464 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 02:02:42.179476 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 02:02:42.179486 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 02:02:42.179495 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 02:02:42.179503 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 02:02:42.179514 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 02:02:42.179525 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 02:02:42.179537 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 02:02:42.179546 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 02:02:42.179583 systemd-journald[187]: Collecting audit messages is disabled.
Dec 13 02:02:42.179611 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 02:02:42.179622 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 02:02:42.179633 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 02:02:42.179644 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 02:02:42.179657 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 02:02:42.179667 kernel: Bridge firewalling registered
Dec 13 02:02:42.179678 systemd-journald[187]: Journal started
Dec 13 02:02:42.179702 systemd-journald[187]: Runtime Journal (/run/log/journal/9795b07b69ef4725a0a039834a3e2638) is 4.8M, max 38.4M, 33.6M free.
Dec 13 02:02:42.126345 systemd-modules-load[188]: Inserted module 'overlay'
Dec 13 02:02:42.207825 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 02:02:42.170756 systemd-modules-load[188]: Inserted module 'br_netfilter'
Dec 13 02:02:42.207685 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 02:02:42.208442 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 02:02:42.209354 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 02:02:42.216181 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 02:02:42.218148 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 02:02:42.227228 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 02:02:42.234233 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 02:02:42.241623 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 02:02:42.243062 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 02:02:42.249466 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 02:02:42.252180 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 02:02:42.263388 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 02:02:42.268156 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 02:02:42.273189 dracut-cmdline[220]: dracut-dracut-053
Dec 13 02:02:42.276800 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 02:02:42.309767 systemd-resolved[223]: Positive Trust Anchors:
Dec 13 02:02:42.309785 systemd-resolved[223]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 02:02:42.309816 systemd-resolved[223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 02:02:42.317091 systemd-resolved[223]: Defaulting to hostname 'linux'.
Dec 13 02:02:42.318122 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 02:02:42.318874 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 02:02:42.351070 kernel: SCSI subsystem initialized
Dec 13 02:02:42.361071 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 02:02:42.373055 kernel: iscsi: registered transport (tcp)
Dec 13 02:02:42.395131 kernel: iscsi: registered transport (qla4xxx)
Dec 13 02:02:42.395226 kernel: QLogic iSCSI HBA Driver
Dec 13 02:02:42.451303 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 02:02:42.459335 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 02:02:42.488270 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 02:02:42.488367 kernel: device-mapper: uevent: version 1.0.3
Dec 13 02:02:42.488393 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 02:02:42.545092 kernel: raid6: avx2x4 gen() 26358 MB/s
Dec 13 02:02:42.562087 kernel: raid6: avx2x2 gen() 25382 MB/s
Dec 13 02:02:42.580265 kernel: raid6: avx2x1 gen() 21407 MB/s
Dec 13 02:02:42.580313 kernel: raid6: using algorithm avx2x4 gen() 26358 MB/s
Dec 13 02:02:42.599244 kernel: raid6: .... xor() 6331 MB/s, rmw enabled
Dec 13 02:02:42.599310 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 02:02:42.623064 kernel: xor: automatically using best checksumming function avx
Dec 13 02:02:42.795075 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 02:02:42.815149 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 02:02:42.822317 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 02:02:42.835797 systemd-udevd[405]: Using default interface naming scheme 'v255'.
Dec 13 02:02:42.840607 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 02:02:42.852241 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 02:02:42.872445 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation
Dec 13 02:02:42.914071 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 02:02:42.921215 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 02:02:43.032242 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 02:02:43.043377 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 02:02:43.068477 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 02:02:43.077406 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 02:02:43.079153 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 02:02:43.081316 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 02:02:43.090194 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 02:02:43.119326 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 02:02:43.145501 kernel: scsi host0: Virtio SCSI HBA
Dec 13 02:02:43.156043 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Dec 13 02:02:43.169062 kernel: ACPI: bus type USB registered
Dec 13 02:02:43.177025 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 02:02:43.180043 kernel: usbcore: registered new interface driver usbfs
Dec 13 02:02:43.188082 kernel: usbcore: registered new interface driver hub
Dec 13 02:02:43.213036 kernel: usbcore: registered new device driver usb
Dec 13 02:02:43.217335 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 02:02:43.218205 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 02:02:43.219762 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 02:02:43.221162 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 02:02:43.221837 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 02:02:43.223000 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 02:02:43.230372 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 02:02:43.269227 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 02:02:43.269304 kernel: AES CTR mode by8 optimization enabled
Dec 13 02:02:43.273043 kernel: libata version 3.00 loaded.
Dec 13 02:02:43.300047 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Dec 13 02:02:43.306070 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Dec 13 02:02:43.306251 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Dec 13 02:02:43.306425 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 02:02:43.322113 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 02:02:43.322142 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 02:02:43.322320 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 02:02:43.322465 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Dec 13 02:02:43.322623 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Dec 13 02:02:43.322800 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Dec 13 02:02:43.322954 kernel: hub 1-0:1.0: USB hub found
Dec 13 02:02:43.323186 kernel: hub 1-0:1.0: 4 ports detected
Dec 13 02:02:43.323390 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Dec 13 02:02:43.323642 kernel: hub 2-0:1.0: USB hub found
Dec 13 02:02:43.323871 kernel: hub 2-0:1.0: 4 ports detected
Dec 13 02:02:43.327250 kernel: scsi host1: ahci
Dec 13 02:02:43.327473 kernel: scsi host2: ahci
Dec 13 02:02:43.327625 kernel: scsi host3: ahci
Dec 13 02:02:43.327791 kernel: scsi host4: ahci
Dec 13 02:02:43.327949 kernel: scsi host5: ahci
Dec 13 02:02:43.330519 kernel: scsi host6: ahci
Dec 13 02:02:43.330700 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 49
Dec 13 02:02:43.330716 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 49
Dec 13 02:02:43.330728 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 49
Dec 13 02:02:43.330741 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 49
Dec 13 02:02:43.330768 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 49
Dec 13 02:02:43.330782 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 49
Dec 13 02:02:43.361672 kernel: sd 0:0:0:0: Power-on or device reset occurred
Dec 13 02:02:43.382742 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Dec 13 02:02:43.382973 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 13 02:02:43.383162 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Dec 13 02:02:43.383333 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 13 02:02:43.383509 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 02:02:43.383524 kernel: GPT:17805311 != 80003071
Dec 13 02:02:43.383535 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 02:02:43.383550 kernel: GPT:17805311 != 80003071
Dec 13 02:02:43.383560 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 02:02:43.383570 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 02:02:43.383581 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 13 02:02:43.363190 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 02:02:43.374864 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 02:02:43.400494 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 02:02:43.548074 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Dec 13 02:02:43.634054 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Dec 13 02:02:43.634161 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 13 02:02:43.641934 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 02:02:43.641985 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 02:02:43.642051 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 13 02:02:43.645299 kernel: ata1.00: applying bridge limits
Dec 13 02:02:43.657921 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 02:02:43.658037 kernel: ata1.00: configured for UDMA/100
Dec 13 02:02:43.661691 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 02:02:43.669081 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 13 02:02:43.711058 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 02:02:43.728086 kernel: usbcore: registered new interface driver usbhid
Dec 13 02:02:43.728163 kernel: usbhid: USB HID core driver
Dec 13 02:02:43.738544 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Dec 13 02:02:43.738602 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Dec 13 02:02:43.750467 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 02:02:43.760800 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 02:02:43.760822 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Dec 13 02:02:43.789986 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (451)
Dec 13 02:02:43.790066 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (457)
Dec 13 02:02:43.797206 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Dec 13 02:02:43.812244 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Dec 13 02:02:43.827624 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Dec 13 02:02:43.828423 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Dec 13 02:02:43.835404 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 13 02:02:43.843216 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 02:02:43.850387 disk-uuid[574]: Primary Header is updated.
Dec 13 02:02:43.850387 disk-uuid[574]: Secondary Entries is updated.
Dec 13 02:02:43.850387 disk-uuid[574]: Secondary Header is updated.
Dec 13 02:02:43.863050 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 02:02:43.873088 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 02:02:43.888055 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 02:02:44.887528 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 02:02:44.888981 disk-uuid[575]: The operation has completed successfully.
Dec 13 02:02:44.998586 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 02:02:44.998809 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 02:02:45.023213 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 02:02:45.044675 sh[594]: Success
Dec 13 02:02:45.074275 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Dec 13 02:02:45.167473 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 02:02:45.181197 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 02:02:45.184241 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 02:02:45.220172 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 02:02:45.220231 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:02:45.223224 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 02:02:45.226263 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 02:02:45.228572 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 02:02:45.242068 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 13 02:02:45.245379 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 02:02:45.247498 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 02:02:45.254374 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 02:02:45.260330 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 02:02:45.283122 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 02:02:45.283192 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:02:45.287192 kernel: BTRFS info (device sda6): using free space tree
Dec 13 02:02:45.296624 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 02:02:45.296700 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 02:02:45.312787 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 02:02:45.317551 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 02:02:45.323146 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 02:02:45.332282 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 02:02:45.408629 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 02:02:45.419321 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 02:02:45.438232 ignition[688]: Ignition 2.19.0
Dec 13 02:02:45.438247 ignition[688]: Stage: fetch-offline
Dec 13 02:02:45.441192 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
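verity-setup maps the read-only /usr partition through dm-verity, so every block is checked against a hash tree before use; the "device-mapper: verity: sha256" line is the kernel choosing its SHA-256 implementation for that tree. A rough manual equivalent (device paths and the root hash are placeholders, not the unit's exact invocation):

    import subprocess

    data_dev = "/dev/disk/by-partlabel/USR-A"   # assumed data device
    hash_dev = data_dev                          # assumed: hash tree stored on the same partition
    root_hash = "<value of verity.usrhash= on the kernel command line>"

    # veritysetup (from cryptsetup) creates /dev/mapper/usr; reads through it
    # fail if any block disagrees with the hash tree rooted at root_hash.
    subprocess.run(["veritysetup", "open", data_dev, "usr", hash_dev, root_hash],
                   check=True)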
Dec 13 02:02:45.438292 ignition[688]: no configs at "/usr/lib/ignition/base.d"
Dec 13 02:02:45.438302 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 02:02:45.438428 ignition[688]: parsed url from cmdline: ""
Dec 13 02:02:45.438432 ignition[688]: no config URL provided
Dec 13 02:02:45.438437 ignition[688]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 02:02:45.438447 ignition[688]: no config at "/usr/lib/ignition/user.ign"
Dec 13 02:02:45.438452 ignition[688]: failed to fetch config: resource requires networking
Dec 13 02:02:45.438611 ignition[688]: Ignition finished successfully
Dec 13 02:02:45.453556 systemd-networkd[775]: lo: Link UP
Dec 13 02:02:45.453567 systemd-networkd[775]: lo: Gained carrier
Dec 13 02:02:45.456542 systemd-networkd[775]: Enumeration completed
Dec 13 02:02:45.456947 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 02:02:45.457315 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 02:02:45.457320 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 02:02:45.458938 systemd[1]: Reached target network.target - Network.
Dec 13 02:02:45.459504 systemd-networkd[775]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 02:02:45.459508 systemd-networkd[775]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 02:02:45.462031 systemd-networkd[775]: eth0: Link UP
Dec 13 02:02:45.462036 systemd-networkd[775]: eth0: Gained carrier
Dec 13 02:02:45.462045 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 02:02:45.463833 systemd-networkd[775]: eth1: Link UP
Dec 13 02:02:45.463837 systemd-networkd[775]: eth1: Gained carrier
Dec 13 02:02:45.463844 systemd-networkd[775]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 02:02:45.466217 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
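The "found matching network ... zz-default.network" lines show networkd falling back to Flatcar's catch-all configuration for both NICs. A sketch of what such a catch-all plausibly contains (assumed contents; the shipped file may carry more options):

    from pathlib import Path

    CATCH_ALL = """\
    [Match]
    Name=*

    [Network]
    DHCP=yes
    """

    # networkd applies the lexically first matching .network file per link,
    # and a file in /etc shadows a /usr/lib file of the same name, so writing
    # this path overrides the shipped default.
    Path("/etc/systemd/network/zz-default.network").write_text(CATCH_ALL)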
Dec 13 02:02:45.481294 ignition[782]: Ignition 2.19.0
Dec 13 02:02:45.481305 ignition[782]: Stage: fetch
Dec 13 02:02:45.481500 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Dec 13 02:02:45.481513 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 02:02:45.481624 ignition[782]: parsed url from cmdline: ""
Dec 13 02:02:45.481628 ignition[782]: no config URL provided
Dec 13 02:02:45.481633 ignition[782]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 02:02:45.481644 ignition[782]: no config at "/usr/lib/ignition/user.ign"
Dec 13 02:02:45.481662 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Dec 13 02:02:45.481836 ignition[782]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 13 02:02:45.524131 systemd-networkd[775]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 02:02:45.535218 systemd-networkd[775]: eth0: DHCPv4 address 49.13.63.199/32, gateway 172.31.1.1 acquired from 172.31.1.1
Dec 13 02:02:45.682715 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Dec 13 02:02:45.693661 ignition[782]: GET result: OK
Dec 13 02:02:45.693827 ignition[782]: parsing config with SHA512: 0f7b3b3b7670c865b03909a23b42ac2b69acf0122462d17b38db0a94421472a1d78fb24c6f4cb1d9783cb20fcbe1dfff17b1437621940c308f8d794e1e11177d
Dec 13 02:02:45.704409 unknown[782]: fetched base config from "system"
Dec 13 02:02:45.704441 unknown[782]: fetched base config from "system"
Dec 13 02:02:45.707632 ignition[782]: fetch: fetch complete
Dec 13 02:02:45.704456 unknown[782]: fetched user config from "hetzner"
Dec 13 02:02:45.708308 ignition[782]: fetch: fetch passed
Dec 13 02:02:45.708407 ignition[782]: Ignition finished successfully
Dec 13 02:02:45.715390 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 02:02:45.728417 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 02:02:45.746934 ignition[790]: Ignition 2.19.0
Dec 13 02:02:45.746946 ignition[790]: Stage: kargs
Dec 13 02:02:45.747191 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Dec 13 02:02:45.747202 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 02:02:45.748518 ignition[790]: kargs: kargs passed
Dec 13 02:02:45.750255 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 02:02:45.748575 ignition[790]: Ignition finished successfully
Dec 13 02:02:45.762180 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 02:02:45.779522 ignition[796]: Ignition 2.19.0
Dec 13 02:02:45.779543 ignition[796]: Stage: disks
Dec 13 02:02:45.779815 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Dec 13 02:02:45.779828 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 02:02:45.780968 ignition[796]: disks: disks passed
Dec 13 02:02:45.783879 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 02:02:45.781043 ignition[796]: Ignition finished successfully
Dec 13 02:02:45.786028 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 02:02:45.787658 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 02:02:45.788699 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 02:02:45.790344 systemd[1]: Reached target sysinit.target - System Initialization.
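Ignition's fetch stage above shows the standard link-local-metadata dance: query the metadata service, fail while the network is still coming up, and retry once DHCP lands. A minimal sketch of that pattern with the endpoint from the log (the retry policy and helper names are assumptions, not Ignition's code):

    import hashlib, time, urllib.request

    USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"

    def fetch_userdata(attempts=5, delay=0.2):
        for _ in range(attempts):
            try:
                with urllib.request.urlopen(USERDATA_URL, timeout=10) as resp:
                    return resp.read()
            except OSError:          # e.g. "network is unreachable" before DHCP
                time.sleep(delay)
                delay *= 2           # back off; above, attempt #2 succeeded
        raise RuntimeError("no userdata from metadata service")

    config = fetch_userdata()
    # Ignition logs a SHA512 fingerprint of the raw config before parsing:
    print("parsing config with SHA512:", hashlib.sha512(config).hexdigest())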
Dec 13 02:02:45.791786 systemd[1]: Reached target basic.target - Basic System.
Dec 13 02:02:45.798255 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 02:02:45.825912 systemd-fsck[805]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 02:02:45.829648 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 02:02:45.835142 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 02:02:45.971048 kernel: EXT4-fs (sda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 02:02:45.972381 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 02:02:45.973156 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 02:02:45.980149 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 02:02:45.988196 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 02:02:45.995217 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 13 02:02:45.998210 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 02:02:45.998265 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 02:02:46.005147 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 02:02:46.010290 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 02:02:46.015813 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (813)
Dec 13 02:02:46.022035 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 02:02:46.027742 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:02:46.027819 kernel: BTRFS info (device sda6): using free space tree
Dec 13 02:02:46.034376 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 02:02:46.034405 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 02:02:46.039410 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 02:02:46.098441 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 02:02:46.104934 coreos-metadata[815]: Dec 13 02:02:46.104 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Dec 13 02:02:46.106514 coreos-metadata[815]: Dec 13 02:02:46.106 INFO Fetch successful
Dec 13 02:02:46.107986 coreos-metadata[815]: Dec 13 02:02:46.106 INFO wrote hostname ci-4081-2-1-3-45a43b40ef to /sysroot/etc/hostname
Dec 13 02:02:46.109678 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory
Dec 13 02:02:46.111359 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 02:02:46.113746 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 02:02:46.118520 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 02:02:46.209485 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 02:02:46.215115 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 02:02:46.218928 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 02:02:46.227046 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 02:02:46.227938 systemd[1]: sysroot-oem.mount: Deactivated successfully.
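flatcar-metadata-hostname does a similar, smaller fetch: one metadata key, written into the still-staged root under /sysroot. The shape of it (endpoint and target path from the log; the agent's real implementation differs):

    import urllib.request

    URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"

    with urllib.request.urlopen(URL, timeout=10) as resp:
        hostname = resp.read().decode().strip()

    # The real root is still mounted at /sysroot inside the initrd.
    with open("/sysroot/etc/hostname", "w") as f:
        f.write(hostname + "\n")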
Dec 13 02:02:46.251654 ignition[932]: INFO : Ignition 2.19.0
Dec 13 02:02:46.252726 ignition[932]: INFO : Stage: mount
Dec 13 02:02:46.254809 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 02:02:46.254809 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 02:02:46.256461 ignition[932]: INFO : mount: mount passed
Dec 13 02:02:46.256970 ignition[932]: INFO : Ignition finished successfully
Dec 13 02:02:46.257582 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 02:02:46.258333 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 02:02:46.263152 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 02:02:46.280154 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 02:02:46.296044 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (943)
Dec 13 02:02:46.296107 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 02:02:46.301333 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:02:46.304446 kernel: BTRFS info (device sda6): using free space tree
Dec 13 02:02:46.314703 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 02:02:46.314742 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 02:02:46.320280 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 02:02:46.360987 ignition[960]: INFO : Ignition 2.19.0
Dec 13 02:02:46.360987 ignition[960]: INFO : Stage: files
Dec 13 02:02:46.363876 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 02:02:46.363876 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 02:02:46.363876 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 02:02:46.368348 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 02:02:46.368348 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 02:02:46.371602 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 02:02:46.371602 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 02:02:46.371602 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 02:02:46.369864 unknown[960]: wrote ssh authorized keys file for user: core
Dec 13 02:02:46.377707 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 02:02:46.377707 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 02:02:46.489255 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 02:02:46.683755 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 02:02:46.686441 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 02:02:46.686441 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 02:02:46.686441 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 02:02:46.686441 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 02:02:46.686441 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 02:02:46.686441 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 02:02:46.686441 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 02:02:46.686441 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 02:02:46.686441 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 02:02:46.686441 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 02:02:46.686441 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 02:02:46.686441 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 02:02:46.686441 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 02:02:46.686441 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 02:02:46.854362 systemd-networkd[775]: eth1: Gained IPv6LL
Dec 13 02:02:47.228973 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 02:02:47.303311 systemd-networkd[775]: eth0: Gained IPv6LL
Dec 13 02:02:47.615575 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 02:02:47.615575 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 13 02:02:47.619714 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 02:02:47.619714 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 02:02:47.619714 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 13 02:02:47.619714 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Dec 13 02:02:47.619714 ignition[960]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 13 02:02:47.619714 ignition[960]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 13 02:02:47.619714 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 13 02:02:47.619714 ignition[960]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 02:02:47.619714 ignition[960]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 02:02:47.619714 ignition[960]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 02:02:47.619714 ignition[960]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 02:02:47.619714 ignition[960]: INFO : files: files passed
Dec 13 02:02:47.619714 ignition[960]: INFO : Ignition finished successfully
Dec 13 02:02:47.621457 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 02:02:47.632271 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 02:02:47.647290 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 02:02:47.653710 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 02:02:47.653851 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 02:02:47.664103 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 02:02:47.665162 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 02:02:47.665162 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 02:02:47.667494 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 02:02:47.668572 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 02:02:47.674219 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 02:02:47.704963 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 02:02:47.705166 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 02:02:47.707437 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 02:02:47.708235 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 02:02:47.709741 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 02:02:47.715190 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 02:02:47.742747 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 02:02:47.748339 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 02:02:47.777789 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 02:02:47.778962 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 02:02:47.780886 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 02:02:47.782601 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 02:02:47.782802 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 02:02:47.784650 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 02:02:47.785691 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 02:02:47.787413 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 02:02:47.788972 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 02:02:47.790510 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 02:02:47.792438 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 02:02:47.794071 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 02:02:47.795857 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 02:02:47.797588 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 02:02:47.799287 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 02:02:47.800855 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 02:02:47.801093 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 02:02:47.802897 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 02:02:47.804145 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 02:02:47.805566 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 02:02:47.805739 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 02:02:47.807463 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 02:02:47.807617 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 02:02:47.809888 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 02:02:47.810083 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 02:02:47.811270 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 02:02:47.811582 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 02:02:47.812896 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 02:02:47.813076 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 02:02:47.821451 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 02:02:47.826338 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 02:02:47.829230 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 02:02:47.829572 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 02:02:47.833193 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 02:02:47.833417 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 02:02:47.840100 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 02:02:47.840216 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 02:02:47.852040 ignition[1012]: INFO : Ignition 2.19.0
Dec 13 02:02:47.852040 ignition[1012]: INFO : Stage: umount
Dec 13 02:02:47.852040 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 02:02:47.852040 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 02:02:47.863860 ignition[1012]: INFO : umount: umount passed
Dec 13 02:02:47.863860 ignition[1012]: INFO : Ignition finished successfully
Dec 13 02:02:47.858197 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 02:02:47.858313 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 02:02:47.859527 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 02:02:47.859959 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 02:02:47.860684 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 02:02:47.860734 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 02:02:47.861249 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 02:02:47.861295 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 02:02:47.861808 systemd[1]: Stopped target network.target - Network.
Dec 13 02:02:47.864405 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 02:02:47.864459 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 02:02:47.865256 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 02:02:47.865676 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 02:02:47.871316 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 02:02:47.872149 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 02:02:47.873250 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 02:02:47.874449 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 02:02:47.874498 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 02:02:47.875421 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 02:02:47.875472 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 02:02:47.876674 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 02:02:47.876729 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 02:02:47.877924 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 02:02:47.877973 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 02:02:47.879145 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 02:02:47.880398 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 02:02:47.882533 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 02:02:47.883333 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 02:02:47.883441 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 02:02:47.884121 systemd-networkd[775]: eth0: DHCPv6 lease lost
Dec 13 02:02:47.885198 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 02:02:47.885287 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 02:02:47.887140 systemd-networkd[775]: eth1: DHCPv6 lease lost
Dec 13 02:02:47.889473 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 02:02:47.889622 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 02:02:47.890555 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 02:02:47.890676 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 02:02:47.893855 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 02:02:47.893910 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 02:02:47.900145 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 02:02:47.900692 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 02:02:47.900757 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 02:02:47.901376 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 02:02:47.901426 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 02:02:47.903415 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 02:02:47.903468 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 02:02:47.904054 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 02:02:47.904105 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 02:02:47.905409 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 02:02:47.923541 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 02:02:47.924196 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 02:02:47.925617 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 02:02:47.925758 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 02:02:47.927452 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 02:02:47.927518 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 02:02:47.928714 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 02:02:47.928755 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 02:02:47.929908 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 02:02:47.929970 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 02:02:47.931678 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 02:02:47.931728 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 02:02:47.932845 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 02:02:47.932923 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 02:02:47.942205 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 02:02:47.944312 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 02:02:47.944383 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 02:02:47.944920 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 02:02:47.944965 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 02:02:47.945585 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 02:02:47.945648 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 02:02:47.949253 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 02:02:47.949311 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 02:02:47.951153 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 02:02:47.951423 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 02:02:47.952879 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 02:02:47.958204 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 02:02:47.967487 systemd[1]: Switching root.
Dec 13 02:02:48.005260 systemd-journald[187]: Journal stopped
Dec 13 02:02:49.414669 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Dec 13 02:02:49.414749 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 02:02:49.414795 kernel: SELinux: policy capability open_perms=1
Dec 13 02:02:49.414806 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 02:02:49.414822 kernel: SELinux: policy capability always_check_network=0
Dec 13 02:02:49.414833 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 02:02:49.414849 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 02:02:49.414866 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 02:02:49.414884 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 02:02:49.414895 kernel: audit: type=1403 audit(1734055368.301:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 02:02:49.414910 systemd[1]: Successfully loaded SELinux policy in 77.570ms.
Dec 13 02:02:49.414925 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.397ms.
Dec 13 02:02:49.414937 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 02:02:49.414949 systemd[1]: Detected virtualization kvm.
Dec 13 02:02:49.414962 systemd[1]: Detected architecture x86-64.
Dec 13 02:02:49.414974 systemd[1]: Detected first boot.
Dec 13 02:02:49.414986 systemd[1]: Hostname set to <ci-4081-2-1-3-45a43b40ef>.
Dec 13 02:02:49.415000 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 02:02:49.415038 zram_generator::config[1054]: No configuration found.
Dec 13 02:02:49.415057 systemd[1]: Populated /etc with preset unit settings.
Dec 13 02:02:49.415071 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 02:02:49.415083 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 02:02:49.415095 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 02:02:49.415108 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 02:02:49.415120 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 02:02:49.415132 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 02:02:49.415144 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 02:02:49.415158 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 02:02:49.415170 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 02:02:49.415182 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 02:02:49.415195 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 02:02:49.415208 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 02:02:49.415221 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 02:02:49.415236 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 02:02:49.415250 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 02:02:49.415264 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 02:02:49.415284 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 02:02:49.415301 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 02:02:49.415316 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 02:02:49.415328 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 02:02:49.415340 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 02:02:49.415352 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 02:02:49.415367 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 02:02:49.415379 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 02:02:49.415391 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 02:02:49.415403 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 02:02:49.415414 systemd[1]: Reached target swap.target - Swaps.
Dec 13 02:02:49.415426 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 02:02:49.415438 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 02:02:49.415450 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 02:02:49.415462 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 02:02:49.415476 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 02:02:49.415489 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 02:02:49.415503 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 02:02:49.415517 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 02:02:49.415534 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 02:02:49.415548 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:02:49.415563 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 02:02:49.415574 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 02:02:49.415586 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 02:02:49.415598 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 02:02:49.415610 systemd[1]: Reached target machines.target - Containers.
Dec 13 02:02:49.415622 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 02:02:49.415634 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 02:02:49.415645 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 02:02:49.415660 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 02:02:49.415672 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 02:02:49.415683 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 02:02:49.415700 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 02:02:49.415712 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 02:02:49.415724 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 02:02:49.415736 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 02:02:49.415748 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 02:02:49.415762 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 02:02:49.415791 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 02:02:49.415804 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 02:02:49.415816 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 02:02:49.415827 kernel: fuse: init (API version 7.39)
Dec 13 02:02:49.415860 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 02:02:49.415873 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 02:02:49.415885 kernel: ACPI: bus type drm_connector registered
Dec 13 02:02:49.415998 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 02:02:49.416039 kernel: loop: module loaded
Dec 13 02:02:49.416051 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 02:02:49.416063 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 02:02:49.416075 systemd[1]: Stopped verity-setup.service.
Dec 13 02:02:49.416087 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:02:49.416098 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 02:02:49.416110 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 02:02:49.416122 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 02:02:49.416134 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 02:02:49.416148 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 02:02:49.416160 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 02:02:49.416190 systemd-journald[1131]: Collecting audit messages is disabled.
Dec 13 02:02:49.416239 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 02:02:49.416281 systemd-journald[1131]: Journal started
Dec 13 02:02:49.416317 systemd-journald[1131]: Runtime Journal (/run/log/journal/9795b07b69ef4725a0a039834a3e2638) is 4.8M, max 38.4M, 33.6M free.
Dec 13 02:02:49.058398 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 02:02:49.086042 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 13 02:02:49.086740 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 02:02:49.420065 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 02:02:49.419875 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 02:02:49.420562 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 02:02:49.421444 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:02:49.421693 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 02:02:49.422605 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 02:02:49.422860 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 02:02:49.423715 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:02:49.423980 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 02:02:49.424960 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 02:02:49.425416 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 02:02:49.426249 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:02:49.426482 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 02:02:49.428046 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 02:02:49.429307 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 02:02:49.430195 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 02:02:49.431080 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 02:02:49.449591 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 02:02:49.457646 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 02:02:49.467119 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 02:02:49.468136 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 02:02:49.468225 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 02:02:49.469738 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 02:02:49.475118 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 02:02:49.478803 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 02:02:49.481182 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 02:02:49.489789 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 02:02:49.494000 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 02:02:49.495087 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 02:02:49.503238 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 02:02:49.503799 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 02:02:49.509675 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 02:02:49.514915 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 02:02:49.522253 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 02:02:49.525284 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 02:02:49.526872 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 02:02:49.528518 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 02:02:49.538678 systemd-journald[1131]: Time spent on flushing to /var/log/journal/9795b07b69ef4725a0a039834a3e2638 is 81.822ms for 1136 entries.
Dec 13 02:02:49.538678 systemd-journald[1131]: System Journal (/var/log/journal/9795b07b69ef4725a0a039834a3e2638) is 8.0M, max 584.8M, 576.8M free.
Dec 13 02:02:49.663502 systemd-journald[1131]: Received client request to flush runtime journal.
Dec 13 02:02:49.663543 kernel: loop0: detected capacity change from 0 to 8
Dec 13 02:02:49.663559 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 02:02:49.663575 kernel: loop1: detected capacity change from 0 to 140768
Dec 13 02:02:49.560601 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 02:02:49.566883 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 02:02:49.575188 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 02:02:49.590569 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 02:02:49.605342 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 02:02:49.652294 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 02:02:49.658539 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 02:02:49.666928 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 02:02:49.673368 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Dec 13 02:02:49.673939 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Dec 13 02:02:49.683939 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 02:02:49.696199 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 02:02:49.697693 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 02:02:49.699347 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 02:02:49.725042 kernel: loop2: detected capacity change from 0 to 142488
Dec 13 02:02:49.746122 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 02:02:49.756967 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 02:02:49.781868 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Dec 13 02:02:49.786033 kernel: loop3: detected capacity change from 0 to 211296
Dec 13 02:02:49.784487 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Dec 13 02:02:49.794179 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 02:02:49.849047 kernel: loop4: detected capacity change from 0 to 8
Dec 13 02:02:49.852145 kernel: loop5: detected capacity change from 0 to 140768
Dec 13 02:02:49.882314 kernel: loop6: detected capacity change from 0 to 142488
Dec 13 02:02:49.906049 kernel: loop7: detected capacity change from 0 to 211296
Dec 13 02:02:49.933803 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Dec 13 02:02:49.937385 (sd-merge)[1203]: Merged extensions into '/usr'.
Dec 13 02:02:49.941871 systemd[1]: Reloading requested from client PID 1174 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 02:02:49.942167 systemd[1]: Reloading...
Dec 13 02:02:50.051154 zram_generator::config[1234]: No configuration found.
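The "(sd-merge)" lines are systemd-sysext assembling those four extension images into an overlay on /usr; the loopN capacity changes just above are the images being attached. The manual interface to the same machinery (these systemd-sysext verbs are real; running them is only meaningful on a host with extension images in place):

    import subprocess

    # List known extension images and whether they are merged:
    subprocess.run(["systemd-sysext", "status"], check=True)
    # Mount the combined overlay over /usr/ and /opt/:
    subprocess.run(["systemd-sysext", "merge"], check=True)
    # Re-evaluate after adding or removing an image under /etc/extensions
    # or /var/lib/extensions:
    subprocess.run(["systemd-sysext", "refresh"], check=True)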
Dec 13 02:02:50.199672 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:02:50.213119 ldconfig[1169]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 02:02:50.257440 systemd[1]: Reloading finished in 314 ms.
Dec 13 02:02:50.310234 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 02:02:50.311561 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 02:02:50.322234 systemd[1]: Starting ensure-sysext.service...
Dec 13 02:02:50.327969 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 02:02:50.337611 systemd[1]: Reloading requested from client PID 1272 ('systemctl') (unit ensure-sysext.service)...
Dec 13 02:02:50.337625 systemd[1]: Reloading...
Dec 13 02:02:50.358849 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 02:02:50.359251 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 02:02:50.360707 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 02:02:50.361135 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
Dec 13 02:02:50.361278 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
Dec 13 02:02:50.366246 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 02:02:50.366322 systemd-tmpfiles[1273]: Skipping /boot
Dec 13 02:02:50.377420 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 02:02:50.377499 systemd-tmpfiles[1273]: Skipping /boot
Dec 13 02:02:50.446178 zram_generator::config[1309]: No configuration found.
Dec 13 02:02:50.553425 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:02:50.613727 systemd[1]: Reloading finished in 275 ms.
Dec 13 02:02:50.630244 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 02:02:50.631361 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 02:02:50.649160 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 02:02:50.653140 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 02:02:50.657874 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 02:02:50.666386 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 02:02:50.679428 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 02:02:50.687344 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 02:02:50.707083 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 02:02:50.713703 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
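The docker.socket warning is advisory; systemd already rewrites the path at load time. A sketch of the kind of local drop-in that would silence it at the source (an assumed override, not a Flatcar-shipped fix):

    from pathlib import Path

    d = Path("/etc/systemd/system/docker.socket.d")
    d.mkdir(parents=True, exist_ok=True)
    (d / "10-runtime-dir.conf").write_text(
        "[Socket]\n"
        "ListenStream=\n"                  # empty assignment clears the inherited list
        "ListenStream=/run/docker.sock\n"  # re-add the socket under /run
    )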
Dec 13 02:02:50.714120 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 02:02:50.726393 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 02:02:50.732383 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 02:02:50.742367 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 02:02:50.743442 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 02:02:50.743608 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:02:50.753474 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:02:50.753965 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 02:02:50.754555 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 02:02:50.755342 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:02:50.764410 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 02:02:50.768932 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:02:50.769611 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 02:02:50.773302 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 02:02:50.773998 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 02:02:50.774130 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:02:50.775106 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 02:02:50.784263 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 02:02:50.786869 systemd-udevd[1350]: Using default interface naming scheme 'v255'.
Dec 13 02:02:50.787562 systemd[1]: Finished ensure-sysext.service.
Dec 13 02:02:50.807287 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 02:02:50.809948 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:02:50.810275 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 02:02:50.812617 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:02:50.819995 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 02:02:50.820879 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 02:02:50.822598 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:02:50.823154 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 02:02:50.824436 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 02:02:50.824647 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 02:02:50.832758 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:02:50.834124 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 02:02:50.860535 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 02:02:50.867898 augenrules[1385]: No rules Dec 13 02:02:50.868641 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 02:02:50.872413 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 02:02:50.873082 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 02:02:50.893211 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 02:02:50.896092 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:02:51.005996 systemd-resolved[1349]: Positive Trust Anchors: Dec 13 02:02:51.008050 systemd-resolved[1349]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:02:51.008142 systemd-resolved[1349]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 02:02:51.008420 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 02:02:51.009241 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 02:02:51.010571 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 02:02:51.015252 systemd-networkd[1398]: lo: Link UP Dec 13 02:02:51.015263 systemd-networkd[1398]: lo: Gained carrier Dec 13 02:02:51.016337 systemd-networkd[1398]: Enumeration completed Dec 13 02:02:51.016443 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 02:02:51.018794 systemd-resolved[1349]: Using system hostname 'ci-4081-2-1-3-45a43b40ef'. Dec 13 02:02:51.019439 systemd-timesyncd[1372]: No network connectivity, watching for changes. Dec 13 02:02:51.022200 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 02:02:51.022908 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 02:02:51.023898 systemd[1]: Reached target network.target - Network. Dec 13 02:02:51.024399 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 02:02:51.043105 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1399) Dec 13 02:02:51.059999 systemd-networkd[1398]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 02:02:51.060025 systemd-networkd[1398]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 02:02:51.063889 systemd-networkd[1398]: eth1: Link UP Dec 13 02:02:51.063903 systemd-networkd[1398]: eth1: Gained carrier Dec 13 02:02:51.063917 systemd-networkd[1398]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 02:02:51.071926 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 02:02:51.071939 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:02:51.074041 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1399) Dec 13 02:02:51.076135 systemd-networkd[1398]: eth0: Link UP Dec 13 02:02:51.076147 systemd-networkd[1398]: eth0: Gained carrier Dec 13 02:02:51.076159 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 02:02:51.126118 systemd-networkd[1398]: eth0: DHCPv4 address 49.13.63.199/32, gateway 172.31.1.1 acquired from 172.31.1.1 Dec 13 02:02:51.128029 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 02:02:51.134140 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 02:02:51.134183 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1394) Dec 13 02:02:51.132438 systemd-networkd[1398]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 02:02:51.133007 systemd-timesyncd[1372]: Network configuration changed, trying to establish connection. Dec 13 02:02:51.178057 kernel: ACPI: button: Power Button [PWRF] Dec 13 02:02:51.183447 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Dec 13 02:02:51.183910 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:02:51.184110 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 02:02:51.192956 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 02:02:51.200600 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 02:02:51.206425 systemd-timesyncd[1372]: Contacted time server 129.70.132.35:123 (0.flatcar.pool.ntp.org). Dec 13 02:02:51.206586 systemd-timesyncd[1372]: Initial clock synchronization to Fri 2024-12-13 02:02:51.438330 UTC. Dec 13 02:02:51.210034 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 02:02:51.215164 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 02:02:51.215838 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 02:02:51.215871 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:02:51.215885 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:02:51.224353 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Dec 13 02:02:51.224542 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 02:02:51.227812 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:02:51.228582 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 02:02:51.229964 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:02:51.231204 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 02:02:51.231903 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:02:51.231942 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 02:02:51.243335 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 02:02:51.245832 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 02:02:51.246141 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 02:02:51.265898 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 13 02:02:51.270120 kernel: EDAC MC: Ver: 3.0.0 Dec 13 02:02:51.273199 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 02:02:51.290222 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 02:02:51.296302 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Dec 13 02:02:51.296346 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Dec 13 02:02:51.301065 kernel: Console: switching to colour dummy device 80x25 Dec 13 02:02:51.301093 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 13 02:02:51.301105 kernel: [drm] features: -context_init Dec 13 02:02:51.303164 kernel: [drm] number of scanouts: 1 Dec 13 02:02:51.303196 kernel: [drm] number of cap sets: 0 Dec 13 02:02:51.307152 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Dec 13 02:02:51.316038 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Dec 13 02:02:51.316096 kernel: Console: switching to colour frame buffer device 160x50 Dec 13 02:02:51.319397 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 02:02:51.319651 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 02:02:51.328039 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 13 02:02:51.332255 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 02:02:51.340322 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 02:02:51.421415 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 02:02:51.458635 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 02:02:51.466350 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 02:02:51.499884 lvm[1453]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:02:51.552038 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 02:02:51.553571 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 02:02:51.553739 systemd[1]: Reached target sysinit.target - System Initialization. 
Dec 13 02:02:51.554090 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 02:02:51.554279 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 02:02:51.554739 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 02:02:51.555970 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 02:02:51.556528 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 02:02:51.556737 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 02:02:51.556842 systemd[1]: Reached target paths.target - Path Units. Dec 13 02:02:51.557006 systemd[1]: Reached target timers.target - Timer Units. Dec 13 02:02:51.560733 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 02:02:51.565073 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 02:02:51.574380 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 02:02:51.578753 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 02:02:51.579638 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 02:02:51.579866 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 02:02:51.579956 systemd[1]: Reached target basic.target - Basic System. Dec 13 02:02:51.585304 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 02:02:51.585344 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 02:02:51.591236 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 02:02:51.602714 lvm[1457]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:02:51.607310 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 02:02:51.623330 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 02:02:51.638188 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 02:02:51.652188 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 02:02:51.653032 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 02:02:51.662249 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 02:02:51.668288 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 02:02:51.678227 coreos-metadata[1459]: Dec 13 02:02:51.678 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Dec 13 02:02:51.684127 coreos-metadata[1459]: Dec 13 02:02:51.681 INFO Fetch successful Dec 13 02:02:51.684127 coreos-metadata[1459]: Dec 13 02:02:51.683 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Dec 13 02:02:51.684054 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Dec 13 02:02:51.684278 jq[1461]: false Dec 13 02:02:51.685870 coreos-metadata[1459]: Dec 13 02:02:51.685 INFO Fetch successful Dec 13 02:02:51.695229 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Dec 13 02:02:51.703150 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 02:02:51.706828 extend-filesystems[1464]: Found loop4 Dec 13 02:02:51.706828 extend-filesystems[1464]: Found loop5 Dec 13 02:02:51.706828 extend-filesystems[1464]: Found loop6 Dec 13 02:02:51.706828 extend-filesystems[1464]: Found loop7 Dec 13 02:02:51.706828 extend-filesystems[1464]: Found sda Dec 13 02:02:51.706828 extend-filesystems[1464]: Found sda1 Dec 13 02:02:51.706828 extend-filesystems[1464]: Found sda2 Dec 13 02:02:51.706828 extend-filesystems[1464]: Found sda3 Dec 13 02:02:51.706828 extend-filesystems[1464]: Found usr Dec 13 02:02:51.706828 extend-filesystems[1464]: Found sda4 Dec 13 02:02:51.706828 extend-filesystems[1464]: Found sda6 Dec 13 02:02:51.706828 extend-filesystems[1464]: Found sda7 Dec 13 02:02:51.706828 extend-filesystems[1464]: Found sda9 Dec 13 02:02:51.706828 extend-filesystems[1464]: Checking size of /dev/sda9 Dec 13 02:02:51.808798 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Dec 13 02:02:51.750548 dbus-daemon[1460]: [system] SELinux support is enabled Dec 13 02:02:51.809189 extend-filesystems[1464]: Resized partition /dev/sda9 Dec 13 02:02:51.714985 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 02:02:51.813765 extend-filesystems[1482]: resize2fs 1.47.1 (20-May-2024) Dec 13 02:02:51.738888 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 02:02:51.742726 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 02:02:51.756402 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 02:02:51.769123 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 02:02:51.786100 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 02:02:51.821078 jq[1487]: true Dec 13 02:02:51.795454 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 02:02:51.821534 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 02:02:51.821838 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 02:02:51.822332 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 02:02:51.822538 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 02:02:51.833509 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 02:02:51.833772 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 02:02:51.861075 update_engine[1480]: I20241213 02:02:51.858695 1480 main.cc:92] Flatcar Update Engine starting Dec 13 02:02:51.873671 update_engine[1480]: I20241213 02:02:51.873608 1480 update_check_scheduler.cc:74] Next update check in 11m15s Dec 13 02:02:51.887694 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 02:02:51.887728 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Dec 13 02:02:51.889799 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 02:02:51.889832 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 02:02:51.892986 systemd[1]: Started update-engine.service - Update Engine. Dec 13 02:02:51.905429 jq[1494]: true Dec 13 02:02:51.907324 (ntainerd)[1497]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 02:02:51.908278 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 02:02:51.935071 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1407) Dec 13 02:02:51.952834 tar[1493]: linux-amd64/helm Dec 13 02:02:51.960342 systemd-logind[1472]: New seat seat0. Dec 13 02:02:51.964425 systemd-logind[1472]: Watching system buttons on /dev/input/event2 (Power Button) Dec 13 02:02:51.964447 systemd-logind[1472]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 02:02:51.964674 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 02:02:51.987546 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Dec 13 02:02:52.026862 extend-filesystems[1482]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 02:02:52.026862 extend-filesystems[1482]: old_desc_blocks = 1, new_desc_blocks = 5 Dec 13 02:02:52.026862 extend-filesystems[1482]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Dec 13 02:02:52.038586 extend-filesystems[1464]: Resized filesystem in /dev/sda9 Dec 13 02:02:52.038586 extend-filesystems[1464]: Found sr0 Dec 13 02:02:52.030013 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 02:02:52.031999 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 02:02:52.067058 bash[1529]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:02:52.074507 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 02:02:52.081476 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 02:02:52.087694 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 02:02:52.102355 systemd[1]: Starting sshkeys.service... Dec 13 02:02:52.128515 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 02:02:52.142329 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 02:02:52.224433 coreos-metadata[1535]: Dec 13 02:02:52.220 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Dec 13 02:02:52.231057 sshd_keygen[1486]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 02:02:52.231293 coreos-metadata[1535]: Dec 13 02:02:52.230 INFO Fetch successful Dec 13 02:02:52.233960 unknown[1535]: wrote ssh authorized keys file for user: core Dec 13 02:02:52.285002 locksmithd[1507]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 02:02:52.287397 update-ssh-keys[1547]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:02:52.288542 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 02:02:52.300463 systemd[1]: Finished sshkeys.service. 
Dec 13 02:02:52.310114 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 02:02:52.330315 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 02:02:52.357198 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 02:02:52.357454 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 02:02:52.368885 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 02:02:52.382738 containerd[1497]: time="2024-12-13T02:02:52.382670695Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 02:02:52.394265 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 02:02:52.405549 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 02:02:52.417319 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 02:02:52.420701 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 02:02:52.434993 containerd[1497]: time="2024-12-13T02:02:52.434952648Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:02:52.437185 containerd[1497]: time="2024-12-13T02:02:52.437158890Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:02:52.437260 containerd[1497]: time="2024-12-13T02:02:52.437246665Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 02:02:52.437309 containerd[1497]: time="2024-12-13T02:02:52.437298573Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 02:02:52.437555 containerd[1497]: time="2024-12-13T02:02:52.437538701Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 02:02:52.437629 containerd[1497]: time="2024-12-13T02:02:52.437616084Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 02:02:52.437761 containerd[1497]: time="2024-12-13T02:02:52.437744221Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:02:52.437810 containerd[1497]: time="2024-12-13T02:02:52.437798780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:02:52.438062 containerd[1497]: time="2024-12-13T02:02:52.438025866Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:02:52.438114 containerd[1497]: time="2024-12-13T02:02:52.438102250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 02:02:52.438188 containerd[1497]: time="2024-12-13T02:02:52.438174406Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:02:52.438231 containerd[1497]: time="2024-12-13T02:02:52.438220438Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 02:02:52.438360 containerd[1497]: time="2024-12-13T02:02:52.438346203Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:02:52.438643 containerd[1497]: time="2024-12-13T02:02:52.438625394Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:02:52.438866 containerd[1497]: time="2024-12-13T02:02:52.438849275Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:02:52.438914 containerd[1497]: time="2024-12-13T02:02:52.438902833Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 02:02:52.439079 containerd[1497]: time="2024-12-13T02:02:52.439035815Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 02:02:52.439197 containerd[1497]: time="2024-12-13T02:02:52.439182664Z" level=info msg="metadata content store policy set" policy=shared Dec 13 02:02:52.445966 containerd[1497]: time="2024-12-13T02:02:52.445948103Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 02:02:52.446092 containerd[1497]: time="2024-12-13T02:02:52.446079096Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 02:02:52.446162 containerd[1497]: time="2024-12-13T02:02:52.446149334Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 02:02:52.446212 containerd[1497]: time="2024-12-13T02:02:52.446200902Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 02:02:52.446286 containerd[1497]: time="2024-12-13T02:02:52.446274223Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 02:02:52.446474 containerd[1497]: time="2024-12-13T02:02:52.446457732Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 02:02:52.446793 containerd[1497]: time="2024-12-13T02:02:52.446779028Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 02:02:52.446969 containerd[1497]: time="2024-12-13T02:02:52.446953527Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 02:02:52.447055 containerd[1497]: time="2024-12-13T02:02:52.447037517Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 02:02:52.447106 containerd[1497]: time="2024-12-13T02:02:52.447094746Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 02:02:52.447155 containerd[1497]: time="2024-12-13T02:02:52.447143210Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Dec 13 02:02:52.447202 containerd[1497]: time="2024-12-13T02:02:52.447191500Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 02:02:52.447249 containerd[1497]: time="2024-12-13T02:02:52.447237892Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 02:02:52.447295 containerd[1497]: time="2024-12-13T02:02:52.447284904Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 02:02:52.447356 containerd[1497]: time="2024-12-13T02:02:52.447344761Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 02:02:52.447429 containerd[1497]: time="2024-12-13T02:02:52.447416577Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 02:02:52.447479 containerd[1497]: time="2024-12-13T02:02:52.447468093Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 02:02:52.447534 containerd[1497]: time="2024-12-13T02:02:52.447514559Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 02:02:52.447591 containerd[1497]: time="2024-12-13T02:02:52.447579766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 02:02:52.447645 containerd[1497]: time="2024-12-13T02:02:52.447633417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 02:02:52.447701 containerd[1497]: time="2024-12-13T02:02:52.447689139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 02:02:52.447756 containerd[1497]: time="2024-12-13T02:02:52.447744460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 02:02:52.447810 containerd[1497]: time="2024-12-13T02:02:52.447793070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 02:02:52.447863 containerd[1497]: time="2024-12-13T02:02:52.447852329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 02:02:52.447922 containerd[1497]: time="2024-12-13T02:02:52.447910391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 02:02:52.447971 containerd[1497]: time="2024-12-13T02:02:52.447959743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 02:02:52.448018 containerd[1497]: time="2024-12-13T02:02:52.448007702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 02:02:52.448243 containerd[1497]: time="2024-12-13T02:02:52.448075765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 02:02:52.448243 containerd[1497]: time="2024-12-13T02:02:52.448092395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 02:02:52.448243 containerd[1497]: time="2024-12-13T02:02:52.448106056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Dec 13 02:02:52.448243 containerd[1497]: time="2024-12-13T02:02:52.448122612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 02:02:52.448243 containerd[1497]: time="2024-12-13T02:02:52.448140603Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 02:02:52.448243 containerd[1497]: time="2024-12-13T02:02:52.448167437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 02:02:52.448243 containerd[1497]: time="2024-12-13T02:02:52.448180460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 02:02:52.448243 containerd[1497]: time="2024-12-13T02:02:52.448192593Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 02:02:52.448550 containerd[1497]: time="2024-12-13T02:02:52.448453228Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 02:02:52.449040 containerd[1497]: time="2024-12-13T02:02:52.448477589Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 02:02:52.449040 containerd[1497]: time="2024-12-13T02:02:52.448603488Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 02:02:52.449040 containerd[1497]: time="2024-12-13T02:02:52.448619849Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 02:02:52.449040 containerd[1497]: time="2024-12-13T02:02:52.448630736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 02:02:52.449040 containerd[1497]: time="2024-12-13T02:02:52.448644272Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 02:02:52.449040 containerd[1497]: time="2024-12-13T02:02:52.448655241Z" level=info msg="NRI interface is disabled by configuration." Dec 13 02:02:52.449040 containerd[1497]: time="2024-12-13T02:02:52.448666325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 02:02:52.449216 containerd[1497]: time="2024-12-13T02:02:52.448964732Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 02:02:52.449216 containerd[1497]: time="2024-12-13T02:02:52.449015631Z" level=info msg="Connect containerd service" Dec 13 02:02:52.449414 containerd[1497]: time="2024-12-13T02:02:52.449398299Z" level=info msg="using legacy CRI server" Dec 13 02:02:52.449472 containerd[1497]: time="2024-12-13T02:02:52.449446176Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 02:02:52.449605 containerd[1497]: time="2024-12-13T02:02:52.449591694Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 02:02:52.450476 containerd[1497]: time="2024-12-13T02:02:52.450283369Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:02:52.450593 
containerd[1497]: time="2024-12-13T02:02:52.450563313Z" level=info msg="Start subscribing containerd event" Dec 13 02:02:52.450650 containerd[1497]: time="2024-12-13T02:02:52.450639571Z" level=info msg="Start recovering state" Dec 13 02:02:52.450739 containerd[1497]: time="2024-12-13T02:02:52.450727779Z" level=info msg="Start event monitor" Dec 13 02:02:52.451117 containerd[1497]: time="2024-12-13T02:02:52.451104035Z" level=info msg="Start snapshots syncer" Dec 13 02:02:52.451486 containerd[1497]: time="2024-12-13T02:02:52.451166954Z" level=info msg="Start cni network conf syncer for default" Dec 13 02:02:52.451486 containerd[1497]: time="2024-12-13T02:02:52.451180109Z" level=info msg="Start streaming server" Dec 13 02:02:52.451603 containerd[1497]: time="2024-12-13T02:02:52.451588221Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 02:02:52.453148 containerd[1497]: time="2024-12-13T02:02:52.453132768Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 02:02:52.453308 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 02:02:52.459238 containerd[1497]: time="2024-12-13T02:02:52.458632264Z" level=info msg="containerd successfully booted in 0.077030s" Dec 13 02:02:52.486212 systemd-networkd[1398]: eth1: Gained IPv6LL Dec 13 02:02:52.489655 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 02:02:52.494393 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 02:02:52.505703 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:02:52.514135 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 02:02:52.558890 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 02:02:52.614342 systemd-networkd[1398]: eth0: Gained IPv6LL Dec 13 02:02:52.635152 tar[1493]: linux-amd64/LICENSE Dec 13 02:02:52.635268 tar[1493]: linux-amd64/README.md Dec 13 02:02:52.649808 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 02:02:53.785850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:02:53.797914 (kubelet)[1590]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:02:53.798471 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 02:02:53.804403 systemd[1]: Startup finished in 1.693s (kernel) + 6.455s (initrd) + 5.579s (userspace) = 13.728s. Dec 13 02:02:54.853018 kubelet[1590]: E1213 02:02:54.852890 1590 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:02:54.861255 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:02:54.861841 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:02:54.862778 systemd[1]: kubelet.service: Consumed 1.580s CPU time. Dec 13 02:03:05.111852 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 02:03:05.118547 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:03:05.309343 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 02:03:05.313116 (kubelet)[1609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:03:05.405013 kubelet[1609]: E1213 02:03:05.404808 1609 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:03:05.421750 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:03:05.422058 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:03:15.465825 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 02:03:15.478357 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:03:15.704580 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:03:15.704693 (kubelet)[1626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:03:15.801064 kubelet[1626]: E1213 02:03:15.800792 1626 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:03:15.807383 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:03:15.807734 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:03:25.964273 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 02:03:25.971688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:03:26.200576 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:03:26.205090 (kubelet)[1642]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:03:26.255197 kubelet[1642]: E1213 02:03:26.255052 1642 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:03:26.260702 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:03:26.260980 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:03:36.464410 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 02:03:36.471264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:03:36.707268 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 02:03:36.720388 (kubelet)[1659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:03:36.778285 kubelet[1659]: E1213 02:03:36.778190 1659 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:03:36.785322 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:03:36.785725 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:03:37.086723 update_engine[1480]: I20241213 02:03:37.086452 1480 update_attempter.cc:509] Updating boot flags... Dec 13 02:03:37.192082 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1676) Dec 13 02:03:37.244172 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1676) Dec 13 02:03:37.294212 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1676) Dec 13 02:03:46.964475 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 13 02:03:46.971299 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:03:47.200882 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:03:47.204932 (kubelet)[1696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:03:47.247871 kubelet[1696]: E1213 02:03:47.247692 1696 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:03:47.256889 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:03:47.257112 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:03:57.464881 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Dec 13 02:03:57.473967 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:03:57.710314 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:03:57.711696 (kubelet)[1713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:03:57.764513 kubelet[1713]: E1213 02:03:57.764266 1713 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:03:57.772531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:03:57.772750 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:04:07.965182 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Dec 13 02:04:07.972459 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 13 02:04:08.235331 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:04:08.246528 (kubelet)[1730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:04:08.327675 kubelet[1730]: E1213 02:04:08.327539 1730 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:04:08.336903 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:04:08.337285 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:04:18.464537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Dec 13 02:04:18.471301 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:04:18.650224 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:04:18.655380 (kubelet)[1746]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:04:18.704149 kubelet[1746]: E1213 02:04:18.704066 1746 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:04:18.709087 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:04:18.709332 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:04:28.714665 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Dec 13 02:04:28.727403 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:04:28.935330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:04:28.940430 (kubelet)[1761]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:04:28.988632 kubelet[1761]: E1213 02:04:28.988510 1761 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:04:28.993739 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:04:28.993978 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:04:39.215514 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Dec 13 02:04:39.223473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:04:39.467129 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 02:04:39.480360 (kubelet)[1777]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:04:39.560671 kubelet[1777]: E1213 02:04:39.560549 1777 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:04:39.568997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:04:39.569511 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:04:48.143444 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 02:04:48.151167 systemd[1]: Started sshd@0-49.13.63.199:22-147.75.109.163:39362.service - OpenSSH per-connection server daemon (147.75.109.163:39362). Dec 13 02:04:49.160821 sshd[1786]: Accepted publickey for core from 147.75.109.163 port 39362 ssh2: RSA SHA256:sCKVP2ZoT/a84yhHrxpuO7m4jAwnggg/oTrfebs5XY0 Dec 13 02:04:49.166373 sshd[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:04:49.184652 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 02:04:49.192530 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 02:04:49.197077 systemd-logind[1472]: New session 1 of user core. Dec 13 02:04:49.236214 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 02:04:49.248632 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 02:04:49.267610 (systemd)[1790]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:04:49.449459 systemd[1790]: Queued start job for default target default.target. Dec 13 02:04:49.458476 systemd[1790]: Created slice app.slice - User Application Slice. Dec 13 02:04:49.458504 systemd[1790]: Reached target paths.target - Paths. Dec 13 02:04:49.458517 systemd[1790]: Reached target timers.target - Timers. Dec 13 02:04:49.460214 systemd[1790]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 02:04:49.494084 systemd[1790]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 02:04:49.494410 systemd[1790]: Reached target sockets.target - Sockets. Dec 13 02:04:49.494440 systemd[1790]: Reached target basic.target - Basic System. Dec 13 02:04:49.494511 systemd[1790]: Reached target default.target - Main User Target. Dec 13 02:04:49.494576 systemd[1790]: Startup finished in 211ms. Dec 13 02:04:49.496338 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 02:04:49.505464 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 02:04:49.714352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Dec 13 02:04:49.722426 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:04:49.939254 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 02:04:49.940681 (kubelet)[1808]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:04:49.988334 kubelet[1808]: E1213 02:04:49.988156 1808 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:04:49.995070 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:04:49.995285 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:04:50.206609 systemd[1]: Started sshd@1-49.13.63.199:22-147.75.109.163:39366.service - OpenSSH per-connection server daemon (147.75.109.163:39366). Dec 13 02:04:51.215633 sshd[1817]: Accepted publickey for core from 147.75.109.163 port 39366 ssh2: RSA SHA256:sCKVP2ZoT/a84yhHrxpuO7m4jAwnggg/oTrfebs5XY0 Dec 13 02:04:51.218731 sshd[1817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:04:51.228260 systemd-logind[1472]: New session 2 of user core. Dec 13 02:04:51.234330 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 02:04:51.912790 sshd[1817]: pam_unix(sshd:session): session closed for user core Dec 13 02:04:51.921711 systemd[1]: sshd@1-49.13.63.199:22-147.75.109.163:39366.service: Deactivated successfully. Dec 13 02:04:51.927100 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 02:04:51.928816 systemd-logind[1472]: Session 2 logged out. Waiting for processes to exit. Dec 13 02:04:51.931469 systemd-logind[1472]: Removed session 2. Dec 13 02:04:52.090512 systemd[1]: Started sshd@2-49.13.63.199:22-147.75.109.163:39378.service - OpenSSH per-connection server daemon (147.75.109.163:39378). Dec 13 02:04:53.096681 sshd[1824]: Accepted publickey for core from 147.75.109.163 port 39378 ssh2: RSA SHA256:sCKVP2ZoT/a84yhHrxpuO7m4jAwnggg/oTrfebs5XY0 Dec 13 02:04:53.100626 sshd[1824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:04:53.110807 systemd-logind[1472]: New session 3 of user core. Dec 13 02:04:53.126636 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 02:04:53.782148 sshd[1824]: pam_unix(sshd:session): session closed for user core Dec 13 02:04:53.790786 systemd[1]: sshd@2-49.13.63.199:22-147.75.109.163:39378.service: Deactivated successfully. Dec 13 02:04:53.795913 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 02:04:53.797556 systemd-logind[1472]: Session 3 logged out. Waiting for processes to exit. Dec 13 02:04:53.799831 systemd-logind[1472]: Removed session 3. Dec 13 02:04:53.960710 systemd[1]: Started sshd@3-49.13.63.199:22-147.75.109.163:39384.service - OpenSSH per-connection server daemon (147.75.109.163:39384). Dec 13 02:04:54.971105 sshd[1831]: Accepted publickey for core from 147.75.109.163 port 39384 ssh2: RSA SHA256:sCKVP2ZoT/a84yhHrxpuO7m4jAwnggg/oTrfebs5XY0 Dec 13 02:04:54.975403 sshd[1831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:04:54.984910 systemd-logind[1472]: New session 4 of user core. Dec 13 02:04:54.992421 systemd[1]: Started session-4.scope - Session 4 of User core. 
Dec 13 02:04:55.659895 sshd[1831]: pam_unix(sshd:session): session closed for user core
Dec 13 02:04:55.665667 systemd[1]: sshd@3-49.13.63.199:22-147.75.109.163:39384.service: Deactivated successfully.
Dec 13 02:04:55.669499 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 02:04:55.671632 systemd-logind[1472]: Session 4 logged out. Waiting for processes to exit.
Dec 13 02:04:55.673836 systemd-logind[1472]: Removed session 4.
Dec 13 02:04:55.843454 systemd[1]: Started sshd@4-49.13.63.199:22-147.75.109.163:39396.service - OpenSSH per-connection server daemon (147.75.109.163:39396).
Dec 13 02:04:56.863349 sshd[1838]: Accepted publickey for core from 147.75.109.163 port 39396 ssh2: RSA SHA256:sCKVP2ZoT/a84yhHrxpuO7m4jAwnggg/oTrfebs5XY0
Dec 13 02:04:56.867485 sshd[1838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:04:56.877876 systemd-logind[1472]: New session 5 of user core.
Dec 13 02:04:56.890389 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 02:04:57.415900 sudo[1841]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 02:04:57.416298 sudo[1841]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 02:04:57.435701 sudo[1841]: pam_unix(sudo:session): session closed for user root
Dec 13 02:04:57.598220 sshd[1838]: pam_unix(sshd:session): session closed for user core
Dec 13 02:04:57.605060 systemd[1]: sshd@4-49.13.63.199:22-147.75.109.163:39396.service: Deactivated successfully.
Dec 13 02:04:57.608985 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 02:04:57.610547 systemd-logind[1472]: Session 5 logged out. Waiting for processes to exit.
Dec 13 02:04:57.612821 systemd-logind[1472]: Removed session 5.
Dec 13 02:04:57.773678 systemd[1]: Started sshd@5-49.13.63.199:22-147.75.109.163:38172.service - OpenSSH per-connection server daemon (147.75.109.163:38172).
Dec 13 02:04:58.747520 sshd[1846]: Accepted publickey for core from 147.75.109.163 port 38172 ssh2: RSA SHA256:sCKVP2ZoT/a84yhHrxpuO7m4jAwnggg/oTrfebs5XY0
Dec 13 02:04:58.750897 sshd[1846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:04:58.760570 systemd-logind[1472]: New session 6 of user core.
Dec 13 02:04:58.769306 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 02:04:59.269760 sudo[1850]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 02:04:59.270229 sudo[1850]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 02:04:59.274409 sudo[1850]: pam_unix(sudo:session): session closed for user root
Dec 13 02:04:59.281339 sudo[1849]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Dec 13 02:04:59.281711 sudo[1849]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 02:04:59.297349 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Dec 13 02:04:59.301108 auditctl[1853]: No rules
Dec 13 02:04:59.301849 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 02:04:59.302162 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Dec 13 02:04:59.308719 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 02:04:59.368697 augenrules[1871]: No rules
Dec 13 02:04:59.371797 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 02:04:59.373706 sudo[1849]: pam_unix(sudo:session): session closed for user root
Dec 13 02:04:59.532764 sshd[1846]: pam_unix(sshd:session): session closed for user core
Dec 13 02:04:59.541867 systemd[1]: sshd@5-49.13.63.199:22-147.75.109.163:38172.service: Deactivated successfully.
Dec 13 02:04:59.546080 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 02:04:59.547258 systemd-logind[1472]: Session 6 logged out. Waiting for processes to exit.
Dec 13 02:04:59.549147 systemd-logind[1472]: Removed session 6.
Dec 13 02:04:59.709702 systemd[1]: Started sshd@6-49.13.63.199:22-147.75.109.163:38186.service - OpenSSH per-connection server daemon (147.75.109.163:38186).
Dec 13 02:05:00.214468 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Dec 13 02:05:00.221411 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 02:05:00.455315 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 02:05:00.456650 (kubelet)[1889]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 02:05:00.517778 kubelet[1889]: E1213 02:05:00.517536 1889 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:05:00.522153 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:05:00.522639 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:05:00.714806 sshd[1879]: Accepted publickey for core from 147.75.109.163 port 38186 ssh2: RSA SHA256:sCKVP2ZoT/a84yhHrxpuO7m4jAwnggg/oTrfebs5XY0
Dec 13 02:05:00.718490 sshd[1879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:05:00.729090 systemd-logind[1472]: New session 7 of user core.
Dec 13 02:05:00.741436 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 02:05:01.246210 sudo[1898]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 02:05:01.247182 sudo[1898]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 02:05:01.747536 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 02:05:01.750046 (dockerd)[1914]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 02:05:02.280694 dockerd[1914]: time="2024-12-13T02:05:02.280393156Z" level=info msg="Starting up"
Dec 13 02:05:02.510633 dockerd[1914]: time="2024-12-13T02:05:02.510315903Z" level=info msg="Loading containers: start."
Dec 13 02:05:02.717104 kernel: Initializing XFRM netlink socket
Dec 13 02:05:02.830344 systemd-networkd[1398]: docker0: Link UP
Dec 13 02:05:02.856697 dockerd[1914]: time="2024-12-13T02:05:02.856622390Z" level=info msg="Loading containers: done."
Dec 13 02:05:02.885350 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4146822437-merged.mount: Deactivated successfully.
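dockerd is mid-startup here; the "Daemon has completed initialization" and "API listen on /run/docker.sock" entries just below mark the point where the engine accepts requests. Once that socket is up, it can be health-checked with the Engine API's /_ping endpoint. A hedged Go sketch, standard library only (the socket path is the default advertised below; the rest is illustrative, not how systemd itself checks the unit):

    package main

    import (
    	"context"
    	"fmt"
    	"io"
    	"net"
    	"net/http"
    )

    func main() {
    	// Route all HTTP traffic over the daemon's unix socket.
    	client := &http.Client{
    		Transport: &http.Transport{
    			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
    				return (&net.Dialer{}).DialContext(ctx, "unix", "/run/docker.sock")
    			},
    		},
    	}
    	resp, err := client.Get("http://docker/_ping") // host part is ignored for unix sockets
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%s %s\n", resp.Status, body) // a healthy daemon answers 200 with body "OK"
    }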
Dec 13 02:05:02.890921 dockerd[1914]: time="2024-12-13T02:05:02.890816101Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 02:05:02.891333 dockerd[1914]: time="2024-12-13T02:05:02.891007418Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Dec 13 02:05:02.891548 dockerd[1914]: time="2024-12-13T02:05:02.891505709Z" level=info msg="Daemon has completed initialization"
Dec 13 02:05:02.952322 dockerd[1914]: time="2024-12-13T02:05:02.952200854Z" level=info msg="API listen on /run/docker.sock"
Dec 13 02:05:02.952658 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 13 02:05:04.585782 containerd[1497]: time="2024-12-13T02:05:04.585728767Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\""
Dec 13 02:05:05.345264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount913293245.mount: Deactivated successfully.
Dec 13 02:05:08.006481 containerd[1497]: time="2024-12-13T02:05:08.006394310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:05:08.007665 containerd[1497]: time="2024-12-13T02:05:08.007624595Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139346"
Dec 13 02:05:08.008887 containerd[1497]: time="2024-12-13T02:05:08.008849290Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:05:08.011825 containerd[1497]: time="2024-12-13T02:05:08.011789709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:05:08.013604 containerd[1497]: time="2024-12-13T02:05:08.013135019Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 3.427360776s"
Dec 13 02:05:08.013604 containerd[1497]: time="2024-12-13T02:05:08.013175526Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\""
Dec 13 02:05:08.038215 containerd[1497]: time="2024-12-13T02:05:08.037999364Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Dec 13 02:05:10.714433 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Dec 13 02:05:10.722380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 02:05:10.955300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 02:05:10.956721 (kubelet)[2129]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 02:05:11.002675 kubelet[2129]: E1213 02:05:11.002243 2129 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:05:11.007658 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:05:11.007868 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:05:11.550280 containerd[1497]: time="2024-12-13T02:05:11.550199716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:05:11.554844 containerd[1497]: time="2024-12-13T02:05:11.554563643Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217752"
Dec 13 02:05:11.560634 containerd[1497]: time="2024-12-13T02:05:11.560540897Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:05:11.563942 containerd[1497]: time="2024-12-13T02:05:11.563902834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:05:11.565274 containerd[1497]: time="2024-12-13T02:05:11.564995784Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 3.526774696s"
Dec 13 02:05:11.565274 containerd[1497]: time="2024-12-13T02:05:11.565057991Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Dec 13 02:05:11.596001 containerd[1497]: time="2024-12-13T02:05:11.595848698Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Dec 13 02:05:13.295827 containerd[1497]: time="2024-12-13T02:05:13.295763746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:05:13.297067 containerd[1497]: time="2024-12-13T02:05:13.297028781Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332842"
Dec 13 02:05:13.298093 containerd[1497]: time="2024-12-13T02:05:13.298039500Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:05:13.301054 containerd[1497]: time="2024-12-13T02:05:13.301002924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:05:13.302163 containerd[1497]: time="2024-12-13T02:05:13.301986189Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.70608895s"
Dec 13 02:05:13.302163 containerd[1497]: time="2024-12-13T02:05:13.302031024Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\""
Dec 13 02:05:13.327073 containerd[1497]: time="2024-12-13T02:05:13.327005368Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 02:05:14.450220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3754277318.mount: Deactivated successfully.
Dec 13 02:05:15.881119 containerd[1497]: time="2024-12-13T02:05:15.881002740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:05:15.882515 containerd[1497]: time="2024-12-13T02:05:15.882457655Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619984"
Dec 13 02:05:15.884715 containerd[1497]: time="2024-12-13T02:05:15.884667017Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:05:15.887912 containerd[1497]: time="2024-12-13T02:05:15.887866388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:05:15.889056 containerd[1497]: time="2024-12-13T02:05:15.888988287Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 2.561925721s"
Dec 13 02:05:15.889194 containerd[1497]: time="2024-12-13T02:05:15.889170770Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Dec 13 02:05:15.924985 containerd[1497]: time="2024-12-13T02:05:15.924909419Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 02:05:16.578643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2966110880.mount: Deactivated successfully.
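Each "Pulled image" entry above records both a repo tag and a repo digest; the digest is the content address containerd actually trusts, while the tag is mutable. A small hedged Go sketch of splitting such a reference, deliberately simplified and not containerd's real parser (reference parsing in containerd goes through the distribution reference grammar):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // splitDigestRef splits "repo@sha256:<64 hex chars>" as seen in the
    // "repo digest" fields of the log entries above.
    func splitDigestRef(ref string) (repo, digest string, err error) {
    	i := strings.LastIndex(ref, "@")
    	if i < 0 {
    		return "", "", fmt.Errorf("no digest in %q", ref)
    	}
    	repo, digest = ref[:i], ref[i+1:]
    	if !strings.HasPrefix(digest, "sha256:") || len(digest) != len("sha256:")+64 {
    		return "", "", fmt.Errorf("malformed digest %q", digest)
    	}
    	return repo, digest, nil
    }

    func main() {
    	// Digest taken from the coredns pull below.
    	repo, dgst, err := splitDigestRef("registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(repo, dgst)
    }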
Dec 13 02:05:17.697239 containerd[1497]: time="2024-12-13T02:05:17.697119051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:05:17.698804 containerd[1497]: time="2024-12-13T02:05:17.698734860Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185841"
Dec 13 02:05:17.699888 containerd[1497]: time="2024-12-13T02:05:17.699839697Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:05:17.704375 containerd[1497]: time="2024-12-13T02:05:17.704318901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:05:17.705803 containerd[1497]: time="2024-12-13T02:05:17.705390687Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.780427988s"
Dec 13 02:05:17.705803 containerd[1497]: time="2024-12-13T02:05:17.705417547Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 02:05:17.730415 containerd[1497]: time="2024-12-13T02:05:17.730379319Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 02:05:18.282773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount437135514.mount: Deactivated successfully.
Dec 13 02:05:18.297385 containerd[1497]: time="2024-12-13T02:05:18.297253572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:05:18.298871 containerd[1497]: time="2024-12-13T02:05:18.298782217Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322310"
Dec 13 02:05:18.300533 containerd[1497]: time="2024-12-13T02:05:18.300417052Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:05:18.306486 containerd[1497]: time="2024-12-13T02:05:18.306364401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:05:18.309208 containerd[1497]: time="2024-12-13T02:05:18.308820071Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 578.236107ms"
Dec 13 02:05:18.309208 containerd[1497]: time="2024-12-13T02:05:18.308910151Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Dec 13 02:05:18.362621 containerd[1497]: time="2024-12-13T02:05:18.362543566Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Dec 13 02:05:18.945988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3790554305.mount: Deactivated successfully.
Dec 13 02:05:21.213672 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14.
Dec 13 02:05:21.219342 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 02:05:21.400311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 02:05:21.412061 (kubelet)[2276]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 02:05:21.492384 kubelet[2276]: E1213 02:05:21.491606 2276 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:05:21.495701 containerd[1497]: time="2024-12-13T02:05:21.494471333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:05:21.497160 containerd[1497]: time="2024-12-13T02:05:21.497122565Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651705"
Dec 13 02:05:21.497734 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:05:21.497988 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:05:21.498359 containerd[1497]: time="2024-12-13T02:05:21.498295715Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:05:21.501969 containerd[1497]: time="2024-12-13T02:05:21.501939807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:05:21.503177 containerd[1497]: time="2024-12-13T02:05:21.503135789Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.140526298s"
Dec 13 02:05:21.503373 containerd[1497]: time="2024-12-13T02:05:21.503256325Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Dec 13 02:05:24.334577 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 02:05:24.344343 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 02:05:24.380844 systemd[1]: Reloading requested from client PID 2347 ('systemctl') (unit session-7.scope)...
Dec 13 02:05:24.380857 systemd[1]: Reloading...
Dec 13 02:05:24.549046 zram_generator::config[2390]: No configuration found.
Dec 13 02:05:24.679594 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:05:24.772476 systemd[1]: Reloading finished in 390 ms.
Dec 13 02:05:24.835151 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 02:05:24.835276 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 02:05:24.835704 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 02:05:24.840603 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 02:05:25.031368 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 02:05:25.031509 (kubelet)[2441]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 02:05:25.087690 kubelet[2441]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 02:05:25.088098 kubelet[2441]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 02:05:25.088153 kubelet[2441]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 02:05:25.088333 kubelet[2441]: I1213 02:05:25.088299 2441 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 02:05:25.641281 kubelet[2441]: I1213 02:05:25.641177 2441 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 02:05:25.641281 kubelet[2441]: I1213 02:05:25.641227 2441 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 02:05:25.641551 kubelet[2441]: I1213 02:05:25.641529 2441 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 02:05:25.689955 kubelet[2441]: E1213 02:05:25.689902 2441 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://49.13.63.199:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 49.13.63.199:6443: connect: connection refused
Dec 13 02:05:25.690431 kubelet[2441]: I1213 02:05:25.689925 2441 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 02:05:25.712075 kubelet[2441]: I1213 02:05:25.712041 2441 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 02:05:25.713546 kubelet[2441]: I1213 02:05:25.713307 2441 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 02:05:25.714762 kubelet[2441]: I1213 02:05:25.714659 2441 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 02:05:25.715472 kubelet[2441]: I1213 02:05:25.715389 2441 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 02:05:25.715472 kubelet[2441]: I1213 02:05:25.715429 2441 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 02:05:25.715596 kubelet[2441]: I1213 02:05:25.715572 2441 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 02:05:25.716371 kubelet[2441]: W1213 02:05:25.716320 2441 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://49.13.63.199:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-3-45a43b40ef&limit=500&resourceVersion=0": dial tcp 49.13.63.199:6443: connect: connection refused
Dec 13 02:05:25.716428 kubelet[2441]: E1213 02:05:25.716380 2441 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://49.13.63.199:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-3-45a43b40ef&limit=500&resourceVersion=0": dial tcp 49.13.63.199:6443: connect: connection refused
Dec 13 02:05:25.717550 kubelet[2441]: I1213 02:05:25.717524 2441 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 02:05:25.717550 kubelet[2441]: I1213 02:05:25.717545 2441 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 02:05:25.717632 kubelet[2441]: I1213 02:05:25.717579 2441 kubelet.go:312] "Adding apiserver pod source"
Dec 13 02:05:25.717632 kubelet[2441]: I1213 02:05:25.717591 2441 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 02:05:25.719884 kubelet[2441]: W1213 02:05:25.719029 2441 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://49.13.63.199:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.63.199:6443: connect: connection refused
Dec 13 02:05:25.719884 kubelet[2441]: E1213 02:05:25.719068 2441 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://49.13.63.199:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.63.199:6443: connect: connection refused
Dec 13 02:05:25.719884 kubelet[2441]: I1213 02:05:25.719495 2441 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 02:05:25.724048 kubelet[2441]: I1213 02:05:25.723994 2441 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 02:05:25.725322 kubelet[2441]: W1213 02:05:25.725294 2441 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
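Every client-go reflector above fails with "dial tcp 49.13.63.199:6443: connect: connection refused" because the API server it wants is itself a static pod that this same kubelet has yet to start from /etc/kubernetes/manifests; kubelet simply retries, and the lease controller below backs off 200ms, then 400ms, 800ms and 1.6s. A hedged Go sketch of that kind of doubling TCP probe (illustrative; not client-go's actual wait logic, only the endpoint is taken from this log):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForAPIServer polls a TCP endpoint with doubling backoff until it
    // accepts connections, mirroring the retry intervals in the lease errors.
    func waitForAPIServer(addr string, start, max time.Duration) {
    	interval := start
    	for {
    		conn, err := net.DialTimeout("tcp", addr, interval)
    		if err == nil {
    			conn.Close()
    			fmt.Println("api server is up at", addr)
    			return
    		}
    		fmt.Printf("dial %s: %v; retrying in %s\n", addr, err, interval)
    		time.Sleep(interval)
    		if interval < max {
    			interval *= 2 // 200ms -> 400ms -> 800ms -> 1.6s -> ...
    		}
    	}
    }

    func main() {
    	waitForAPIServer("49.13.63.199:6443", 200*time.Millisecond, 10*time.Second)
    }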
Dec 13 02:05:25.726432 kubelet[2441]: I1213 02:05:25.726266 2441 server.go:1256] "Started kubelet"
Dec 13 02:05:25.726548 kubelet[2441]: I1213 02:05:25.726522 2441 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 02:05:25.729456 kubelet[2441]: I1213 02:05:25.729003 2441 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 02:05:25.734566 kubelet[2441]: I1213 02:05:25.733835 2441 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 02:05:25.734566 kubelet[2441]: I1213 02:05:25.734133 2441 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 02:05:25.734566 kubelet[2441]: I1213 02:05:25.734151 2441 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 02:05:25.741454 kubelet[2441]: I1213 02:05:25.741405 2441 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 02:05:25.742755 kubelet[2441]: I1213 02:05:25.742720 2441 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 02:05:25.742872 kubelet[2441]: I1213 02:05:25.742838 2441 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 02:05:25.752417 kubelet[2441]: E1213 02:05:25.752375 2441 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://49.13.63.199:6443/api/v1/namespaces/default/events\": dial tcp 49.13.63.199:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-3-45a43b40ef.18109a502fb70bd8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-3-45a43b40ef,UID:ci-4081-2-1-3-45a43b40ef,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-3-45a43b40ef,},FirstTimestamp:2024-12-13 02:05:25.726227416 +0000 UTC m=+0.687755819,LastTimestamp:2024-12-13 02:05:25.726227416 +0000 UTC m=+0.687755819,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-3-45a43b40ef,}"
Dec 13 02:05:25.757961 kubelet[2441]: E1213 02:05:25.757180 2441 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.63.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-3-45a43b40ef?timeout=10s\": dial tcp 49.13.63.199:6443: connect: connection refused" interval="200ms"
Dec 13 02:05:25.757961 kubelet[2441]: W1213 02:05:25.757224 2441 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://49.13.63.199:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.63.199:6443: connect: connection refused
Dec 13 02:05:25.757961 kubelet[2441]: E1213 02:05:25.757292 2441 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://49.13.63.199:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.63.199:6443: connect: connection refused
Dec 13 02:05:25.760113 kubelet[2441]: I1213 02:05:25.760089 2441 factory.go:221] Registration of the systemd container factory successfully
Dec 13 02:05:25.760712 kubelet[2441]: I1213 02:05:25.760689 2441 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 02:05:25.764872 kubelet[2441]: I1213 02:05:25.764850 2441 factory.go:221] Registration of the containerd container factory successfully
Dec 13 02:05:25.783433 kubelet[2441]: I1213 02:05:25.783375 2441 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 02:05:25.786117 kubelet[2441]: I1213 02:05:25.786083 2441 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 02:05:25.786189 kubelet[2441]: I1213 02:05:25.786146 2441 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 02:05:25.786218 kubelet[2441]: I1213 02:05:25.786187 2441 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 02:05:25.786370 kubelet[2441]: E1213 02:05:25.786333 2441 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 02:05:25.801661 kubelet[2441]: W1213 02:05:25.801483 2441 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://49.13.63.199:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.63.199:6443: connect: connection refused
Dec 13 02:05:25.801661 kubelet[2441]: E1213 02:05:25.801569 2441 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://49.13.63.199:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.63.199:6443: connect: connection refused
Dec 13 02:05:25.809362 kubelet[2441]: I1213 02:05:25.809187 2441 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 02:05:25.809362 kubelet[2441]: I1213 02:05:25.809217 2441 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 02:05:25.809362 kubelet[2441]: I1213 02:05:25.809235 2441 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 02:05:25.813949 kubelet[2441]: I1213 02:05:25.813870 2441 policy_none.go:49] "None policy: Start"
Dec 13 02:05:25.814747 kubelet[2441]: I1213 02:05:25.814708 2441 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 02:05:25.814796 kubelet[2441]: I1213 02:05:25.814760 2441 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 02:05:25.826724 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 13 02:05:25.846408 kubelet[2441]: I1213 02:05:25.845993 2441 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:05:25.846408 kubelet[2441]: E1213 02:05:25.846386 2441 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.13.63.199:6443/api/v1/nodes\": dial tcp 49.13.63.199:6443: connect: connection refused" node="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:05:25.847574 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 13 02:05:25.855276 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 13 02:05:25.868895 kubelet[2441]: I1213 02:05:25.868642 2441 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 02:05:25.869143 kubelet[2441]: I1213 02:05:25.869122 2441 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 02:05:25.873182 kubelet[2441]: E1213 02:05:25.873057 2441 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-2-1-3-45a43b40ef\" not found"
Dec 13 02:05:25.887043 kubelet[2441]: I1213 02:05:25.886916 2441 topology_manager.go:215] "Topology Admit Handler" podUID="9a7f7d4c2b72e0a40467ceebfc0774c8" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-3-45a43b40ef"
Dec 13 02:05:25.892144 kubelet[2441]: I1213 02:05:25.891460 2441 topology_manager.go:215] "Topology Admit Handler" podUID="577c84c15fa29de8f004a1f0ec46ca0b" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-3-45a43b40ef"
Dec 13 02:05:25.893960 kubelet[2441]: I1213 02:05:25.893780 2441 topology_manager.go:215] "Topology Admit Handler" podUID="1ad593c967432868037640a67945c522" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-3-45a43b40ef"
Dec 13 02:05:25.906790 systemd[1]: Created slice kubepods-burstable-pod9a7f7d4c2b72e0a40467ceebfc0774c8.slice - libcontainer container kubepods-burstable-pod9a7f7d4c2b72e0a40467ceebfc0774c8.slice.
Dec 13 02:05:25.925159 systemd[1]: Created slice kubepods-burstable-pod577c84c15fa29de8f004a1f0ec46ca0b.slice - libcontainer container kubepods-burstable-pod577c84c15fa29de8f004a1f0ec46ca0b.slice.
Dec 13 02:05:25.932980 systemd[1]: Created slice kubepods-burstable-pod1ad593c967432868037640a67945c522.slice - libcontainer container kubepods-burstable-pod1ad593c967432868037640a67945c522.slice.
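The slice names above follow kubelet's systemd cgroup naming for pods: kubepods-<qos>-pod<uid>.slice, with dashes in the pod UID escaped to underscores because a dash denotes hierarchy in systemd unit names (not visible here, since these static-pod UIDs are config-hash hex with no dashes). A sketch of the assumed mapping, simplified from what kubelet's cgroup driver does:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // podSliceName builds the slice name seen in the "Created slice" entries.
    // Assumed simplification: the Guaranteed QoS class has no intermediate
    // slice and its pods live directly under kubepods.slice.
    func podSliceName(qos, podUID string) string {
    	uid := strings.ReplaceAll(podUID, "-", "_") // escape hierarchy separator
    	if qos == "guaranteed" {
    		return fmt.Sprintf("kubepods-pod%s.slice", uid)
    	}
    	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, uid)
    }

    func main() {
    	fmt.Println(podSliceName("burstable", "9a7f7d4c2b72e0a40467ceebfc0774c8"))
    	// kubepods-burstable-pod9a7f7d4c2b72e0a40467ceebfc0774c8.slice
    }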
Dec 13 02:05:25.958980 kubelet[2441]: E1213 02:05:25.958912 2441 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.63.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-3-45a43b40ef?timeout=10s\": dial tcp 49.13.63.199:6443: connect: connection refused" interval="400ms"
Dec 13 02:05:26.043882 kubelet[2441]: I1213 02:05:26.043809 2441 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/577c84c15fa29de8f004a1f0ec46ca0b-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-3-45a43b40ef\" (UID: \"577c84c15fa29de8f004a1f0ec46ca0b\") " pod="kube-system/kube-scheduler-ci-4081-2-1-3-45a43b40ef"
Dec 13 02:05:26.043882 kubelet[2441]: I1213 02:05:26.043893 2441 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ad593c967432868037640a67945c522-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-3-45a43b40ef\" (UID: \"1ad593c967432868037640a67945c522\") " pod="kube-system/kube-apiserver-ci-4081-2-1-3-45a43b40ef"
Dec 13 02:05:26.044230 kubelet[2441]: I1213 02:05:26.043935 2441 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ad593c967432868037640a67945c522-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-3-45a43b40ef\" (UID: \"1ad593c967432868037640a67945c522\") " pod="kube-system/kube-apiserver-ci-4081-2-1-3-45a43b40ef"
Dec 13 02:05:26.044230 kubelet[2441]: I1213 02:05:26.043980 2441 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ad593c967432868037640a67945c522-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-3-45a43b40ef\" (UID: \"1ad593c967432868037640a67945c522\") " pod="kube-system/kube-apiserver-ci-4081-2-1-3-45a43b40ef"
Dec 13 02:05:26.044230 kubelet[2441]: I1213 02:05:26.044070 2441 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9a7f7d4c2b72e0a40467ceebfc0774c8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-3-45a43b40ef\" (UID: \"9a7f7d4c2b72e0a40467ceebfc0774c8\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-3-45a43b40ef"
Dec 13 02:05:26.044230 kubelet[2441]: I1213 02:05:26.044122 2441 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a7f7d4c2b72e0a40467ceebfc0774c8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-3-45a43b40ef\" (UID: \"9a7f7d4c2b72e0a40467ceebfc0774c8\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-3-45a43b40ef"
Dec 13 02:05:26.044230 kubelet[2441]: I1213 02:05:26.044175 2441 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a7f7d4c2b72e0a40467ceebfc0774c8-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-3-45a43b40ef\" (UID: \"9a7f7d4c2b72e0a40467ceebfc0774c8\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-3-45a43b40ef"
Dec 13 02:05:26.044568 kubelet[2441]: I1213 02:05:26.044275 2441 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a7f7d4c2b72e0a40467ceebfc0774c8-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-3-45a43b40ef\" (UID: \"9a7f7d4c2b72e0a40467ceebfc0774c8\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-3-45a43b40ef"
Dec 13 02:05:26.044568 kubelet[2441]: I1213 02:05:26.044334 2441 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a7f7d4c2b72e0a40467ceebfc0774c8-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-3-45a43b40ef\" (UID: \"9a7f7d4c2b72e0a40467ceebfc0774c8\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-3-45a43b40ef"
Dec 13 02:05:26.050922 kubelet[2441]: I1213 02:05:26.050702 2441 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:05:26.051556 kubelet[2441]: E1213 02:05:26.051475 2441 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.13.63.199:6443/api/v1/nodes\": dial tcp 49.13.63.199:6443: connect: connection refused" node="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:05:26.223921 containerd[1497]: time="2024-12-13T02:05:26.223821045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-3-45a43b40ef,Uid:9a7f7d4c2b72e0a40467ceebfc0774c8,Namespace:kube-system,Attempt:0,}"
Dec 13 02:05:26.231705 containerd[1497]: time="2024-12-13T02:05:26.231231420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-3-45a43b40ef,Uid:577c84c15fa29de8f004a1f0ec46ca0b,Namespace:kube-system,Attempt:0,}"
Dec 13 02:05:26.238709 containerd[1497]: time="2024-12-13T02:05:26.238614895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-3-45a43b40ef,Uid:1ad593c967432868037640a67945c522,Namespace:kube-system,Attempt:0,}"
Dec 13 02:05:26.360432 kubelet[2441]: E1213 02:05:26.360369 2441 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.63.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-3-45a43b40ef?timeout=10s\": dial tcp 49.13.63.199:6443: connect: connection refused" interval="800ms"
Dec 13 02:05:26.456172 kubelet[2441]: I1213 02:05:26.455899 2441 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:05:26.456842 kubelet[2441]: E1213 02:05:26.456655 2441 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.13.63.199:6443/api/v1/nodes\": dial tcp 49.13.63.199:6443: connect: connection refused" node="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:05:26.531718 kubelet[2441]: W1213 02:05:26.531410 2441 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://49.13.63.199:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-3-45a43b40ef&limit=500&resourceVersion=0": dial tcp 49.13.63.199:6443: connect: connection refused
Dec 13 02:05:26.531718 kubelet[2441]: E1213 02:05:26.531519 2441 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://49.13.63.199:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-3-45a43b40ef&limit=500&resourceVersion=0": dial tcp 49.13.63.199:6443: connect: connection refused
Dec 13 02:05:26.605147 kubelet[2441]: W1213 02:05:26.604960 2441 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://49.13.63.199:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.63.199:6443: connect: connection refused
Dec 13 02:05:26.605147 kubelet[2441]: E1213 02:05:26.605109 2441 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://49.13.63.199:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.63.199:6443: connect: connection refused
Dec 13 02:05:26.749435 kubelet[2441]: W1213 02:05:26.749339 2441 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://49.13.63.199:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.63.199:6443: connect: connection refused
Dec 13 02:05:26.749435 kubelet[2441]: E1213 02:05:26.749404 2441 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://49.13.63.199:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.63.199:6443: connect: connection refused
Dec 13 02:05:26.830498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3153851227.mount: Deactivated successfully.
Dec 13 02:05:26.850064 containerd[1497]: time="2024-12-13T02:05:26.848410107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 02:05:26.852868 containerd[1497]: time="2024-12-13T02:05:26.852811457Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 02:05:26.855568 containerd[1497]: time="2024-12-13T02:05:26.855427068Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076"
Dec 13 02:05:26.857503 containerd[1497]: time="2024-12-13T02:05:26.857395892Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 02:05:26.859468 containerd[1497]: time="2024-12-13T02:05:26.859402837Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 02:05:26.862847 containerd[1497]: time="2024-12-13T02:05:26.862736423Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 02:05:26.872051 containerd[1497]: time="2024-12-13T02:05:26.869990363Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 02:05:26.872051 containerd[1497]: time="2024-12-13T02:05:26.871846954Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 640.446916ms"
Dec 13 02:05:26.875634 containerd[1497]: time="2024-12-13T02:05:26.875479244Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 636.742821ms"
Dec 13 02:05:26.879317 containerd[1497]: time="2024-12-13T02:05:26.876810424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 02:05:26.879828 containerd[1497]: time="2024-12-13T02:05:26.876834891Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 652.872016ms"
Dec 13 02:05:27.087339 containerd[1497]: time="2024-12-13T02:05:27.086845848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:05:27.087339 containerd[1497]: time="2024-12-13T02:05:27.086916642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:05:27.087339 containerd[1497]: time="2024-12-13T02:05:27.086931921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:05:27.087339 containerd[1497]: time="2024-12-13T02:05:27.087032600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:05:27.092282 containerd[1497]: time="2024-12-13T02:05:27.091833768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:05:27.092282 containerd[1497]: time="2024-12-13T02:05:27.091882849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:05:27.092282 containerd[1497]: time="2024-12-13T02:05:27.091893711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:05:27.092282 containerd[1497]: time="2024-12-13T02:05:27.091961618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:05:27.097493 containerd[1497]: time="2024-12-13T02:05:27.097162889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:05:27.097493 containerd[1497]: time="2024-12-13T02:05:27.097213245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:05:27.097493 containerd[1497]: time="2024-12-13T02:05:27.097227952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:05:27.097493 containerd[1497]: time="2024-12-13T02:05:27.097315828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:05:27.134203 systemd[1]: Started cri-containerd-c38d7c2bba2e7eeb132fbbf8908aaf76530ad5f6b70be3098838af7744e5fa2f.scope - libcontainer container c38d7c2bba2e7eeb132fbbf8908aaf76530ad5f6b70be3098838af7744e5fa2f.
Dec 13 02:05:27.146282 systemd[1]: Started cri-containerd-793d4c20f2751a07eb18ba333c47a59d7842e2cd3bfa338973974ce514368aa5.scope - libcontainer container 793d4c20f2751a07eb18ba333c47a59d7842e2cd3bfa338973974ce514368aa5.
Dec 13 02:05:27.148697 systemd[1]: Started cri-containerd-ebef83f307dc378e537a52890cbd42473331f50d1789f7d41e4737a89e36c116.scope - libcontainer container ebef83f307dc378e537a52890cbd42473331f50d1789f7d41e4737a89e36c116.
Dec 13 02:05:27.162687 kubelet[2441]: E1213 02:05:27.162650 2441 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.63.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-3-45a43b40ef?timeout=10s\": dial tcp 49.13.63.199:6443: connect: connection refused" interval="1.6s"
Dec 13 02:05:27.183187 kubelet[2441]: W1213 02:05:27.183110 2441 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://49.13.63.199:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.63.199:6443: connect: connection refused
Dec 13 02:05:27.184135 kubelet[2441]: E1213 02:05:27.184113 2441 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://49.13.63.199:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.63.199:6443: connect: connection refused
Dec 13 02:05:27.210604 containerd[1497]: time="2024-12-13T02:05:27.210135463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-3-45a43b40ef,Uid:577c84c15fa29de8f004a1f0ec46ca0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"793d4c20f2751a07eb18ba333c47a59d7842e2cd3bfa338973974ce514368aa5\""
Dec 13 02:05:27.216778 containerd[1497]: time="2024-12-13T02:05:27.216722108Z" level=info msg="CreateContainer within sandbox \"793d4c20f2751a07eb18ba333c47a59d7842e2cd3bfa338973974ce514368aa5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 02:05:27.249566 containerd[1497]: time="2024-12-13T02:05:27.249425938Z" level=info msg="CreateContainer within sandbox \"793d4c20f2751a07eb18ba333c47a59d7842e2cd3bfa338973974ce514368aa5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"451e500abfd3927a49bc4ffa99fbda25cada86a5fcefbd625999611bb101d3bb\""
Dec 13 02:05:27.252322 containerd[1497]: time="2024-12-13T02:05:27.252203768Z" level=info msg="StartContainer for \"451e500abfd3927a49bc4ffa99fbda25cada86a5fcefbd625999611bb101d3bb\""
Dec 13 02:05:27.254132 containerd[1497]: time="2024-12-13T02:05:27.254108471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-3-45a43b40ef,Uid:9a7f7d4c2b72e0a40467ceebfc0774c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"c38d7c2bba2e7eeb132fbbf8908aaf76530ad5f6b70be3098838af7744e5fa2f\""
Dec 13 02:05:27.255881 containerd[1497]: time="2024-12-13T02:05:27.255855647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-3-45a43b40ef,Uid:1ad593c967432868037640a67945c522,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebef83f307dc378e537a52890cbd42473331f50d1789f7d41e4737a89e36c116\""
Dec 13 02:05:27.259460 containerd[1497]: time="2024-12-13T02:05:27.258851558Z" level=info msg="CreateContainer within sandbox \"c38d7c2bba2e7eeb132fbbf8908aaf76530ad5f6b70be3098838af7744e5fa2f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 02:05:27.262533 containerd[1497]: time="2024-12-13T02:05:27.262416162Z" level=info msg="CreateContainer within sandbox \"ebef83f307dc378e537a52890cbd42473331f50d1789f7d41e4737a89e36c116\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 02:05:27.265749 kubelet[2441]: I1213 02:05:27.265483 2441 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:05:27.266480 kubelet[2441]: E1213 02:05:27.266463 2441 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.13.63.199:6443/api/v1/nodes\": dial tcp 49.13.63.199:6443: connect: connection refused" node="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:05:27.279779 containerd[1497]: time="2024-12-13T02:05:27.279719221Z" level=info msg="CreateContainer within sandbox \"c38d7c2bba2e7eeb132fbbf8908aaf76530ad5f6b70be3098838af7744e5fa2f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b33824d79dffb53c546ca29bb2f135069fcfbf2fd718199ec903f1ca4980f260\""
Dec 13 02:05:27.281094 containerd[1497]: time="2024-12-13T02:05:27.280571839Z" level=info msg="StartContainer for \"b33824d79dffb53c546ca29bb2f135069fcfbf2fd718199ec903f1ca4980f260\""
Dec 13 02:05:27.289990 systemd[1]: Started cri-containerd-451e500abfd3927a49bc4ffa99fbda25cada86a5fcefbd625999611bb101d3bb.scope - libcontainer container 451e500abfd3927a49bc4ffa99fbda25cada86a5fcefbd625999611bb101d3bb.
Dec 13 02:05:27.298868 containerd[1497]: time="2024-12-13T02:05:27.298801377Z" level=info msg="CreateContainer within sandbox \"ebef83f307dc378e537a52890cbd42473331f50d1789f7d41e4737a89e36c116\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5c4d196911bf42e3a7a9bb550dabd9430505d12ea7f8d35a93f1445d8de3b315\""
Dec 13 02:05:27.299999 containerd[1497]: time="2024-12-13T02:05:27.299957227Z" level=info msg="StartContainer for \"5c4d196911bf42e3a7a9bb550dabd9430505d12ea7f8d35a93f1445d8de3b315\""
Dec 13 02:05:27.325181 systemd[1]: Started cri-containerd-b33824d79dffb53c546ca29bb2f135069fcfbf2fd718199ec903f1ca4980f260.scope - libcontainer container b33824d79dffb53c546ca29bb2f135069fcfbf2fd718199ec903f1ca4980f260.
Dec 13 02:05:27.348156 systemd[1]: Started cri-containerd-5c4d196911bf42e3a7a9bb550dabd9430505d12ea7f8d35a93f1445d8de3b315.scope - libcontainer container 5c4d196911bf42e3a7a9bb550dabd9430505d12ea7f8d35a93f1445d8de3b315.
Dec 13 02:05:27.393478 containerd[1497]: time="2024-12-13T02:05:27.393051143Z" level=info msg="StartContainer for \"451e500abfd3927a49bc4ffa99fbda25cada86a5fcefbd625999611bb101d3bb\" returns successfully" Dec 13 02:05:27.414673 containerd[1497]: time="2024-12-13T02:05:27.414499231Z" level=info msg="StartContainer for \"b33824d79dffb53c546ca29bb2f135069fcfbf2fd718199ec903f1ca4980f260\" returns successfully" Dec 13 02:05:27.430831 containerd[1497]: time="2024-12-13T02:05:27.430671407Z" level=info msg="StartContainer for \"5c4d196911bf42e3a7a9bb550dabd9430505d12ea7f8d35a93f1445d8de3b315\" returns successfully" Dec 13 02:05:27.738545 kubelet[2441]: E1213 02:05:27.738510 2441 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://49.13.63.199:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 49.13.63.199:6443: connect: connection refused Dec 13 02:05:28.870144 kubelet[2441]: I1213 02:05:28.869468 2441 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-3-45a43b40ef" Dec 13 02:05:29.239381 kubelet[2441]: E1213 02:05:29.239324 2441 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-2-1-3-45a43b40ef\" not found" node="ci-4081-2-1-3-45a43b40ef" Dec 13 02:05:29.314581 kubelet[2441]: I1213 02:05:29.314385 2441 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-3-45a43b40ef" Dec 13 02:05:29.720777 kubelet[2441]: I1213 02:05:29.720700 2441 apiserver.go:52] "Watching apiserver" Dec 13 02:05:29.743519 kubelet[2441]: I1213 02:05:29.743389 2441 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 02:05:32.213812 systemd[1]: Reloading requested from client PID 2718 ('systemctl') (unit session-7.scope)... Dec 13 02:05:32.213832 systemd[1]: Reloading... Dec 13 02:05:32.347052 zram_generator::config[2761]: No configuration found. Dec 13 02:05:32.478412 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:05:32.584709 systemd[1]: Reloading finished in 370 ms. Dec 13 02:05:32.644546 kubelet[2441]: I1213 02:05:32.642193 2441 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:05:32.644077 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:05:32.661068 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:05:32.661518 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:05:32.661598 systemd[1]: kubelet.service: Consumed 1.325s CPU time, 112.5M memory peak, 0B memory swap peak. Dec 13 02:05:32.670366 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:05:32.856320 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:05:32.873231 (kubelet)[2808]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 02:05:32.985815 kubelet[2808]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
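The deprecation warnings here and just below say these kubelet flags should move into the file passed via --config. A sketch of a helper that writes such a KubeletConfiguration, assuming the v1beta1 field names containerRuntimeEndpoint and volumePluginDir from the upstream schema; the containerd socket value is a placeholder, while the volume plugin directory matches the /opt/libexec/... path this kubelet probes later in the log.

package main

import (
    "fmt"
    "os"
)

// kubeletConfig is a minimal config-file equivalent of the deprecated flags.
// Field names follow the KubeletConfiguration v1beta1 schema; values here
// are illustrative, not the settings this node actually used.
const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
`

func main() {
    if err := os.WriteFile("/tmp/kubelet-config.yaml", []byte(kubeletConfig), 0o644); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println("wrote /tmp/kubelet-config.yaml")
}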
Dec 13 02:05:32.985815 kubelet[2808]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:05:32.985815 kubelet[2808]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:05:32.985815 kubelet[2808]: I1213 02:05:32.985469 2808 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:05:32.990830 kubelet[2808]: I1213 02:05:32.990791 2808 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 02:05:32.990830 kubelet[2808]: I1213 02:05:32.990813 2808 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:05:32.992034 kubelet[2808]: I1213 02:05:32.990970 2808 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 02:05:32.992531 kubelet[2808]: I1213 02:05:32.992501 2808 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 02:05:32.994960 kubelet[2808]: I1213 02:05:32.994597 2808 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:05:33.004892 kubelet[2808]: I1213 02:05:33.004845 2808 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 02:05:33.007788 kubelet[2808]: I1213 02:05:33.007421 2808 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:05:33.007788 kubelet[2808]: I1213 02:05:33.007629 2808 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:05:33.007788 kubelet[2808]: I1213 02:05:33.007660 2808 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:05:33.007788 kubelet[2808]: I1213 02:05:33.007673 2808 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 02:05:33.007788 
kubelet[2808]: I1213 02:05:33.007713 2808 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:05:33.008128 kubelet[2808]: I1213 02:05:33.008093 2808 kubelet.go:396] "Attempting to sync node with API server" Dec 13 02:05:33.008128 kubelet[2808]: I1213 02:05:33.008114 2808 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:05:33.008181 kubelet[2808]: I1213 02:05:33.008146 2808 kubelet.go:312] "Adding apiserver pod source" Dec 13 02:05:33.008181 kubelet[2808]: I1213 02:05:33.008163 2808 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:05:33.012798 kubelet[2808]: I1213 02:05:33.012696 2808 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 02:05:33.016433 kubelet[2808]: I1213 02:05:33.016409 2808 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:05:33.017340 kubelet[2808]: I1213 02:05:33.017320 2808 server.go:1256] "Started kubelet" Dec 13 02:05:33.026248 kubelet[2808]: I1213 02:05:33.025192 2808 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:05:33.039712 kubelet[2808]: I1213 02:05:33.039642 2808 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:05:33.041576 kubelet[2808]: I1213 02:05:33.041293 2808 server.go:461] "Adding debug handlers to kubelet server" Dec 13 02:05:33.046118 kubelet[2808]: I1213 02:05:33.044857 2808 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:05:33.046118 kubelet[2808]: I1213 02:05:33.045054 2808 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:05:33.046860 kubelet[2808]: I1213 02:05:33.046838 2808 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:05:33.048712 kubelet[2808]: I1213 02:05:33.046920 2808 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 02:05:33.048712 kubelet[2808]: I1213 02:05:33.047088 2808 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 02:05:33.051137 kubelet[2808]: I1213 02:05:33.051122 2808 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:05:33.052118 kubelet[2808]: I1213 02:05:33.051282 2808 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:05:33.053482 kubelet[2808]: I1213 02:05:33.052933 2808 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:05:33.056337 kubelet[2808]: I1213 02:05:33.056307 2808 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 02:05:33.056337 kubelet[2808]: I1213 02:05:33.056348 2808 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:05:33.056484 kubelet[2808]: I1213 02:05:33.056387 2808 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 02:05:33.056484 kubelet[2808]: E1213 02:05:33.056439 2808 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:05:33.056984 kubelet[2808]: E1213 02:05:33.056972 2808 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:05:33.057429 kubelet[2808]: I1213 02:05:33.057415 2808 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:05:33.120666 kubelet[2808]: I1213 02:05:33.120555 2808 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:05:33.120838 kubelet[2808]: I1213 02:05:33.120824 2808 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:05:33.120915 kubelet[2808]: I1213 02:05:33.120904 2808 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:05:33.121463 kubelet[2808]: I1213 02:05:33.121446 2808 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 02:05:33.121762 kubelet[2808]: I1213 02:05:33.121748 2808 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 02:05:33.121854 kubelet[2808]: I1213 02:05:33.121842 2808 policy_none.go:49] "None policy: Start" Dec 13 02:05:33.122525 kubelet[2808]: I1213 02:05:33.122513 2808 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:05:33.122625 kubelet[2808]: I1213 02:05:33.122615 2808 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:05:33.122890 kubelet[2808]: I1213 02:05:33.122847 2808 state_mem.go:75] "Updated machine memory state" Dec 13 02:05:33.131497 kubelet[2808]: I1213 02:05:33.131477 2808 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:05:33.131897 kubelet[2808]: I1213 02:05:33.131885 2808 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:05:33.153518 kubelet[2808]: I1213 02:05:33.153492 2808 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-3-45a43b40ef" Dec 13 02:05:33.157793 kubelet[2808]: I1213 02:05:33.156524 2808 topology_manager.go:215] "Topology Admit Handler" podUID="9a7f7d4c2b72e0a40467ceebfc0774c8" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-3-45a43b40ef" Dec 13 02:05:33.158061 kubelet[2808]: I1213 02:05:33.158047 2808 topology_manager.go:215] "Topology Admit Handler" podUID="577c84c15fa29de8f004a1f0ec46ca0b" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-3-45a43b40ef" Dec 13 02:05:33.158206 kubelet[2808]: I1213 02:05:33.158185 2808 topology_manager.go:215] "Topology Admit Handler" podUID="1ad593c967432868037640a67945c522" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-3-45a43b40ef" Dec 13 02:05:33.172098 kubelet[2808]: I1213 02:05:33.171859 2808 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-2-1-3-45a43b40ef" Dec 13 02:05:33.172098 kubelet[2808]: I1213 02:05:33.171949 2808 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-3-45a43b40ef" Dec 13 02:05:33.248548 kubelet[2808]: I1213 02:05:33.248254 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9a7f7d4c2b72e0a40467ceebfc0774c8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-3-45a43b40ef\" (UID: \"9a7f7d4c2b72e0a40467ceebfc0774c8\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-3-45a43b40ef" Dec 13 02:05:33.248548 kubelet[2808]: I1213 02:05:33.248310 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a7f7d4c2b72e0a40467ceebfc0774c8-k8s-certs\") pod 
\"kube-controller-manager-ci-4081-2-1-3-45a43b40ef\" (UID: \"9a7f7d4c2b72e0a40467ceebfc0774c8\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-3-45a43b40ef" Dec 13 02:05:33.248548 kubelet[2808]: I1213 02:05:33.248331 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ad593c967432868037640a67945c522-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-3-45a43b40ef\" (UID: \"1ad593c967432868037640a67945c522\") " pod="kube-system/kube-apiserver-ci-4081-2-1-3-45a43b40ef" Dec 13 02:05:33.248548 kubelet[2808]: I1213 02:05:33.248349 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ad593c967432868037640a67945c522-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-3-45a43b40ef\" (UID: \"1ad593c967432868037640a67945c522\") " pod="kube-system/kube-apiserver-ci-4081-2-1-3-45a43b40ef" Dec 13 02:05:33.248548 kubelet[2808]: I1213 02:05:33.248381 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a7f7d4c2b72e0a40467ceebfc0774c8-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-3-45a43b40ef\" (UID: \"9a7f7d4c2b72e0a40467ceebfc0774c8\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-3-45a43b40ef" Dec 13 02:05:33.248843 kubelet[2808]: I1213 02:05:33.248399 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a7f7d4c2b72e0a40467ceebfc0774c8-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-3-45a43b40ef\" (UID: \"9a7f7d4c2b72e0a40467ceebfc0774c8\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-3-45a43b40ef" Dec 13 02:05:33.248843 kubelet[2808]: I1213 02:05:33.248421 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a7f7d4c2b72e0a40467ceebfc0774c8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-3-45a43b40ef\" (UID: \"9a7f7d4c2b72e0a40467ceebfc0774c8\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-3-45a43b40ef" Dec 13 02:05:33.248843 kubelet[2808]: I1213 02:05:33.248440 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/577c84c15fa29de8f004a1f0ec46ca0b-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-3-45a43b40ef\" (UID: \"577c84c15fa29de8f004a1f0ec46ca0b\") " pod="kube-system/kube-scheduler-ci-4081-2-1-3-45a43b40ef" Dec 13 02:05:33.248843 kubelet[2808]: I1213 02:05:33.248463 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ad593c967432868037640a67945c522-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-3-45a43b40ef\" (UID: \"1ad593c967432868037640a67945c522\") " pod="kube-system/kube-apiserver-ci-4081-2-1-3-45a43b40ef" Dec 13 02:05:34.010039 kubelet[2808]: I1213 02:05:34.009575 2808 apiserver.go:52] "Watching apiserver" Dec 13 02:05:34.047427 kubelet[2808]: I1213 02:05:34.047304 2808 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 02:05:34.104281 kubelet[2808]: I1213 02:05:34.103891 2808 pod_startup_latency_tracker.go:102] "Observed pod startup 
duration" pod="kube-system/kube-controller-manager-ci-4081-2-1-3-45a43b40ef" podStartSLOduration=1.103831537 podStartE2EDuration="1.103831537s" podCreationTimestamp="2024-12-13 02:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:05:34.09134272 +0000 UTC m=+1.200224477" watchObservedRunningTime="2024-12-13 02:05:34.103831537 +0000 UTC m=+1.212713294" Dec 13 02:05:34.106729 kubelet[2808]: E1213 02:05:34.106707 2808 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-2-1-3-45a43b40ef\" already exists" pod="kube-system/kube-scheduler-ci-4081-2-1-3-45a43b40ef" Dec 13 02:05:34.112340 kubelet[2808]: E1213 02:05:34.112301 2808 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-2-1-3-45a43b40ef\" already exists" pod="kube-system/kube-apiserver-ci-4081-2-1-3-45a43b40ef" Dec 13 02:05:34.114150 kubelet[2808]: E1213 02:05:34.114127 2808 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-2-1-3-45a43b40ef\" already exists" pod="kube-system/kube-controller-manager-ci-4081-2-1-3-45a43b40ef" Dec 13 02:05:34.121518 kubelet[2808]: I1213 02:05:34.120899 2808 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-2-1-3-45a43b40ef" podStartSLOduration=1.120857906 podStartE2EDuration="1.120857906s" podCreationTimestamp="2024-12-13 02:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:05:34.104128457 +0000 UTC m=+1.213010215" watchObservedRunningTime="2024-12-13 02:05:34.120857906 +0000 UTC m=+1.229739673" Dec 13 02:05:34.141856 kubelet[2808]: I1213 02:05:34.141797 2808 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-2-1-3-45a43b40ef" podStartSLOduration=1.141736354 podStartE2EDuration="1.141736354s" podCreationTimestamp="2024-12-13 02:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:05:34.123882311 +0000 UTC m=+1.232764067" watchObservedRunningTime="2024-12-13 02:05:34.141736354 +0000 UTC m=+1.250618111" Dec 13 02:05:38.234327 sudo[1898]: pam_unix(sudo:session): session closed for user root Dec 13 02:05:38.395204 sshd[1879]: pam_unix(sshd:session): session closed for user core Dec 13 02:05:38.401761 systemd[1]: sshd@6-49.13.63.199:22-147.75.109.163:38186.service: Deactivated successfully. Dec 13 02:05:38.406935 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 02:05:38.407582 systemd[1]: session-7.scope: Consumed 5.502s CPU time, 189.3M memory peak, 0B memory swap peak. Dec 13 02:05:38.412144 systemd-logind[1472]: Session 7 logged out. Waiting for processes to exit. Dec 13 02:05:38.414504 systemd-logind[1472]: Removed session 7. Dec 13 02:05:45.105700 kubelet[2808]: I1213 02:05:45.105618 2808 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 02:05:45.106175 containerd[1497]: time="2024-12-13T02:05:45.106068867Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 02:05:45.106419 kubelet[2808]: I1213 02:05:45.106279 2808 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 02:05:45.941506 kubelet[2808]: I1213 02:05:45.941414 2808 topology_manager.go:215] "Topology Admit Handler" podUID="cadc958c-7105-4bdf-896c-b4c1ecef34d3" podNamespace="kube-system" podName="kube-proxy-ms8rf" Dec 13 02:05:45.974888 systemd[1]: Created slice kubepods-besteffort-podcadc958c_7105_4bdf_896c_b4c1ecef34d3.slice - libcontainer container kubepods-besteffort-podcadc958c_7105_4bdf_896c_b4c1ecef34d3.slice. Dec 13 02:05:46.043643 kubelet[2808]: I1213 02:05:46.043548 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zxjc\" (UniqueName: \"kubernetes.io/projected/cadc958c-7105-4bdf-896c-b4c1ecef34d3-kube-api-access-4zxjc\") pod \"kube-proxy-ms8rf\" (UID: \"cadc958c-7105-4bdf-896c-b4c1ecef34d3\") " pod="kube-system/kube-proxy-ms8rf" Dec 13 02:05:46.043643 kubelet[2808]: I1213 02:05:46.043639 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cadc958c-7105-4bdf-896c-b4c1ecef34d3-kube-proxy\") pod \"kube-proxy-ms8rf\" (UID: \"cadc958c-7105-4bdf-896c-b4c1ecef34d3\") " pod="kube-system/kube-proxy-ms8rf" Dec 13 02:05:46.043892 kubelet[2808]: I1213 02:05:46.043674 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cadc958c-7105-4bdf-896c-b4c1ecef34d3-lib-modules\") pod \"kube-proxy-ms8rf\" (UID: \"cadc958c-7105-4bdf-896c-b4c1ecef34d3\") " pod="kube-system/kube-proxy-ms8rf" Dec 13 02:05:46.043892 kubelet[2808]: I1213 02:05:46.043705 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cadc958c-7105-4bdf-896c-b4c1ecef34d3-xtables-lock\") pod \"kube-proxy-ms8rf\" (UID: \"cadc958c-7105-4bdf-896c-b4c1ecef34d3\") " pod="kube-system/kube-proxy-ms8rf" Dec 13 02:05:46.234042 kubelet[2808]: I1213 02:05:46.233588 2808 topology_manager.go:215] "Topology Admit Handler" podUID="fd9d6ac2-a254-4abb-801b-906ed5032349" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-f4cpv" Dec 13 02:05:46.243557 systemd[1]: Created slice kubepods-besteffort-podfd9d6ac2_a254_4abb_801b_906ed5032349.slice - libcontainer container kubepods-besteffort-podfd9d6ac2_a254_4abb_801b_906ed5032349.slice. Dec 13 02:05:46.285516 containerd[1497]: time="2024-12-13T02:05:46.285444545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ms8rf,Uid:cadc958c-7105-4bdf-896c-b4c1ecef34d3,Namespace:kube-system,Attempt:0,}" Dec 13 02:05:46.331935 containerd[1497]: time="2024-12-13T02:05:46.329868835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:05:46.332419 containerd[1497]: time="2024-12-13T02:05:46.332098728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:05:46.332604 containerd[1497]: time="2024-12-13T02:05:46.332500838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:05:46.333215 containerd[1497]: time="2024-12-13T02:05:46.333098350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:05:46.345578 kubelet[2808]: I1213 02:05:46.345515 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq894\" (UniqueName: \"kubernetes.io/projected/fd9d6ac2-a254-4abb-801b-906ed5032349-kube-api-access-tq894\") pod \"tigera-operator-c7ccbd65-f4cpv\" (UID: \"fd9d6ac2-a254-4abb-801b-906ed5032349\") " pod="tigera-operator/tigera-operator-c7ccbd65-f4cpv" Dec 13 02:05:46.345578 kubelet[2808]: I1213 02:05:46.345577 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fd9d6ac2-a254-4abb-801b-906ed5032349-var-lib-calico\") pod \"tigera-operator-c7ccbd65-f4cpv\" (UID: \"fd9d6ac2-a254-4abb-801b-906ed5032349\") " pod="tigera-operator/tigera-operator-c7ccbd65-f4cpv" Dec 13 02:05:46.376566 systemd[1]: Started cri-containerd-67c99e900102e96508b1d6d328f684ccd3a7177f1af4a9ba52a5b3e308dd9687.scope - libcontainer container 67c99e900102e96508b1d6d328f684ccd3a7177f1af4a9ba52a5b3e308dd9687. Dec 13 02:05:46.416333 containerd[1497]: time="2024-12-13T02:05:46.416201130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ms8rf,Uid:cadc958c-7105-4bdf-896c-b4c1ecef34d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"67c99e900102e96508b1d6d328f684ccd3a7177f1af4a9ba52a5b3e308dd9687\"" Dec 13 02:05:46.420935 containerd[1497]: time="2024-12-13T02:05:46.420883446Z" level=info msg="CreateContainer within sandbox \"67c99e900102e96508b1d6d328f684ccd3a7177f1af4a9ba52a5b3e308dd9687\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 02:05:46.450568 containerd[1497]: time="2024-12-13T02:05:46.450375102Z" level=info msg="CreateContainer within sandbox \"67c99e900102e96508b1d6d328f684ccd3a7177f1af4a9ba52a5b3e308dd9687\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"01f2ac7c01f34632cb8cac12caa6990ff0e2f3cc7e978e703fb7d032c6d4124d\"" Dec 13 02:05:46.453195 containerd[1497]: time="2024-12-13T02:05:46.452413952Z" level=info msg="StartContainer for \"01f2ac7c01f34632cb8cac12caa6990ff0e2f3cc7e978e703fb7d032c6d4124d\"" Dec 13 02:05:46.503131 systemd[1]: Started cri-containerd-01f2ac7c01f34632cb8cac12caa6990ff0e2f3cc7e978e703fb7d032c6d4124d.scope - libcontainer container 01f2ac7c01f34632cb8cac12caa6990ff0e2f3cc7e978e703fb7d032c6d4124d. Dec 13 02:05:46.548565 containerd[1497]: time="2024-12-13T02:05:46.548523500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-f4cpv,Uid:fd9d6ac2-a254-4abb-801b-906ed5032349,Namespace:tigera-operator,Attempt:0,}" Dec 13 02:05:46.560898 containerd[1497]: time="2024-12-13T02:05:46.560826784Z" level=info msg="StartContainer for \"01f2ac7c01f34632cb8cac12caa6990ff0e2f3cc7e978e703fb7d032c6d4124d\" returns successfully" Dec 13 02:05:46.580567 containerd[1497]: time="2024-12-13T02:05:46.580088763Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:05:46.580567 containerd[1497]: time="2024-12-13T02:05:46.580148526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:05:46.580567 containerd[1497]: time="2024-12-13T02:05:46.580162161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:05:46.580567 containerd[1497]: time="2024-12-13T02:05:46.580245009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:05:46.603189 systemd[1]: Started cri-containerd-8eaa339d4163036802ba2207ac4f985589964539f50fab2e92387dc6e39635ff.scope - libcontainer container 8eaa339d4163036802ba2207ac4f985589964539f50fab2e92387dc6e39635ff. Dec 13 02:05:46.655126 containerd[1497]: time="2024-12-13T02:05:46.655065779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-f4cpv,Uid:fd9d6ac2-a254-4abb-801b-906ed5032349,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8eaa339d4163036802ba2207ac4f985589964539f50fab2e92387dc6e39635ff\"" Dec 13 02:05:46.658760 containerd[1497]: time="2024-12-13T02:05:46.658639826Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 02:05:47.179880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3894011609.mount: Deactivated successfully. Dec 13 02:05:48.871044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3169363470.mount: Deactivated successfully. Dec 13 02:05:49.304936 containerd[1497]: time="2024-12-13T02:05:49.304869387Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:05:49.306121 containerd[1497]: time="2024-12-13T02:05:49.306082935Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764265" Dec 13 02:05:49.308441 containerd[1497]: time="2024-12-13T02:05:49.307189402Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:05:49.309861 containerd[1497]: time="2024-12-13T02:05:49.309532790Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:05:49.325040 containerd[1497]: time="2024-12-13T02:05:49.324855011Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.666185339s" Dec 13 02:05:49.325040 containerd[1497]: time="2024-12-13T02:05:49.324889006Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 02:05:49.327466 containerd[1497]: time="2024-12-13T02:05:49.327436501Z" level=info msg="CreateContainer within sandbox \"8eaa339d4163036802ba2207ac4f985589964539f50fab2e92387dc6e39635ff\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 02:05:49.353158 containerd[1497]: time="2024-12-13T02:05:49.353107324Z" level=info msg="CreateContainer within sandbox \"8eaa339d4163036802ba2207ac4f985589964539f50fab2e92387dc6e39635ff\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"deaa91a1e5ef0150a01c7578948634fda360f61a8879a59e9a5333cfbd5c5df1\"" Dec 13 02:05:49.354742 containerd[1497]: time="2024-12-13T02:05:49.353610568Z" level=info msg="StartContainer for 
\"deaa91a1e5ef0150a01c7578948634fda360f61a8879a59e9a5333cfbd5c5df1\"" Dec 13 02:05:49.403295 systemd[1]: Started cri-containerd-deaa91a1e5ef0150a01c7578948634fda360f61a8879a59e9a5333cfbd5c5df1.scope - libcontainer container deaa91a1e5ef0150a01c7578948634fda360f61a8879a59e9a5333cfbd5c5df1. Dec 13 02:05:49.455174 containerd[1497]: time="2024-12-13T02:05:49.454578560Z" level=info msg="StartContainer for \"deaa91a1e5ef0150a01c7578948634fda360f61a8879a59e9a5333cfbd5c5df1\" returns successfully" Dec 13 02:05:50.159150 kubelet[2808]: I1213 02:05:50.158468 2808 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-ms8rf" podStartSLOduration=5.158414328 podStartE2EDuration="5.158414328s" podCreationTimestamp="2024-12-13 02:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:05:47.144913778 +0000 UTC m=+14.253795545" watchObservedRunningTime="2024-12-13 02:05:50.158414328 +0000 UTC m=+17.267296085" Dec 13 02:05:52.496086 kubelet[2808]: I1213 02:05:52.495975 2808 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-f4cpv" podStartSLOduration=3.82813908 podStartE2EDuration="6.495930912s" podCreationTimestamp="2024-12-13 02:05:46 +0000 UTC" firstStartedPulling="2024-12-13 02:05:46.657403525 +0000 UTC m=+13.766285282" lastFinishedPulling="2024-12-13 02:05:49.325195356 +0000 UTC m=+16.434077114" observedRunningTime="2024-12-13 02:05:50.158949241 +0000 UTC m=+17.267831078" watchObservedRunningTime="2024-12-13 02:05:52.495930912 +0000 UTC m=+19.604812669" Dec 13 02:05:52.496604 kubelet[2808]: I1213 02:05:52.496123 2808 topology_manager.go:215] "Topology Admit Handler" podUID="6980c939-3e86-4220-b3d0-4206afd1b481" podNamespace="calico-system" podName="calico-typha-567c8d85bc-n72fk" Dec 13 02:05:52.505993 systemd[1]: Created slice kubepods-besteffort-pod6980c939_3e86_4220_b3d0_4206afd1b481.slice - libcontainer container kubepods-besteffort-pod6980c939_3e86_4220_b3d0_4206afd1b481.slice. 
Dec 13 02:05:52.588994 kubelet[2808]: I1213 02:05:52.588825 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6980c939-3e86-4220-b3d0-4206afd1b481-tigera-ca-bundle\") pod \"calico-typha-567c8d85bc-n72fk\" (UID: \"6980c939-3e86-4220-b3d0-4206afd1b481\") " pod="calico-system/calico-typha-567c8d85bc-n72fk" Dec 13 02:05:52.588994 kubelet[2808]: I1213 02:05:52.588875 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq2x6\" (UniqueName: \"kubernetes.io/projected/6980c939-3e86-4220-b3d0-4206afd1b481-kube-api-access-qq2x6\") pod \"calico-typha-567c8d85bc-n72fk\" (UID: \"6980c939-3e86-4220-b3d0-4206afd1b481\") " pod="calico-system/calico-typha-567c8d85bc-n72fk" Dec 13 02:05:52.588994 kubelet[2808]: I1213 02:05:52.588897 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6980c939-3e86-4220-b3d0-4206afd1b481-typha-certs\") pod \"calico-typha-567c8d85bc-n72fk\" (UID: \"6980c939-3e86-4220-b3d0-4206afd1b481\") " pod="calico-system/calico-typha-567c8d85bc-n72fk" Dec 13 02:05:52.591079 kubelet[2808]: I1213 02:05:52.590432 2808 topology_manager.go:215] "Topology Admit Handler" podUID="7b355148-e5a8-4252-91ff-1883b35ca1d4" podNamespace="calico-system" podName="calico-node-8qxk9" Dec 13 02:05:52.601704 systemd[1]: Created slice kubepods-besteffort-pod7b355148_e5a8_4252_91ff_1883b35ca1d4.slice - libcontainer container kubepods-besteffort-pod7b355148_e5a8_4252_91ff_1883b35ca1d4.slice. Dec 13 02:05:52.689801 kubelet[2808]: I1213 02:05:52.689718 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz7jv\" (UniqueName: \"kubernetes.io/projected/7b355148-e5a8-4252-91ff-1883b35ca1d4-kube-api-access-fz7jv\") pod \"calico-node-8qxk9\" (UID: \"7b355148-e5a8-4252-91ff-1883b35ca1d4\") " pod="calico-system/calico-node-8qxk9" Dec 13 02:05:52.689801 kubelet[2808]: I1213 02:05:52.689786 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7b355148-e5a8-4252-91ff-1883b35ca1d4-var-lib-calico\") pod \"calico-node-8qxk9\" (UID: \"7b355148-e5a8-4252-91ff-1883b35ca1d4\") " pod="calico-system/calico-node-8qxk9" Dec 13 02:05:52.689801 kubelet[2808]: I1213 02:05:52.689807 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7b355148-e5a8-4252-91ff-1883b35ca1d4-cni-bin-dir\") pod \"calico-node-8qxk9\" (UID: \"7b355148-e5a8-4252-91ff-1883b35ca1d4\") " pod="calico-system/calico-node-8qxk9" Dec 13 02:05:52.690118 kubelet[2808]: I1213 02:05:52.689826 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7b355148-e5a8-4252-91ff-1883b35ca1d4-cni-log-dir\") pod \"calico-node-8qxk9\" (UID: \"7b355148-e5a8-4252-91ff-1883b35ca1d4\") " pod="calico-system/calico-node-8qxk9" Dec 13 02:05:52.690118 kubelet[2808]: I1213 02:05:52.689887 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7b355148-e5a8-4252-91ff-1883b35ca1d4-cni-net-dir\") pod \"calico-node-8qxk9\" (UID: 
\"7b355148-e5a8-4252-91ff-1883b35ca1d4\") " pod="calico-system/calico-node-8qxk9" Dec 13 02:05:52.690118 kubelet[2808]: I1213 02:05:52.689908 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7b355148-e5a8-4252-91ff-1883b35ca1d4-var-run-calico\") pod \"calico-node-8qxk9\" (UID: \"7b355148-e5a8-4252-91ff-1883b35ca1d4\") " pod="calico-system/calico-node-8qxk9" Dec 13 02:05:52.690118 kubelet[2808]: I1213 02:05:52.689927 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b355148-e5a8-4252-91ff-1883b35ca1d4-tigera-ca-bundle\") pod \"calico-node-8qxk9\" (UID: \"7b355148-e5a8-4252-91ff-1883b35ca1d4\") " pod="calico-system/calico-node-8qxk9" Dec 13 02:05:52.690118 kubelet[2808]: I1213 02:05:52.689946 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b355148-e5a8-4252-91ff-1883b35ca1d4-xtables-lock\") pod \"calico-node-8qxk9\" (UID: \"7b355148-e5a8-4252-91ff-1883b35ca1d4\") " pod="calico-system/calico-node-8qxk9" Dec 13 02:05:52.690237 kubelet[2808]: I1213 02:05:52.689964 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7b355148-e5a8-4252-91ff-1883b35ca1d4-policysync\") pod \"calico-node-8qxk9\" (UID: \"7b355148-e5a8-4252-91ff-1883b35ca1d4\") " pod="calico-system/calico-node-8qxk9" Dec 13 02:05:52.690237 kubelet[2808]: I1213 02:05:52.689982 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7b355148-e5a8-4252-91ff-1883b35ca1d4-node-certs\") pod \"calico-node-8qxk9\" (UID: \"7b355148-e5a8-4252-91ff-1883b35ca1d4\") " pod="calico-system/calico-node-8qxk9" Dec 13 02:05:52.690237 kubelet[2808]: I1213 02:05:52.690001 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7b355148-e5a8-4252-91ff-1883b35ca1d4-flexvol-driver-host\") pod \"calico-node-8qxk9\" (UID: \"7b355148-e5a8-4252-91ff-1883b35ca1d4\") " pod="calico-system/calico-node-8qxk9" Dec 13 02:05:52.690237 kubelet[2808]: I1213 02:05:52.690044 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b355148-e5a8-4252-91ff-1883b35ca1d4-lib-modules\") pod \"calico-node-8qxk9\" (UID: \"7b355148-e5a8-4252-91ff-1883b35ca1d4\") " pod="calico-system/calico-node-8qxk9" Dec 13 02:05:52.743123 kubelet[2808]: I1213 02:05:52.743053 2808 topology_manager.go:215] "Topology Admit Handler" podUID="ef8df160-b623-42c6-abf0-520b155176f4" podNamespace="calico-system" podName="csi-node-driver-7r499" Dec 13 02:05:52.743552 kubelet[2808]: E1213 02:05:52.743397 2808 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7r499" podUID="ef8df160-b623-42c6-abf0-520b155176f4" Dec 13 02:05:52.791499 kubelet[2808]: I1213 02:05:52.791345 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" 
(UniqueName: \"kubernetes.io/host-path/ef8df160-b623-42c6-abf0-520b155176f4-socket-dir\") pod \"csi-node-driver-7r499\" (UID: \"ef8df160-b623-42c6-abf0-520b155176f4\") " pod="calico-system/csi-node-driver-7r499" Dec 13 02:05:52.791499 kubelet[2808]: I1213 02:05:52.791468 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ef8df160-b623-42c6-abf0-520b155176f4-varrun\") pod \"csi-node-driver-7r499\" (UID: \"ef8df160-b623-42c6-abf0-520b155176f4\") " pod="calico-system/csi-node-driver-7r499" Dec 13 02:05:52.791499 kubelet[2808]: I1213 02:05:52.791490 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef8df160-b623-42c6-abf0-520b155176f4-kubelet-dir\") pod \"csi-node-driver-7r499\" (UID: \"ef8df160-b623-42c6-abf0-520b155176f4\") " pod="calico-system/csi-node-driver-7r499" Dec 13 02:05:52.791787 kubelet[2808]: I1213 02:05:52.791510 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ef8df160-b623-42c6-abf0-520b155176f4-registration-dir\") pod \"csi-node-driver-7r499\" (UID: \"ef8df160-b623-42c6-abf0-520b155176f4\") " pod="calico-system/csi-node-driver-7r499" Dec 13 02:05:52.791787 kubelet[2808]: I1213 02:05:52.791541 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn9f5\" (UniqueName: \"kubernetes.io/projected/ef8df160-b623-42c6-abf0-520b155176f4-kube-api-access-qn9f5\") pod \"csi-node-driver-7r499\" (UID: \"ef8df160-b623-42c6-abf0-520b155176f4\") " pod="calico-system/csi-node-driver-7r499" Dec 13 02:05:52.805957 kubelet[2808]: E1213 02:05:52.805908 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:52.805957 kubelet[2808]: W1213 02:05:52.805947 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:52.806118 kubelet[2808]: E1213 02:05:52.805984 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:52.816082 kubelet[2808]: E1213 02:05:52.816036 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:52.816082 kubelet[2808]: W1213 02:05:52.816062 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:52.816082 kubelet[2808]: E1213 02:05:52.816084 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:52.818443 containerd[1497]: time="2024-12-13T02:05:52.818309502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-567c8d85bc-n72fk,Uid:6980c939-3e86-4220-b3d0-4206afd1b481,Namespace:calico-system,Attempt:0,}" Dec 13 02:05:52.885067 containerd[1497]: time="2024-12-13T02:05:52.884498596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:05:52.885067 containerd[1497]: time="2024-12-13T02:05:52.884560573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:05:52.885067 containerd[1497]: time="2024-12-13T02:05:52.884576113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:05:52.885067 containerd[1497]: time="2024-12-13T02:05:52.884658078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:05:52.894785 kubelet[2808]: E1213 02:05:52.893712 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:52.894785 kubelet[2808]: W1213 02:05:52.893733 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:52.894785 kubelet[2808]: E1213 02:05:52.893822 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:52.894785 kubelet[2808]: E1213 02:05:52.894211 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:52.894785 kubelet[2808]: W1213 02:05:52.894219 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:52.894785 kubelet[2808]: E1213 02:05:52.894235 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:52.894785 kubelet[2808]: E1213 02:05:52.894592 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:52.894785 kubelet[2808]: W1213 02:05:52.894600 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:52.894785 kubelet[2808]: E1213 02:05:52.894614 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:52.896177 kubelet[2808]: E1213 02:05:52.896166 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:52.896317 kubelet[2808]: W1213 02:05:52.896292 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:52.896511 kubelet[2808]: E1213 02:05:52.896374 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:05:52.902658 kubelet[2808]: E1213 02:05:52.902643 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:52.903290 kubelet[2808]: W1213 02:05:52.902813 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:52.903589 kubelet[2808]: E1213 02:05:52.903527 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:52.904374 kubelet[2808]: E1213 02:05:52.904287 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:52.904374 kubelet[2808]: W1213 02:05:52.904298 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:52.904922 kubelet[2808]: E1213 02:05:52.904835 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:52.904922 kubelet[2808]: W1213 02:05:52.904846 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:52.905114 kubelet[2808]: E1213 02:05:52.905067 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:52.905114 kubelet[2808]: E1213 02:05:52.905088 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:52.905329 kubelet[2808]: E1213 02:05:52.905227 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:52.905329 kubelet[2808]: W1213 02:05:52.905235 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:52.905470 kubelet[2808]: E1213 02:05:52.905459 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:52.905692 kubelet[2808]: E1213 02:05:52.905615 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:52.905692 kubelet[2808]: W1213 02:05:52.905624 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:52.905914 kubelet[2808]: E1213 02:05:52.905805 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:05:52.906627 kubelet[2808]: E1213 02:05:52.906482 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:52.906627 kubelet[2808]: W1213 02:05:52.906492 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:52.906627 kubelet[2808]: E1213 02:05:52.906581 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:52.907276 containerd[1497]: time="2024-12-13T02:05:52.907235007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8qxk9,Uid:7b355148-e5a8-4252-91ff-1883b35ca1d4,Namespace:calico-system,Attempt:0,}" Dec 13 02:05:52.907659 kubelet[2808]: E1213 02:05:52.907646 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:52.907729 kubelet[2808]: W1213 02:05:52.907718 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:52.907937 kubelet[2808]: E1213 02:05:52.907918 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:52.908165 kubelet[2808]: E1213 02:05:52.908155 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:52.908237 kubelet[2808]: W1213 02:05:52.908227 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:52.908520 kubelet[2808]: E1213 02:05:52.908404 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:52.908635 kubelet[2808]: E1213 02:05:52.908624 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:52.908840 kubelet[2808]: W1213 02:05:52.908737 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:52.909034 kubelet[2808]: E1213 02:05:52.908948 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:05:52.909489 kubelet[2808]: E1213 02:05:52.909308 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:52.909489 kubelet[2808]: W1213 02:05:52.909318 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:52.909489 kubelet[2808]: E1213 02:05:52.909399 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:52.909701 kubelet[2808]: E1213 02:05:52.909622 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:52.909701 kubelet[2808]: W1213 02:05:52.909632 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:52.909821 kubelet[2808]: E1213 02:05:52.909809 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:52.910260 kubelet[2808]: E1213 02:05:52.910101 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:52.910260 kubelet[2808]: W1213 02:05:52.910112 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:52.910260 kubelet[2808]: E1213 02:05:52.910203 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:52.911499 kubelet[2808]: E1213 02:05:52.911295 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:52.911499 kubelet[2808]: W1213 02:05:52.911307 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:52.912461 kubelet[2808]: E1213 02:05:52.911611 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:52.912461 kubelet[2808]: E1213 02:05:52.911851 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:52.912461 kubelet[2808]: W1213 02:05:52.911860 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:52.912461 kubelet[2808]: E1213 02:05:52.912051 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:05:52.913321 kubelet[2808]: E1213 02:05:52.913146 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:52.913321 kubelet[2808]: W1213 02:05:52.913157 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:52.913926 kubelet[2808]: E1213 02:05:52.913445 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:52.913926 kubelet[2808]: W1213 02:05:52.913497 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:52.913926 kubelet[2808]: E1213 02:05:52.913814 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:52.913926 kubelet[2808]: W1213 02:05:52.913822 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:52.913926 kubelet[2808]: E1213 02:05:52.913823 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:52.913926 kubelet[2808]: E1213 02:05:52.913860 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:52.913926 kubelet[2808]: E1213 02:05:52.913880 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:52.914504 kubelet[2808]: E1213 02:05:52.914059 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:52.914504 kubelet[2808]: W1213 02:05:52.914067 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:52.914504 kubelet[2808]: E1213 02:05:52.914391 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:52.914504 kubelet[2808]: E1213 02:05:52.914476 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:52.914504 kubelet[2808]: W1213 02:05:52.914482 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:52.914657 kubelet[2808]: E1213 02:05:52.914557 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:05:52.915144 kubelet[2808]: E1213 02:05:52.914827 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:52.915144 kubelet[2808]: W1213 02:05:52.914840 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:52.915144 kubelet[2808]: E1213 02:05:52.914876 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:52.915878 kubelet[2808]: E1213 02:05:52.915499 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:52.915878 kubelet[2808]: W1213 02:05:52.915521 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:52.915878 kubelet[2808]: E1213 02:05:52.915533 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:52.923176 systemd[1]: Started cri-containerd-e49139cb573bb04e6e8b50e9055bd5942d45c4a40351c4ec315a50772ab90701.scope - libcontainer container e49139cb573bb04e6e8b50e9055bd5942d45c4a40351c4ec315a50772ab90701. Dec 13 02:05:52.932257 kubelet[2808]: E1213 02:05:52.932190 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:52.932257 kubelet[2808]: W1213 02:05:52.932239 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:52.932257 kubelet[2808]: E1213 02:05:52.932259 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:52.976587 containerd[1497]: time="2024-12-13T02:05:52.976470083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:05:52.977267 containerd[1497]: time="2024-12-13T02:05:52.977099626Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:05:52.978042 containerd[1497]: time="2024-12-13T02:05:52.977409683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:05:52.978345 containerd[1497]: time="2024-12-13T02:05:52.978215620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:05:53.029168 systemd[1]: Started cri-containerd-90a5cc69734bb543a672ce1250a7e8302b27a01024dff530f27d94f0ff787d13.scope - libcontainer container 90a5cc69734bb543a672ce1250a7e8302b27a01024dff530f27d94f0ff787d13. 
Dec 13 02:05:53.056384 containerd[1497]: time="2024-12-13T02:05:53.055998615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-567c8d85bc-n72fk,Uid:6980c939-3e86-4220-b3d0-4206afd1b481,Namespace:calico-system,Attempt:0,} returns sandbox id \"e49139cb573bb04e6e8b50e9055bd5942d45c4a40351c4ec315a50772ab90701\"" Dec 13 02:05:53.063856 containerd[1497]: time="2024-12-13T02:05:53.062461863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 02:05:53.088702 containerd[1497]: time="2024-12-13T02:05:53.088666040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8qxk9,Uid:7b355148-e5a8-4252-91ff-1883b35ca1d4,Namespace:calico-system,Attempt:0,} returns sandbox id \"90a5cc69734bb543a672ce1250a7e8302b27a01024dff530f27d94f0ff787d13\"" Dec 13 02:05:54.057819 kubelet[2808]: E1213 02:05:54.057697 2808 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7r499" podUID="ef8df160-b623-42c6-abf0-520b155176f4" Dec 13 02:05:54.647864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2528608068.mount: Deactivated successfully. Dec 13 02:05:55.540699 containerd[1497]: time="2024-12-13T02:05:55.540602950Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:05:55.542436 containerd[1497]: time="2024-12-13T02:05:55.542369349Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Dec 13 02:05:55.543767 containerd[1497]: time="2024-12-13T02:05:55.543713205Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:05:55.546504 containerd[1497]: time="2024-12-13T02:05:55.546454400Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:05:55.547412 containerd[1497]: time="2024-12-13T02:05:55.547279765Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.484118587s" Dec 13 02:05:55.547412 containerd[1497]: time="2024-12-13T02:05:55.547312658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Dec 13 02:05:55.548171 containerd[1497]: time="2024-12-13T02:05:55.548131369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 02:05:55.573212 containerd[1497]: time="2024-12-13T02:05:55.573144850Z" level=info msg="CreateContainer within sandbox \"e49139cb573bb04e6e8b50e9055bd5942d45c4a40351c4ec315a50772ab90701\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 02:05:55.592158 containerd[1497]: time="2024-12-13T02:05:55.592097814Z" level=info msg="CreateContainer within sandbox 
\"e49139cb573bb04e6e8b50e9055bd5942d45c4a40351c4ec315a50772ab90701\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1aa920d93419067119da4cbf2d8e680b7340c775cfbb824ac66f123a4b8fcd71\"" Dec 13 02:05:55.593938 containerd[1497]: time="2024-12-13T02:05:55.592703682Z" level=info msg="StartContainer for \"1aa920d93419067119da4cbf2d8e680b7340c775cfbb824ac66f123a4b8fcd71\"" Dec 13 02:05:55.660286 systemd[1]: Started cri-containerd-1aa920d93419067119da4cbf2d8e680b7340c775cfbb824ac66f123a4b8fcd71.scope - libcontainer container 1aa920d93419067119da4cbf2d8e680b7340c775cfbb824ac66f123a4b8fcd71. Dec 13 02:05:55.717824 containerd[1497]: time="2024-12-13T02:05:55.717730732Z" level=info msg="StartContainer for \"1aa920d93419067119da4cbf2d8e680b7340c775cfbb824ac66f123a4b8fcd71\" returns successfully" Dec 13 02:05:56.058062 kubelet[2808]: E1213 02:05:56.057138 2808 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7r499" podUID="ef8df160-b623-42c6-abf0-520b155176f4" Dec 13 02:05:56.190979 kubelet[2808]: I1213 02:05:56.189870 2808 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-567c8d85bc-n72fk" podStartSLOduration=1.704284156 podStartE2EDuration="4.189778631s" podCreationTimestamp="2024-12-13 02:05:52 +0000 UTC" firstStartedPulling="2024-12-13 02:05:53.062242958 +0000 UTC m=+20.171124715" lastFinishedPulling="2024-12-13 02:05:55.547737432 +0000 UTC m=+22.656619190" observedRunningTime="2024-12-13 02:05:56.189300986 +0000 UTC m=+23.298182813" watchObservedRunningTime="2024-12-13 02:05:56.189778631 +0000 UTC m=+23.298660418" Dec 13 02:05:56.205561 kubelet[2808]: E1213 02:05:56.205361 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.206434 kubelet[2808]: W1213 02:05:56.205996 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.206434 kubelet[2808]: E1213 02:05:56.206286 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:56.207827 kubelet[2808]: E1213 02:05:56.207568 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.207827 kubelet[2808]: W1213 02:05:56.207623 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.207827 kubelet[2808]: E1213 02:05:56.207655 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:05:56.208642 kubelet[2808]: E1213 02:05:56.208553 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.208642 kubelet[2808]: W1213 02:05:56.208581 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.208642 kubelet[2808]: E1213 02:05:56.208606 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:56.209760 kubelet[2808]: E1213 02:05:56.209730 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.210072 kubelet[2808]: W1213 02:05:56.209977 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.210205 kubelet[2808]: E1213 02:05:56.210099 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:56.210940 kubelet[2808]: E1213 02:05:56.210829 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.211124 kubelet[2808]: W1213 02:05:56.210946 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.211124 kubelet[2808]: E1213 02:05:56.211090 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:56.212209 kubelet[2808]: E1213 02:05:56.211561 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.212209 kubelet[2808]: W1213 02:05:56.211603 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.212209 kubelet[2808]: E1213 02:05:56.211631 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:56.212453 kubelet[2808]: E1213 02:05:56.212215 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.212453 kubelet[2808]: W1213 02:05:56.212277 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.212453 kubelet[2808]: E1213 02:05:56.212304 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:05:56.214159 kubelet[2808]: E1213 02:05:56.212721 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.214159 kubelet[2808]: W1213 02:05:56.212737 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.214159 kubelet[2808]: E1213 02:05:56.212758 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:56.214159 kubelet[2808]: E1213 02:05:56.213328 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.214159 kubelet[2808]: W1213 02:05:56.213348 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.214159 kubelet[2808]: E1213 02:05:56.213392 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:56.214535 kubelet[2808]: E1213 02:05:56.214218 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.214535 kubelet[2808]: W1213 02:05:56.214239 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.214535 kubelet[2808]: E1213 02:05:56.214266 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:56.214704 kubelet[2808]: E1213 02:05:56.214681 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.214754 kubelet[2808]: W1213 02:05:56.214701 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.214754 kubelet[2808]: E1213 02:05:56.214722 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:56.217085 kubelet[2808]: E1213 02:05:56.215593 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.217085 kubelet[2808]: W1213 02:05:56.215619 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.217085 kubelet[2808]: E1213 02:05:56.215650 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:05:56.220058 kubelet[2808]: E1213 02:05:56.219386 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.220058 kubelet[2808]: W1213 02:05:56.219423 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.220058 kubelet[2808]: E1213 02:05:56.219456 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:56.220267 kubelet[2808]: E1213 02:05:56.220158 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.220267 kubelet[2808]: W1213 02:05:56.220180 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.220267 kubelet[2808]: E1213 02:05:56.220210 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:56.221041 kubelet[2808]: E1213 02:05:56.220816 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.221041 kubelet[2808]: W1213 02:05:56.220866 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.221041 kubelet[2808]: E1213 02:05:56.220897 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:56.229236 kubelet[2808]: E1213 02:05:56.229186 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.229236 kubelet[2808]: W1213 02:05:56.229218 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.229236 kubelet[2808]: E1213 02:05:56.229271 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:56.230208 kubelet[2808]: E1213 02:05:56.229777 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.230208 kubelet[2808]: W1213 02:05:56.229792 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.230208 kubelet[2808]: E1213 02:05:56.229810 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:05:56.230466 kubelet[2808]: E1213 02:05:56.230352 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.230466 kubelet[2808]: W1213 02:05:56.230409 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.230466 kubelet[2808]: E1213 02:05:56.230446 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:56.231631 kubelet[2808]: E1213 02:05:56.230787 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.231631 kubelet[2808]: W1213 02:05:56.230803 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.231631 kubelet[2808]: E1213 02:05:56.230821 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:56.232775 kubelet[2808]: E1213 02:05:56.232392 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.232775 kubelet[2808]: W1213 02:05:56.232415 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.232775 kubelet[2808]: E1213 02:05:56.232452 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:56.233060 kubelet[2808]: E1213 02:05:56.232835 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.233060 kubelet[2808]: W1213 02:05:56.232890 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.233179 kubelet[2808]: E1213 02:05:56.233140 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:56.235705 kubelet[2808]: E1213 02:05:56.235154 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.235705 kubelet[2808]: W1213 02:05:56.235179 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.235705 kubelet[2808]: E1213 02:05:56.235297 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:05:56.235705 kubelet[2808]: E1213 02:05:56.235687 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.235705 kubelet[2808]: W1213 02:05:56.235703 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.236151 kubelet[2808]: E1213 02:05:56.235789 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:56.237046 kubelet[2808]: E1213 02:05:56.236280 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.237046 kubelet[2808]: W1213 02:05:56.236300 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.237046 kubelet[2808]: E1213 02:05:56.236421 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:56.237750 kubelet[2808]: E1213 02:05:56.237351 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.237750 kubelet[2808]: W1213 02:05:56.237372 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.237750 kubelet[2808]: E1213 02:05:56.237738 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.237943 kubelet[2808]: W1213 02:05:56.237753 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.237996 kubelet[2808]: E1213 02:05:56.237947 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:56.237996 kubelet[2808]: E1213 02:05:56.237992 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:56.240061 kubelet[2808]: E1213 02:05:56.238348 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.240061 kubelet[2808]: W1213 02:05:56.238368 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.240061 kubelet[2808]: E1213 02:05:56.238420 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:05:56.240061 kubelet[2808]: E1213 02:05:56.238926 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.240061 kubelet[2808]: W1213 02:05:56.238946 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.240061 kubelet[2808]: E1213 02:05:56.238993 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:56.240061 kubelet[2808]: E1213 02:05:56.239561 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.240061 kubelet[2808]: W1213 02:05:56.239581 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.240061 kubelet[2808]: E1213 02:05:56.239616 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:56.240600 kubelet[2808]: E1213 02:05:56.240152 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.240600 kubelet[2808]: W1213 02:05:56.240171 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.240600 kubelet[2808]: E1213 02:05:56.240193 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:56.241395 kubelet[2808]: E1213 02:05:56.241168 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.241395 kubelet[2808]: W1213 02:05:56.241191 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.241563 kubelet[2808]: E1213 02:05:56.241534 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:56.242045 kubelet[2808]: E1213 02:05:56.241907 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.242045 kubelet[2808]: W1213 02:05:56.241923 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.242045 kubelet[2808]: E1213 02:05:56.241940 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:05:56.245777 kubelet[2808]: E1213 02:05:56.245075 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:56.245777 kubelet[2808]: W1213 02:05:56.245103 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:56.245777 kubelet[2808]: E1213 02:05:56.245132 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:57.174959 containerd[1497]: time="2024-12-13T02:05:57.174905714Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:05:57.176697 containerd[1497]: time="2024-12-13T02:05:57.176637077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Dec 13 02:05:57.177736 containerd[1497]: time="2024-12-13T02:05:57.177701355Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:05:57.185881 containerd[1497]: time="2024-12-13T02:05:57.185310270Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:05:57.188107 containerd[1497]: time="2024-12-13T02:05:57.188066576Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.639885843s" Dec 13 02:05:57.188228 containerd[1497]: time="2024-12-13T02:05:57.188213844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 02:05:57.193398 containerd[1497]: time="2024-12-13T02:05:57.193332479Z" level=info msg="CreateContainer within sandbox \"90a5cc69734bb543a672ce1250a7e8302b27a01024dff530f27d94f0ff787d13\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 02:05:57.234659 kubelet[2808]: E1213 02:05:57.234494 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.234659 kubelet[2808]: W1213 02:05:57.234519 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.234659 kubelet[2808]: E1213 02:05:57.234563 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:05:57.236048 kubelet[2808]: E1213 02:05:57.234861 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.236048 kubelet[2808]: W1213 02:05:57.234982 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.236048 kubelet[2808]: E1213 02:05:57.234996 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:57.236048 kubelet[2808]: E1213 02:05:57.235262 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.236048 kubelet[2808]: W1213 02:05:57.235339 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.236048 kubelet[2808]: E1213 02:05:57.235352 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:57.236048 kubelet[2808]: E1213 02:05:57.235555 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.236048 kubelet[2808]: W1213 02:05:57.235564 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.236048 kubelet[2808]: E1213 02:05:57.235574 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:57.236048 kubelet[2808]: E1213 02:05:57.235957 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.236392 kubelet[2808]: W1213 02:05:57.235968 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.236392 kubelet[2808]: E1213 02:05:57.235978 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:57.236392 kubelet[2808]: E1213 02:05:57.236282 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.236392 kubelet[2808]: W1213 02:05:57.236293 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.236392 kubelet[2808]: E1213 02:05:57.236307 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:05:57.236647 kubelet[2808]: E1213 02:05:57.236638 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.236683 kubelet[2808]: W1213 02:05:57.236659 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.236683 kubelet[2808]: E1213 02:05:57.236670 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:57.236929 kubelet[2808]: E1213 02:05:57.236893 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.236929 kubelet[2808]: W1213 02:05:57.236924 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.236929 kubelet[2808]: E1213 02:05:57.236935 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:57.237263 kubelet[2808]: E1213 02:05:57.237232 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.237263 kubelet[2808]: W1213 02:05:57.237248 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.237263 kubelet[2808]: E1213 02:05:57.237262 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:57.237577 kubelet[2808]: E1213 02:05:57.237552 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.237577 kubelet[2808]: W1213 02:05:57.237565 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.237577 kubelet[2808]: E1213 02:05:57.237576 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:57.237859 kubelet[2808]: E1213 02:05:57.237809 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.237859 kubelet[2808]: W1213 02:05:57.237823 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.237859 kubelet[2808]: E1213 02:05:57.237834 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:05:57.238370 kubelet[2808]: E1213 02:05:57.238341 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.238370 kubelet[2808]: W1213 02:05:57.238356 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.238370 kubelet[2808]: E1213 02:05:57.238369 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:57.238620 kubelet[2808]: E1213 02:05:57.238598 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.238620 kubelet[2808]: W1213 02:05:57.238611 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.238620 kubelet[2808]: E1213 02:05:57.238621 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:57.238981 kubelet[2808]: E1213 02:05:57.238904 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.238981 kubelet[2808]: W1213 02:05:57.238916 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.238981 kubelet[2808]: E1213 02:05:57.238927 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:57.239306 kubelet[2808]: E1213 02:05:57.239266 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.239306 kubelet[2808]: W1213 02:05:57.239281 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.239306 kubelet[2808]: E1213 02:05:57.239293 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:05:57.240167 containerd[1497]: time="2024-12-13T02:05:57.240092657Z" level=info msg="CreateContainer within sandbox \"90a5cc69734bb543a672ce1250a7e8302b27a01024dff530f27d94f0ff787d13\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b89a9122753c8166e1f703835eb631d00ffc5920aca94147380ac9a0536ae398\"" Dec 13 02:05:57.242294 containerd[1497]: time="2024-12-13T02:05:57.241105638Z" level=info msg="StartContainer for \"b89a9122753c8166e1f703835eb631d00ffc5920aca94147380ac9a0536ae398\"" Dec 13 02:05:57.251883 kubelet[2808]: E1213 02:05:57.251830 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.251883 kubelet[2808]: W1213 02:05:57.251853 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.252095 kubelet[2808]: E1213 02:05:57.251905 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:57.252512 kubelet[2808]: E1213 02:05:57.252491 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.252512 kubelet[2808]: W1213 02:05:57.252506 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.252573 kubelet[2808]: E1213 02:05:57.252532 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:57.252892 kubelet[2808]: E1213 02:05:57.252854 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.252954 kubelet[2808]: W1213 02:05:57.252945 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.252981 kubelet[2808]: E1213 02:05:57.252964 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:57.253253 kubelet[2808]: E1213 02:05:57.253235 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.253253 kubelet[2808]: W1213 02:05:57.253249 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.253318 kubelet[2808]: E1213 02:05:57.253263 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:05:57.253526 kubelet[2808]: E1213 02:05:57.253508 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.253526 kubelet[2808]: W1213 02:05:57.253522 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.253586 kubelet[2808]: E1213 02:05:57.253542 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:57.253825 kubelet[2808]: E1213 02:05:57.253804 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.253825 kubelet[2808]: W1213 02:05:57.253818 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.253889 kubelet[2808]: E1213 02:05:57.253878 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:57.254274 kubelet[2808]: E1213 02:05:57.254251 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.254274 kubelet[2808]: W1213 02:05:57.254266 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.254349 kubelet[2808]: E1213 02:05:57.254278 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:57.254604 kubelet[2808]: E1213 02:05:57.254580 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.254604 kubelet[2808]: W1213 02:05:57.254594 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.254663 kubelet[2808]: E1213 02:05:57.254631 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:57.255082 kubelet[2808]: E1213 02:05:57.254998 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.255082 kubelet[2808]: W1213 02:05:57.255030 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.255381 kubelet[2808]: E1213 02:05:57.255353 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:05:57.255458 kubelet[2808]: E1213 02:05:57.255436 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.255458 kubelet[2808]: W1213 02:05:57.255448 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.255553 kubelet[2808]: E1213 02:05:57.255528 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:57.257108 kubelet[2808]: E1213 02:05:57.257087 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.257108 kubelet[2808]: W1213 02:05:57.257103 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.257182 kubelet[2808]: E1213 02:05:57.257118 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:57.257366 kubelet[2808]: E1213 02:05:57.257345 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.257366 kubelet[2808]: W1213 02:05:57.257359 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.257419 kubelet[2808]: E1213 02:05:57.257398 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:57.261985 kubelet[2808]: E1213 02:05:57.261962 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.261985 kubelet[2808]: W1213 02:05:57.261984 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.262682 kubelet[2808]: E1213 02:05:57.262655 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:57.263501 kubelet[2808]: E1213 02:05:57.263368 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.263501 kubelet[2808]: W1213 02:05:57.263390 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.263501 kubelet[2808]: E1213 02:05:57.263424 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:05:57.264002 kubelet[2808]: E1213 02:05:57.263973 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.264002 kubelet[2808]: W1213 02:05:57.263990 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.264135 kubelet[2808]: E1213 02:05:57.264077 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:57.264454 kubelet[2808]: E1213 02:05:57.264437 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.264454 kubelet[2808]: W1213 02:05:57.264449 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.264520 kubelet[2808]: E1213 02:05:57.264471 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:57.265349 kubelet[2808]: E1213 02:05:57.265325 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.265349 kubelet[2808]: W1213 02:05:57.265339 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.265349 kubelet[2808]: E1213 02:05:57.265351 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:57.267589 kubelet[2808]: E1213 02:05:57.267491 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:05:57.267589 kubelet[2808]: W1213 02:05:57.267505 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:05:57.267589 kubelet[2808]: E1213 02:05:57.267516 2808 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:05:57.287215 systemd[1]: Started cri-containerd-b89a9122753c8166e1f703835eb631d00ffc5920aca94147380ac9a0536ae398.scope - libcontainer container b89a9122753c8166e1f703835eb631d00ffc5920aca94147380ac9a0536ae398. Dec 13 02:05:57.325706 containerd[1497]: time="2024-12-13T02:05:57.325544328Z" level=info msg="StartContainer for \"b89a9122753c8166e1f703835eb631d00ffc5920aca94147380ac9a0536ae398\" returns successfully" Dec 13 02:05:57.344076 systemd[1]: cri-containerd-b89a9122753c8166e1f703835eb631d00ffc5920aca94147380ac9a0536ae398.scope: Deactivated successfully. Dec 13 02:05:57.376426 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b89a9122753c8166e1f703835eb631d00ffc5920aca94147380ac9a0536ae398-rootfs.mount: Deactivated successfully. 
Dec 13 02:05:57.475450 containerd[1497]: time="2024-12-13T02:05:57.457096857Z" level=info msg="shim disconnected" id=b89a9122753c8166e1f703835eb631d00ffc5920aca94147380ac9a0536ae398 namespace=k8s.io Dec 13 02:05:57.475450 containerd[1497]: time="2024-12-13T02:05:57.475434433Z" level=warning msg="cleaning up after shim disconnected" id=b89a9122753c8166e1f703835eb631d00ffc5920aca94147380ac9a0536ae398 namespace=k8s.io Dec 13 02:05:57.475450 containerd[1497]: time="2024-12-13T02:05:57.475460922Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:05:58.057033 kubelet[2808]: E1213 02:05:58.056815 2808 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7r499" podUID="ef8df160-b623-42c6-abf0-520b155176f4" Dec 13 02:05:58.180987 containerd[1497]: time="2024-12-13T02:05:58.180857608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 02:06:00.058387 kubelet[2808]: E1213 02:06:00.057602 2808 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7r499" podUID="ef8df160-b623-42c6-abf0-520b155176f4" Dec 13 02:06:02.057913 kubelet[2808]: E1213 02:06:02.057176 2808 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7r499" podUID="ef8df160-b623-42c6-abf0-520b155176f4" Dec 13 02:06:03.343626 containerd[1497]: time="2024-12-13T02:06:03.343554291Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:06:03.345075 containerd[1497]: time="2024-12-13T02:06:03.344699855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 02:06:03.345863 containerd[1497]: time="2024-12-13T02:06:03.345831721Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:06:03.348393 containerd[1497]: time="2024-12-13T02:06:03.348361870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:06:03.349355 containerd[1497]: time="2024-12-13T02:06:03.349325328Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.168311163s" Dec 13 02:06:03.349586 containerd[1497]: time="2024-12-13T02:06:03.349469010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 02:06:03.355615 containerd[1497]: time="2024-12-13T02:06:03.354050381Z" 
level=info msg="CreateContainer within sandbox \"90a5cc69734bb543a672ce1250a7e8302b27a01024dff530f27d94f0ff787d13\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 02:06:03.393106 containerd[1497]: time="2024-12-13T02:06:03.393049066Z" level=info msg="CreateContainer within sandbox \"90a5cc69734bb543a672ce1250a7e8302b27a01024dff530f27d94f0ff787d13\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e4c8ded74cb53f2a18a53a0fa639f7dae1762f92236e622dae4e9d5a8dd025d4\"" Dec 13 02:06:03.394971 containerd[1497]: time="2024-12-13T02:06:03.394078118Z" level=info msg="StartContainer for \"e4c8ded74cb53f2a18a53a0fa639f7dae1762f92236e622dae4e9d5a8dd025d4\"" Dec 13 02:06:03.488279 systemd[1]: Started cri-containerd-e4c8ded74cb53f2a18a53a0fa639f7dae1762f92236e622dae4e9d5a8dd025d4.scope - libcontainer container e4c8ded74cb53f2a18a53a0fa639f7dae1762f92236e622dae4e9d5a8dd025d4. Dec 13 02:06:03.533722 containerd[1497]: time="2024-12-13T02:06:03.533627682Z" level=info msg="StartContainer for \"e4c8ded74cb53f2a18a53a0fa639f7dae1762f92236e622dae4e9d5a8dd025d4\" returns successfully" Dec 13 02:06:04.057154 kubelet[2808]: E1213 02:06:04.057108 2808 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7r499" podUID="ef8df160-b623-42c6-abf0-520b155176f4" Dec 13 02:06:04.124767 systemd[1]: cri-containerd-e4c8ded74cb53f2a18a53a0fa639f7dae1762f92236e622dae4e9d5a8dd025d4.scope: Deactivated successfully. Dec 13 02:06:04.169289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4c8ded74cb53f2a18a53a0fa639f7dae1762f92236e622dae4e9d5a8dd025d4-rootfs.mount: Deactivated successfully. 
Dec 13 02:06:04.187895 kubelet[2808]: I1213 02:06:04.187854 2808 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 02:06:04.191833 containerd[1497]: time="2024-12-13T02:06:04.191531797Z" level=info msg="shim disconnected" id=e4c8ded74cb53f2a18a53a0fa639f7dae1762f92236e622dae4e9d5a8dd025d4 namespace=k8s.io Dec 13 02:06:04.191833 containerd[1497]: time="2024-12-13T02:06:04.191598123Z" level=warning msg="cleaning up after shim disconnected" id=e4c8ded74cb53f2a18a53a0fa639f7dae1762f92236e622dae4e9d5a8dd025d4 namespace=k8s.io Dec 13 02:06:04.191833 containerd[1497]: time="2024-12-13T02:06:04.191613421Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:06:04.237862 kubelet[2808]: I1213 02:06:04.237821 2808 topology_manager.go:215] "Topology Admit Handler" podUID="14a5304a-ad78-4750-a5d4-86e0afbc564d" podNamespace="kube-system" podName="coredns-76f75df574-9jrk5" Dec 13 02:06:04.244845 kubelet[2808]: I1213 02:06:04.244814 2808 topology_manager.go:215] "Topology Admit Handler" podUID="6a601425-be16-4451-92a5-52bc1704e2e9" podNamespace="calico-apiserver" podName="calico-apiserver-6fc5fc95d-f8q5t" Dec 13 02:06:04.245058 kubelet[2808]: I1213 02:06:04.244963 2808 topology_manager.go:215] "Topology Admit Handler" podUID="083af0c1-68d3-4b5d-89d6-41223201f0ed" podNamespace="kube-system" podName="coredns-76f75df574-hq4w8" Dec 13 02:06:04.249702 kubelet[2808]: I1213 02:06:04.249669 2808 topology_manager.go:215] "Topology Admit Handler" podUID="54fc91ff-1a5b-460c-b104-ee7484562222" podNamespace="calico-apiserver" podName="calico-apiserver-6fc5fc95d-b9l5l" Dec 13 02:06:04.249832 kubelet[2808]: I1213 02:06:04.249800 2808 topology_manager.go:215] "Topology Admit Handler" podUID="1ecf16d6-9a75-45ea-bf2b-4a4e040aadee" podNamespace="calico-system" podName="calico-kube-controllers-6968b48768-kvc6c" Dec 13 02:06:04.272707 systemd[1]: Created slice kubepods-besteffort-pod1ecf16d6_9a75_45ea_bf2b_4a4e040aadee.slice - libcontainer container kubepods-besteffort-pod1ecf16d6_9a75_45ea_bf2b_4a4e040aadee.slice. Dec 13 02:06:04.273769 systemd[1]: Created slice kubepods-burstable-pod14a5304a_ad78_4750_a5d4_86e0afbc564d.slice - libcontainer container kubepods-burstable-pod14a5304a_ad78_4750_a5d4_86e0afbc564d.slice. Dec 13 02:06:04.284127 systemd[1]: Created slice kubepods-besteffort-pod6a601425_be16_4451_92a5_52bc1704e2e9.slice - libcontainer container kubepods-besteffort-pod6a601425_be16_4451_92a5_52bc1704e2e9.slice. Dec 13 02:06:04.295242 systemd[1]: Created slice kubepods-burstable-pod083af0c1_68d3_4b5d_89d6_41223201f0ed.slice - libcontainer container kubepods-burstable-pod083af0c1_68d3_4b5d_89d6_41223201f0ed.slice. 
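Each admitted pod gets its own systemd slice, and the names above follow directly from the pod's QoS class and UID: the UID's dashes become underscores because "-" separates hierarchy levels in systemd slice names. A simplified Go reconstruction of the naming seen in this log (kubelet's real cgroup manager handles more cases than this):

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName rebuilds the kubepods slice names from QoS class and pod UID.
    func sliceName(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(sliceName("besteffort", "1ecf16d6-9a75-45ea-bf2b-4a4e040aadee"))
        // kubepods-besteffort-pod1ecf16d6_9a75_45ea_bf2b_4a4e040aadee.slice
        fmt.Println(sliceName("burstable", "14a5304a-ad78-4750-a5d4-86e0afbc564d"))
        // kubepods-burstable-pod14a5304a_ad78_4750_a5d4_86e0afbc564d.slice
    }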
Dec 13 02:06:04.305612 kubelet[2808]: I1213 02:06:04.305209 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7466c\" (UniqueName: \"kubernetes.io/projected/6a601425-be16-4451-92a5-52bc1704e2e9-kube-api-access-7466c\") pod \"calico-apiserver-6fc5fc95d-f8q5t\" (UID: \"6a601425-be16-4451-92a5-52bc1704e2e9\") " pod="calico-apiserver/calico-apiserver-6fc5fc95d-f8q5t" Dec 13 02:06:04.305612 kubelet[2808]: I1213 02:06:04.305250 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps8z8\" (UniqueName: \"kubernetes.io/projected/54fc91ff-1a5b-460c-b104-ee7484562222-kube-api-access-ps8z8\") pod \"calico-apiserver-6fc5fc95d-b9l5l\" (UID: \"54fc91ff-1a5b-460c-b104-ee7484562222\") " pod="calico-apiserver/calico-apiserver-6fc5fc95d-b9l5l" Dec 13 02:06:04.305612 kubelet[2808]: I1213 02:06:04.305325 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6a601425-be16-4451-92a5-52bc1704e2e9-calico-apiserver-certs\") pod \"calico-apiserver-6fc5fc95d-f8q5t\" (UID: \"6a601425-be16-4451-92a5-52bc1704e2e9\") " pod="calico-apiserver/calico-apiserver-6fc5fc95d-f8q5t" Dec 13 02:06:04.305612 kubelet[2808]: I1213 02:06:04.305362 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfj4v\" (UniqueName: \"kubernetes.io/projected/14a5304a-ad78-4750-a5d4-86e0afbc564d-kube-api-access-gfj4v\") pod \"coredns-76f75df574-9jrk5\" (UID: \"14a5304a-ad78-4750-a5d4-86e0afbc564d\") " pod="kube-system/coredns-76f75df574-9jrk5" Dec 13 02:06:04.305612 kubelet[2808]: I1213 02:06:04.305388 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ecf16d6-9a75-45ea-bf2b-4a4e040aadee-tigera-ca-bundle\") pod \"calico-kube-controllers-6968b48768-kvc6c\" (UID: \"1ecf16d6-9a75-45ea-bf2b-4a4e040aadee\") " pod="calico-system/calico-kube-controllers-6968b48768-kvc6c" Dec 13 02:06:04.305820 kubelet[2808]: I1213 02:06:04.305422 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjmlq\" (UniqueName: \"kubernetes.io/projected/1ecf16d6-9a75-45ea-bf2b-4a4e040aadee-kube-api-access-rjmlq\") pod \"calico-kube-controllers-6968b48768-kvc6c\" (UID: \"1ecf16d6-9a75-45ea-bf2b-4a4e040aadee\") " pod="calico-system/calico-kube-controllers-6968b48768-kvc6c" Dec 13 02:06:04.305820 kubelet[2808]: I1213 02:06:04.305445 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14a5304a-ad78-4750-a5d4-86e0afbc564d-config-volume\") pod \"coredns-76f75df574-9jrk5\" (UID: \"14a5304a-ad78-4750-a5d4-86e0afbc564d\") " pod="kube-system/coredns-76f75df574-9jrk5" Dec 13 02:06:04.305820 kubelet[2808]: I1213 02:06:04.305467 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/54fc91ff-1a5b-460c-b104-ee7484562222-calico-apiserver-certs\") pod \"calico-apiserver-6fc5fc95d-b9l5l\" (UID: \"54fc91ff-1a5b-460c-b104-ee7484562222\") " pod="calico-apiserver/calico-apiserver-6fc5fc95d-b9l5l" Dec 13 02:06:04.306375 kubelet[2808]: I1213 02:06:04.306356 2808 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfds2\" (UniqueName: \"kubernetes.io/projected/083af0c1-68d3-4b5d-89d6-41223201f0ed-kube-api-access-kfds2\") pod \"coredns-76f75df574-hq4w8\" (UID: \"083af0c1-68d3-4b5d-89d6-41223201f0ed\") " pod="kube-system/coredns-76f75df574-hq4w8" Dec 13 02:06:04.308285 kubelet[2808]: I1213 02:06:04.306621 2808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/083af0c1-68d3-4b5d-89d6-41223201f0ed-config-volume\") pod \"coredns-76f75df574-hq4w8\" (UID: \"083af0c1-68d3-4b5d-89d6-41223201f0ed\") " pod="kube-system/coredns-76f75df574-hq4w8" Dec 13 02:06:04.313803 systemd[1]: Created slice kubepods-besteffort-pod54fc91ff_1a5b_460c_b104_ee7484562222.slice - libcontainer container kubepods-besteffort-pod54fc91ff_1a5b_460c_b104_ee7484562222.slice. Dec 13 02:06:04.580503 containerd[1497]: time="2024-12-13T02:06:04.580341731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6968b48768-kvc6c,Uid:1ecf16d6-9a75-45ea-bf2b-4a4e040aadee,Namespace:calico-system,Attempt:0,}" Dec 13 02:06:04.581565 containerd[1497]: time="2024-12-13T02:06:04.580465195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9jrk5,Uid:14a5304a-ad78-4750-a5d4-86e0afbc564d,Namespace:kube-system,Attempt:0,}" Dec 13 02:06:04.593810 containerd[1497]: time="2024-12-13T02:06:04.593736210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc5fc95d-f8q5t,Uid:6a601425-be16-4451-92a5-52bc1704e2e9,Namespace:calico-apiserver,Attempt:0,}" Dec 13 02:06:04.610725 containerd[1497]: time="2024-12-13T02:06:04.610158975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hq4w8,Uid:083af0c1-68d3-4b5d-89d6-41223201f0ed,Namespace:kube-system,Attempt:0,}" Dec 13 02:06:04.620298 containerd[1497]: time="2024-12-13T02:06:04.620220682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc5fc95d-b9l5l,Uid:54fc91ff-1a5b-460c-b104-ee7484562222,Namespace:calico-apiserver,Attempt:0,}" Dec 13 02:06:04.974866 containerd[1497]: time="2024-12-13T02:06:04.974692816Z" level=error msg="Failed to destroy network for sandbox \"0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:06:04.979361 containerd[1497]: time="2024-12-13T02:06:04.979240343Z" level=error msg="encountered an error cleaning up failed sandbox \"0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:06:04.979361 containerd[1497]: time="2024-12-13T02:06:04.979311408Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc5fc95d-f8q5t,Uid:6a601425-be16-4451-92a5-52bc1704e2e9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Dec 13 02:06:04.987572 kubelet[2808]: E1213 02:06:04.987092 2808 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:06:04.987572 kubelet[2808]: E1213 02:06:04.987220 2808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fc5fc95d-f8q5t" Dec 13 02:06:04.987572 kubelet[2808]: E1213 02:06:04.987241 2808 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fc5fc95d-f8q5t" Dec 13 02:06:04.989149 kubelet[2808]: E1213 02:06:04.989070 2808 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fc5fc95d-f8q5t_calico-apiserver(6a601425-be16-4451-92a5-52bc1704e2e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fc5fc95d-f8q5t_calico-apiserver(6a601425-be16-4451-92a5-52bc1704e2e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fc5fc95d-f8q5t" podUID="6a601425-be16-4451-92a5-52bc1704e2e9" Dec 13 02:06:04.993761 containerd[1497]: time="2024-12-13T02:06:04.993722526Z" level=error msg="Failed to destroy network for sandbox \"aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:06:04.994272 containerd[1497]: time="2024-12-13T02:06:04.994251058Z" level=error msg="encountered an error cleaning up failed sandbox \"aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:06:04.994372 containerd[1497]: time="2024-12-13T02:06:04.994353813Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hq4w8,Uid:083af0c1-68d3-4b5d-89d6-41223201f0ed,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:06:04.994521 containerd[1497]: time="2024-12-13T02:06:04.994502365Z" level=error msg="Failed to destroy network for sandbox \"30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:06:04.995594 containerd[1497]: time="2024-12-13T02:06:04.995411239Z" level=error msg="Failed to destroy network for sandbox \"4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:06:04.996294 kubelet[2808]: E1213 02:06:04.995553 2808 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:06:04.996294 kubelet[2808]: E1213 02:06:04.995597 2808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-hq4w8" Dec 13 02:06:04.996294 kubelet[2808]: E1213 02:06:04.995618 2808 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-hq4w8" Dec 13 02:06:04.996390 containerd[1497]: time="2024-12-13T02:06:04.995876031Z" level=error msg="encountered an error cleaning up failed sandbox \"4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:06:04.996390 containerd[1497]: time="2024-12-13T02:06:04.995909805Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9jrk5,Uid:14a5304a-ad78-4750-a5d4-86e0afbc564d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:06:04.996390 containerd[1497]: time="2024-12-13T02:06:04.996203142Z" level=error msg="encountered an error cleaning up failed sandbox 
\"30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:06:04.996390 containerd[1497]: time="2024-12-13T02:06:04.996232136Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc5fc95d-b9l5l,Uid:54fc91ff-1a5b-460c-b104-ee7484562222,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:06:04.996574 kubelet[2808]: E1213 02:06:04.995670 2808 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-hq4w8_kube-system(083af0c1-68d3-4b5d-89d6-41223201f0ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-hq4w8_kube-system(083af0c1-68d3-4b5d-89d6-41223201f0ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-hq4w8" podUID="083af0c1-68d3-4b5d-89d6-41223201f0ed" Dec 13 02:06:04.996982 kubelet[2808]: E1213 02:06:04.996776 2808 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:06:04.996982 kubelet[2808]: E1213 02:06:04.996812 2808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fc5fc95d-b9l5l" Dec 13 02:06:04.996982 kubelet[2808]: E1213 02:06:04.996829 2808 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fc5fc95d-b9l5l" Dec 13 02:06:04.997107 kubelet[2808]: E1213 02:06:04.996864 2808 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fc5fc95d-b9l5l_calico-apiserver(54fc91ff-1a5b-460c-b104-ee7484562222)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fc5fc95d-b9l5l_calico-apiserver(54fc91ff-1a5b-460c-b104-ee7484562222)\\\": rpc error: code = Unknown desc 
= failed to setup network for sandbox \\\"30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fc5fc95d-b9l5l" podUID="54fc91ff-1a5b-460c-b104-ee7484562222" Dec 13 02:06:04.997107 kubelet[2808]: E1213 02:06:04.996890 2808 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:06:04.997107 kubelet[2808]: E1213 02:06:04.996911 2808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-9jrk5" Dec 13 02:06:04.997267 kubelet[2808]: E1213 02:06:04.996928 2808 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-9jrk5" Dec 13 02:06:04.997267 kubelet[2808]: E1213 02:06:04.996963 2808 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-9jrk5_kube-system(14a5304a-ad78-4750-a5d4-86e0afbc564d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-9jrk5_kube-system(14a5304a-ad78-4750-a5d4-86e0afbc564d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-9jrk5" podUID="14a5304a-ad78-4750-a5d4-86e0afbc564d" Dec 13 02:06:04.997583 containerd[1497]: time="2024-12-13T02:06:04.997562922Z" level=error msg="Failed to destroy network for sandbox \"b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:06:04.997946 containerd[1497]: time="2024-12-13T02:06:04.997881385Z" level=error msg="encountered an error cleaning up failed sandbox \"b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:06:04.997946 containerd[1497]: time="2024-12-13T02:06:04.997915399Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6968b48768-kvc6c,Uid:1ecf16d6-9a75-45ea-bf2b-4a4e040aadee,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:06:04.998265 kubelet[2808]: E1213 02:06:04.998239 2808 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:06:04.998301 kubelet[2808]: E1213 02:06:04.998280 2808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6968b48768-kvc6c" Dec 13 02:06:04.998301 kubelet[2808]: E1213 02:06:04.998297 2808 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6968b48768-kvc6c" Dec 13 02:06:04.998354 kubelet[2808]: E1213 02:06:04.998330 2808 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6968b48768-kvc6c_calico-system(1ecf16d6-9a75-45ea-bf2b-4a4e040aadee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6968b48768-kvc6c_calico-system(1ecf16d6-9a75-45ea-bf2b-4a4e040aadee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6968b48768-kvc6c" podUID="1ecf16d6-9a75-45ea-bf2b-4a4e040aadee" Dec 13 02:06:05.223677 containerd[1497]: time="2024-12-13T02:06:05.221925581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 02:06:05.223920 kubelet[2808]: I1213 02:06:05.222586 2808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" Dec 13 02:06:05.234759 kubelet[2808]: I1213 02:06:05.233854 2808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" Dec 13 02:06:05.275335 containerd[1497]: time="2024-12-13T02:06:05.273849689Z" level=info msg="StopPodSandbox for \"aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc\"" 
Dec 13 02:06:05.278656 containerd[1497]: time="2024-12-13T02:06:05.277718629Z" level=info msg="Ensure that sandbox aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc in task-service has been cleanup successfully" Dec 13 02:06:05.278656 containerd[1497]: time="2024-12-13T02:06:05.277963594Z" level=info msg="StopPodSandbox for \"0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3\"" Dec 13 02:06:05.278656 containerd[1497]: time="2024-12-13T02:06:05.278282038Z" level=info msg="Ensure that sandbox 0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3 in task-service has been cleanup successfully" Dec 13 02:06:05.297351 kubelet[2808]: I1213 02:06:05.297297 2808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" Dec 13 02:06:05.303651 containerd[1497]: time="2024-12-13T02:06:05.302987788Z" level=info msg="StopPodSandbox for \"4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0\"" Dec 13 02:06:05.303651 containerd[1497]: time="2024-12-13T02:06:05.303317504Z" level=info msg="Ensure that sandbox 4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0 in task-service has been cleanup successfully" Dec 13 02:06:05.311639 kubelet[2808]: I1213 02:06:05.311594 2808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" Dec 13 02:06:05.322151 containerd[1497]: time="2024-12-13T02:06:05.321601652Z" level=info msg="StopPodSandbox for \"b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab\"" Dec 13 02:06:05.324152 containerd[1497]: time="2024-12-13T02:06:05.324113217Z" level=info msg="Ensure that sandbox b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab in task-service has been cleanup successfully" Dec 13 02:06:05.324392 kubelet[2808]: I1213 02:06:05.324351 2808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" Dec 13 02:06:05.327440 containerd[1497]: time="2024-12-13T02:06:05.327388561Z" level=info msg="StopPodSandbox for \"30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff\"" Dec 13 02:06:05.327610 containerd[1497]: time="2024-12-13T02:06:05.327582318Z" level=info msg="Ensure that sandbox 30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff in task-service has been cleanup successfully" Dec 13 02:06:05.406365 containerd[1497]: time="2024-12-13T02:06:05.406285941Z" level=error msg="StopPodSandbox for \"30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff\" failed" error="failed to destroy network for sandbox \"30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:06:05.406685 kubelet[2808]: E1213 02:06:05.406641 2808 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" Dec 13 
02:06:05.412113 kubelet[2808]: E1213 02:06:05.411122 2808 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff"} Dec 13 02:06:05.412342 kubelet[2808]: E1213 02:06:05.412123 2808 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"54fc91ff-1a5b-460c-b104-ee7484562222\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:06:05.412342 kubelet[2808]: E1213 02:06:05.412161 2808 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"54fc91ff-1a5b-460c-b104-ee7484562222\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fc5fc95d-b9l5l" podUID="54fc91ff-1a5b-460c-b104-ee7484562222" Dec 13 02:06:05.414837 containerd[1497]: time="2024-12-13T02:06:05.414420283Z" level=error msg="StopPodSandbox for \"0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3\" failed" error="failed to destroy network for sandbox \"0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:06:05.414901 kubelet[2808]: E1213 02:06:05.414850 2808 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" Dec 13 02:06:05.414901 kubelet[2808]: E1213 02:06:05.414881 2808 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3"} Dec 13 02:06:05.415944 kubelet[2808]: E1213 02:06:05.414912 2808 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6a601425-be16-4451-92a5-52bc1704e2e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:06:05.415944 kubelet[2808]: E1213 02:06:05.414937 2808 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6a601425-be16-4451-92a5-52bc1704e2e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fc5fc95d-f8q5t" podUID="6a601425-be16-4451-92a5-52bc1704e2e9" Dec 13 02:06:05.416178 containerd[1497]: time="2024-12-13T02:06:05.416060154Z" level=error msg="StopPodSandbox for \"aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc\" failed" error="failed to destroy network for sandbox \"aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:06:05.419035 kubelet[2808]: E1213 02:06:05.416760 2808 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" Dec 13 02:06:05.419035 kubelet[2808]: E1213 02:06:05.416800 2808 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc"} Dec 13 02:06:05.419035 kubelet[2808]: E1213 02:06:05.416830 2808 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"083af0c1-68d3-4b5d-89d6-41223201f0ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:06:05.419035 kubelet[2808]: E1213 02:06:05.416853 2808 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"083af0c1-68d3-4b5d-89d6-41223201f0ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-hq4w8" podUID="083af0c1-68d3-4b5d-89d6-41223201f0ed" Dec 13 02:06:05.422907 containerd[1497]: time="2024-12-13T02:06:05.422832833Z" level=error msg="StopPodSandbox for \"4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0\" failed" error="failed to destroy network for sandbox \"4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:06:05.423151 kubelet[2808]: E1213 02:06:05.423121 2808 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" Dec 13 02:06:05.423196 kubelet[2808]: E1213 02:06:05.423165 2808 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0"} Dec 13 02:06:05.423252 kubelet[2808]: E1213 02:06:05.423197 2808 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"14a5304a-ad78-4750-a5d4-86e0afbc564d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:06:05.423252 kubelet[2808]: E1213 02:06:05.423225 2808 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"14a5304a-ad78-4750-a5d4-86e0afbc564d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-9jrk5" podUID="14a5304a-ad78-4750-a5d4-86e0afbc564d" Dec 13 02:06:05.426104 containerd[1497]: time="2024-12-13T02:06:05.426058212Z" level=error msg="StopPodSandbox for \"b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab\" failed" error="failed to destroy network for sandbox \"b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:06:05.426330 kubelet[2808]: E1213 02:06:05.426291 2808 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" Dec 13 02:06:05.426330 kubelet[2808]: E1213 02:06:05.426323 2808 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab"} Dec 13 02:06:05.426917 kubelet[2808]: E1213 02:06:05.426354 2808 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1ecf16d6-9a75-45ea-bf2b-4a4e040aadee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Dec 13 02:06:05.426917 kubelet[2808]: E1213 02:06:05.426394 2808 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1ecf16d6-9a75-45ea-bf2b-4a4e040aadee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6968b48768-kvc6c" podUID="1ecf16d6-9a75-45ea-bf2b-4a4e040aadee" Dec 13 02:06:05.434463 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab-shm.mount: Deactivated successfully. Dec 13 02:06:06.070733 systemd[1]: Created slice kubepods-besteffort-podef8df160_b623_42c6_abf0_520b155176f4.slice - libcontainer container kubepods-besteffort-podef8df160_b623_42c6_abf0_520b155176f4.slice. Dec 13 02:06:06.076601 containerd[1497]: time="2024-12-13T02:06:06.076525548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7r499,Uid:ef8df160-b623-42c6-abf0-520b155176f4,Namespace:calico-system,Attempt:0,}" Dec 13 02:06:06.193617 containerd[1497]: time="2024-12-13T02:06:06.193519621Z" level=error msg="Failed to destroy network for sandbox \"812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:06:06.194362 containerd[1497]: time="2024-12-13T02:06:06.194270947Z" level=error msg="encountered an error cleaning up failed sandbox \"812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:06:06.194413 containerd[1497]: time="2024-12-13T02:06:06.194367830Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7r499,Uid:ef8df160-b623-42c6-abf0-520b155176f4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:06:06.194764 kubelet[2808]: E1213 02:06:06.194735 2808 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:06:06.194851 kubelet[2808]: E1213 02:06:06.194815 2808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7r499" Dec 13 02:06:06.194891 kubelet[2808]: E1213 02:06:06.194853 2808 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7r499" Dec 13 02:06:06.196047 kubelet[2808]: E1213 02:06:06.194943 2808 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7r499_calico-system(ef8df160-b623-42c6-abf0-520b155176f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7r499_calico-system(ef8df160-b623-42c6-abf0-520b155176f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7r499" podUID="ef8df160-b623-42c6-abf0-520b155176f4" Dec 13 02:06:06.199561 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062-shm.mount: Deactivated successfully. Dec 13 02:06:06.357087 kubelet[2808]: I1213 02:06:06.356840 2808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" Dec 13 02:06:06.360117 containerd[1497]: time="2024-12-13T02:06:06.359062475Z" level=info msg="StopPodSandbox for \"812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062\"" Dec 13 02:06:06.360117 containerd[1497]: time="2024-12-13T02:06:06.359431895Z" level=info msg="Ensure that sandbox 812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062 in task-service has been cleanup successfully" Dec 13 02:06:06.439503 containerd[1497]: time="2024-12-13T02:06:06.439412072Z" level=error msg="StopPodSandbox for \"812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062\" failed" error="failed to destroy network for sandbox \"812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:06:06.440296 kubelet[2808]: E1213 02:06:06.440242 2808 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" Dec 13 02:06:06.440517 kubelet[2808]: E1213 02:06:06.440314 2808 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062"} Dec 13 02:06:06.440517 kubelet[2808]: E1213 02:06:06.440367 2808 kuberuntime_manager.go:1081] "killPodWithSyncResult 
failed" err="failed to \"KillPodSandbox\" for \"ef8df160-b623-42c6-abf0-520b155176f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:06:06.440517 kubelet[2808]: E1213 02:06:06.440412 2808 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ef8df160-b623-42c6-abf0-520b155176f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7r499" podUID="ef8df160-b623-42c6-abf0-520b155176f4" Dec 13 02:06:12.724769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount887529266.mount: Deactivated successfully. Dec 13 02:06:12.807439 containerd[1497]: time="2024-12-13T02:06:12.807345503Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 02:06:12.825181 containerd[1497]: time="2024-12-13T02:06:12.825112536Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.587553946s" Dec 13 02:06:12.825417 containerd[1497]: time="2024-12-13T02:06:12.825393219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 02:06:12.856788 containerd[1497]: time="2024-12-13T02:06:12.855887129Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:06:12.883819 containerd[1497]: time="2024-12-13T02:06:12.883785572Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:06:12.885024 containerd[1497]: time="2024-12-13T02:06:12.884579959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:06:13.024569 containerd[1497]: time="2024-12-13T02:06:13.023912937Z" level=info msg="CreateContainer within sandbox \"90a5cc69734bb543a672ce1250a7e8302b27a01024dff530f27d94f0ff787d13\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 02:06:13.162987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3209273076.mount: Deactivated successfully. 
Dec 13 02:06:13.186417 containerd[1497]: time="2024-12-13T02:06:13.186362045Z" level=info msg="CreateContainer within sandbox \"90a5cc69734bb543a672ce1250a7e8302b27a01024dff530f27d94f0ff787d13\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"550ff4239d6d73315f835480efa1a53ca0f6d4e1c9873b1528cce072676515a9\"" Dec 13 02:06:13.191397 containerd[1497]: time="2024-12-13T02:06:13.191365414Z" level=info msg="StartContainer for \"550ff4239d6d73315f835480efa1a53ca0f6d4e1c9873b1528cce072676515a9\"" Dec 13 02:06:13.295676 systemd[1]: Started cri-containerd-550ff4239d6d73315f835480efa1a53ca0f6d4e1c9873b1528cce072676515a9.scope - libcontainer container 550ff4239d6d73315f835480efa1a53ca0f6d4e1c9873b1528cce072676515a9. Dec 13 02:06:13.349711 containerd[1497]: time="2024-12-13T02:06:13.349529339Z" level=info msg="StartContainer for \"550ff4239d6d73315f835480efa1a53ca0f6d4e1c9873b1528cce072676515a9\" returns successfully" Dec 13 02:06:13.604468 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 02:06:13.605722 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 13 02:06:14.421895 kubelet[2808]: I1213 02:06:14.421815 2808 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:06:15.454150 kernel: bpftool[4053]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 02:06:15.772748 systemd-networkd[1398]: vxlan.calico: Link UP Dec 13 02:06:15.772760 systemd-networkd[1398]: vxlan.calico: Gained carrier Dec 13 02:06:16.064212 containerd[1497]: time="2024-12-13T02:06:16.062800443Z" level=info msg="StopPodSandbox for \"aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc\"" Dec 13 02:06:16.165420 kubelet[2808]: I1213 02:06:16.164889 2808 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-8qxk9" podStartSLOduration=4.4141245510000005 podStartE2EDuration="24.158680684s" podCreationTimestamp="2024-12-13 02:05:52 +0000 UTC" firstStartedPulling="2024-12-13 02:05:53.0902909 +0000 UTC m=+20.199172656" lastFinishedPulling="2024-12-13 02:06:12.834847033 +0000 UTC m=+39.943728789" observedRunningTime="2024-12-13 02:06:13.423099638 +0000 UTC m=+40.531981425" watchObservedRunningTime="2024-12-13 02:06:16.158680684 +0000 UTC m=+43.267562440" Dec 13 02:06:16.406701 containerd[1497]: 2024-12-13 02:06:16.157 [INFO][4120] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" Dec 13 02:06:16.406701 containerd[1497]: 2024-12-13 02:06:16.158 [INFO][4120] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" iface="eth0" netns="/var/run/netns/cni-af9a44b1-0e9f-eafa-d593-7f3fcb829bb4" Dec 13 02:06:16.406701 containerd[1497]: 2024-12-13 02:06:16.158 [INFO][4120] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" iface="eth0" netns="/var/run/netns/cni-af9a44b1-0e9f-eafa-d593-7f3fcb829bb4" Dec 13 02:06:16.406701 containerd[1497]: 2024-12-13 02:06:16.160 [INFO][4120] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" iface="eth0" netns="/var/run/netns/cni-af9a44b1-0e9f-eafa-d593-7f3fcb829bb4" Dec 13 02:06:16.406701 containerd[1497]: 2024-12-13 02:06:16.160 [INFO][4120] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" Dec 13 02:06:16.406701 containerd[1497]: 2024-12-13 02:06:16.160 [INFO][4120] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" Dec 13 02:06:16.406701 containerd[1497]: 2024-12-13 02:06:16.370 [INFO][4143] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" HandleID="k8s-pod-network.aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" Workload="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--hq4w8-eth0" Dec 13 02:06:16.406701 containerd[1497]: 2024-12-13 02:06:16.373 [INFO][4143] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:06:16.406701 containerd[1497]: 2024-12-13 02:06:16.374 [INFO][4143] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:06:16.406701 containerd[1497]: 2024-12-13 02:06:16.389 [WARNING][4143] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" HandleID="k8s-pod-network.aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" Workload="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--hq4w8-eth0" Dec 13 02:06:16.406701 containerd[1497]: 2024-12-13 02:06:16.389 [INFO][4143] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" HandleID="k8s-pod-network.aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" Workload="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--hq4w8-eth0" Dec 13 02:06:16.406701 containerd[1497]: 2024-12-13 02:06:16.391 [INFO][4143] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:06:16.406701 containerd[1497]: 2024-12-13 02:06:16.397 [INFO][4120] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" Dec 13 02:06:16.416921 systemd[1]: run-netns-cni\x2daf9a44b1\x2d0e9f\x2deafa\x2dd593\x2d7f3fcb829bb4.mount: Deactivated successfully. 
Dec 13 02:06:16.424318 containerd[1497]: time="2024-12-13T02:06:16.424252947Z" level=info msg="TearDown network for sandbox \"aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc\" successfully" Dec 13 02:06:16.424318 containerd[1497]: time="2024-12-13T02:06:16.424310876Z" level=info msg="StopPodSandbox for \"aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc\" returns successfully" Dec 13 02:06:16.425572 containerd[1497]: time="2024-12-13T02:06:16.425524801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hq4w8,Uid:083af0c1-68d3-4b5d-89d6-41223201f0ed,Namespace:kube-system,Attempt:1,}" Dec 13 02:06:16.629245 systemd-networkd[1398]: cali059ff5373c7: Link UP Dec 13 02:06:16.629495 systemd-networkd[1398]: cali059ff5373c7: Gained carrier Dec 13 02:06:16.659214 containerd[1497]: 2024-12-13 02:06:16.524 [INFO][4156] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--hq4w8-eth0 coredns-76f75df574- kube-system 083af0c1-68d3-4b5d-89d6-41223201f0ed 777 0 2024-12-13 02:05:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-1-3-45a43b40ef coredns-76f75df574-hq4w8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali059ff5373c7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1744a34f01654bd025a8a68a4cd64861d1c4705852d964ad3cb9df4c28eab455" Namespace="kube-system" Pod="coredns-76f75df574-hq4w8" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--hq4w8-" Dec 13 02:06:16.659214 containerd[1497]: 2024-12-13 02:06:16.524 [INFO][4156] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1744a34f01654bd025a8a68a4cd64861d1c4705852d964ad3cb9df4c28eab455" Namespace="kube-system" Pod="coredns-76f75df574-hq4w8" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--hq4w8-eth0" Dec 13 02:06:16.659214 containerd[1497]: 2024-12-13 02:06:16.559 [INFO][4162] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1744a34f01654bd025a8a68a4cd64861d1c4705852d964ad3cb9df4c28eab455" HandleID="k8s-pod-network.1744a34f01654bd025a8a68a4cd64861d1c4705852d964ad3cb9df4c28eab455" Workload="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--hq4w8-eth0" Dec 13 02:06:16.659214 containerd[1497]: 2024-12-13 02:06:16.569 [INFO][4162] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1744a34f01654bd025a8a68a4cd64861d1c4705852d964ad3cb9df4c28eab455" HandleID="k8s-pod-network.1744a34f01654bd025a8a68a4cd64861d1c4705852d964ad3cb9df4c28eab455" Workload="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--hq4w8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291680), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-1-3-45a43b40ef", "pod":"coredns-76f75df574-hq4w8", "timestamp":"2024-12-13 02:06:16.55995524 +0000 UTC"}, Hostname:"ci-4081-2-1-3-45a43b40ef", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:06:16.659214 containerd[1497]: 2024-12-13 02:06:16.569 [INFO][4162] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 02:06:16.659214 containerd[1497]: 2024-12-13 02:06:16.569 [INFO][4162] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:06:16.659214 containerd[1497]: 2024-12-13 02:06:16.569 [INFO][4162] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-3-45a43b40ef' Dec 13 02:06:16.659214 containerd[1497]: 2024-12-13 02:06:16.571 [INFO][4162] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1744a34f01654bd025a8a68a4cd64861d1c4705852d964ad3cb9df4c28eab455" host="ci-4081-2-1-3-45a43b40ef" Dec 13 02:06:16.659214 containerd[1497]: 2024-12-13 02:06:16.578 [INFO][4162] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-3-45a43b40ef" Dec 13 02:06:16.659214 containerd[1497]: 2024-12-13 02:06:16.584 [INFO][4162] ipam/ipam.go 489: Trying affinity for 192.168.61.0/26 host="ci-4081-2-1-3-45a43b40ef" Dec 13 02:06:16.659214 containerd[1497]: 2024-12-13 02:06:16.587 [INFO][4162] ipam/ipam.go 155: Attempting to load block cidr=192.168.61.0/26 host="ci-4081-2-1-3-45a43b40ef" Dec 13 02:06:16.659214 containerd[1497]: 2024-12-13 02:06:16.589 [INFO][4162] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.0/26 host="ci-4081-2-1-3-45a43b40ef" Dec 13 02:06:16.659214 containerd[1497]: 2024-12-13 02:06:16.589 [INFO][4162] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.0/26 handle="k8s-pod-network.1744a34f01654bd025a8a68a4cd64861d1c4705852d964ad3cb9df4c28eab455" host="ci-4081-2-1-3-45a43b40ef" Dec 13 02:06:16.659214 containerd[1497]: 2024-12-13 02:06:16.590 [INFO][4162] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1744a34f01654bd025a8a68a4cd64861d1c4705852d964ad3cb9df4c28eab455 Dec 13 02:06:16.659214 containerd[1497]: 2024-12-13 02:06:16.594 [INFO][4162] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.61.0/26 handle="k8s-pod-network.1744a34f01654bd025a8a68a4cd64861d1c4705852d964ad3cb9df4c28eab455" host="ci-4081-2-1-3-45a43b40ef" Dec 13 02:06:16.659214 containerd[1497]: 2024-12-13 02:06:16.614 [INFO][4162] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.61.1/26] block=192.168.61.0/26 handle="k8s-pod-network.1744a34f01654bd025a8a68a4cd64861d1c4705852d964ad3cb9df4c28eab455" host="ci-4081-2-1-3-45a43b40ef" Dec 13 02:06:16.659214 containerd[1497]: 2024-12-13 02:06:16.615 [INFO][4162] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.1/26] handle="k8s-pod-network.1744a34f01654bd025a8a68a4cd64861d1c4705852d964ad3cb9df4c28eab455" host="ci-4081-2-1-3-45a43b40ef" Dec 13 02:06:16.659214 containerd[1497]: 2024-12-13 02:06:16.615 [INFO][4162] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
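
[Annotation] The IPAM trace above confirms an affinity for block 192.168.61.0/26 before claiming 192.168.61.1. A /26 holds 2^(32-26) = 64 addresses, handed out per node from affine blocks, so every pod on this host lands in 192.168.61.0-63. A short Go sketch of the block arithmetic, illustrative only:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.61.0/26")
	size := 1 << (32 - block.Bits()) // 2^(32-26) = 64 addresses per block
	fmt.Printf("block %s holds %d addresses\n", block, size)

	// Walk the block the way the log shows claims landing: .1, .2, .3, ...
	addr := block.Addr().Next() // skip .0, the network address
	for i := 0; i < 4; i++ {
		fmt.Println("next claim:", addr) // 192.168.61.1 .. 192.168.61.4
		addr = addr.Next()
	}
}
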
Dec 13 02:06:16.659214 containerd[1497]: 2024-12-13 02:06:16.615 [INFO][4162] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.1/26] IPv6=[] ContainerID="1744a34f01654bd025a8a68a4cd64861d1c4705852d964ad3cb9df4c28eab455" HandleID="k8s-pod-network.1744a34f01654bd025a8a68a4cd64861d1c4705852d964ad3cb9df4c28eab455" Workload="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--hq4w8-eth0" Dec 13 02:06:16.661565 containerd[1497]: 2024-12-13 02:06:16.623 [INFO][4156] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1744a34f01654bd025a8a68a4cd64861d1c4705852d964ad3cb9df4c28eab455" Namespace="kube-system" Pod="coredns-76f75df574-hq4w8" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--hq4w8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--hq4w8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"083af0c1-68d3-4b5d-89d6-41223201f0ed", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 5, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-3-45a43b40ef", ContainerID:"", Pod:"coredns-76f75df574-hq4w8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali059ff5373c7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:06:16.661565 containerd[1497]: 2024-12-13 02:06:16.623 [INFO][4156] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.61.1/32] ContainerID="1744a34f01654bd025a8a68a4cd64861d1c4705852d964ad3cb9df4c28eab455" Namespace="kube-system" Pod="coredns-76f75df574-hq4w8" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--hq4w8-eth0" Dec 13 02:06:16.661565 containerd[1497]: 2024-12-13 02:06:16.623 [INFO][4156] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali059ff5373c7 ContainerID="1744a34f01654bd025a8a68a4cd64861d1c4705852d964ad3cb9df4c28eab455" Namespace="kube-system" Pod="coredns-76f75df574-hq4w8" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--hq4w8-eth0" Dec 13 02:06:16.661565 containerd[1497]: 2024-12-13 02:06:16.630 [INFO][4156] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1744a34f01654bd025a8a68a4cd64861d1c4705852d964ad3cb9df4c28eab455" Namespace="kube-system" Pod="coredns-76f75df574-hq4w8" 
WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--hq4w8-eth0" Dec 13 02:06:16.661565 containerd[1497]: 2024-12-13 02:06:16.630 [INFO][4156] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1744a34f01654bd025a8a68a4cd64861d1c4705852d964ad3cb9df4c28eab455" Namespace="kube-system" Pod="coredns-76f75df574-hq4w8" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--hq4w8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--hq4w8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"083af0c1-68d3-4b5d-89d6-41223201f0ed", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 5, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-3-45a43b40ef", ContainerID:"1744a34f01654bd025a8a68a4cd64861d1c4705852d964ad3cb9df4c28eab455", Pod:"coredns-76f75df574-hq4w8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali059ff5373c7", MAC:"a2:88:df:65:c5:33", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:06:16.661565 containerd[1497]: 2024-12-13 02:06:16.650 [INFO][4156] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1744a34f01654bd025a8a68a4cd64861d1c4705852d964ad3cb9df4c28eab455" Namespace="kube-system" Pod="coredns-76f75df574-hq4w8" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--hq4w8-eth0" Dec 13 02:06:16.698733 containerd[1497]: time="2024-12-13T02:06:16.698461536Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:06:16.698733 containerd[1497]: time="2024-12-13T02:06:16.698520106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:06:16.698733 containerd[1497]: time="2024-12-13T02:06:16.698540997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:06:16.698733 containerd[1497]: time="2024-12-13T02:06:16.698632611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:06:16.725156 systemd[1]: Started cri-containerd-1744a34f01654bd025a8a68a4cd64861d1c4705852d964ad3cb9df4c28eab455.scope - libcontainer container 1744a34f01654bd025a8a68a4cd64861d1c4705852d964ad3cb9df4c28eab455. Dec 13 02:06:16.768909 containerd[1497]: time="2024-12-13T02:06:16.768793382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hq4w8,Uid:083af0c1-68d3-4b5d-89d6-41223201f0ed,Namespace:kube-system,Attempt:1,} returns sandbox id \"1744a34f01654bd025a8a68a4cd64861d1c4705852d964ad3cb9df4c28eab455\"" Dec 13 02:06:16.778955 containerd[1497]: time="2024-12-13T02:06:16.778896002Z" level=info msg="CreateContainer within sandbox \"1744a34f01654bd025a8a68a4cd64861d1c4705852d964ad3cb9df4c28eab455\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:06:16.799358 containerd[1497]: time="2024-12-13T02:06:16.799258360Z" level=info msg="CreateContainer within sandbox \"1744a34f01654bd025a8a68a4cd64861d1c4705852d964ad3cb9df4c28eab455\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3488d89991541f84235d49798d7ff8ebe08d37b3590b3d007639bdc2ce7f6792\"" Dec 13 02:06:16.799990 containerd[1497]: time="2024-12-13T02:06:16.799950004Z" level=info msg="StartContainer for \"3488d89991541f84235d49798d7ff8ebe08d37b3590b3d007639bdc2ce7f6792\"" Dec 13 02:06:16.829481 systemd[1]: Started cri-containerd-3488d89991541f84235d49798d7ff8ebe08d37b3590b3d007639bdc2ce7f6792.scope - libcontainer container 3488d89991541f84235d49798d7ff8ebe08d37b3590b3d007639bdc2ce7f6792. Dec 13 02:06:16.873761 containerd[1497]: time="2024-12-13T02:06:16.873687435Z" level=info msg="StartContainer for \"3488d89991541f84235d49798d7ff8ebe08d37b3590b3d007639bdc2ce7f6792\" returns successfully" Dec 13 02:06:17.080561 containerd[1497]: time="2024-12-13T02:06:17.080491544Z" level=info msg="StopPodSandbox for \"812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062\"" Dec 13 02:06:17.216538 containerd[1497]: 2024-12-13 02:06:17.160 [INFO][4270] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" Dec 13 02:06:17.216538 containerd[1497]: 2024-12-13 02:06:17.160 [INFO][4270] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" iface="eth0" netns="/var/run/netns/cni-9988dcf6-1e16-748d-d05e-d29cbd141e5d" Dec 13 02:06:17.216538 containerd[1497]: 2024-12-13 02:06:17.160 [INFO][4270] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" iface="eth0" netns="/var/run/netns/cni-9988dcf6-1e16-748d-d05e-d29cbd141e5d" Dec 13 02:06:17.216538 containerd[1497]: 2024-12-13 02:06:17.161 [INFO][4270] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" iface="eth0" netns="/var/run/netns/cni-9988dcf6-1e16-748d-d05e-d29cbd141e5d" Dec 13 02:06:17.216538 containerd[1497]: 2024-12-13 02:06:17.161 [INFO][4270] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" Dec 13 02:06:17.216538 containerd[1497]: 2024-12-13 02:06:17.161 [INFO][4270] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" Dec 13 02:06:17.216538 containerd[1497]: 2024-12-13 02:06:17.203 [INFO][4280] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" HandleID="k8s-pod-network.812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" Workload="ci--4081--2--1--3--45a43b40ef-k8s-csi--node--driver--7r499-eth0" Dec 13 02:06:17.216538 containerd[1497]: 2024-12-13 02:06:17.203 [INFO][4280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:06:17.216538 containerd[1497]: 2024-12-13 02:06:17.203 [INFO][4280] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:06:17.216538 containerd[1497]: 2024-12-13 02:06:17.209 [WARNING][4280] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" HandleID="k8s-pod-network.812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" Workload="ci--4081--2--1--3--45a43b40ef-k8s-csi--node--driver--7r499-eth0" Dec 13 02:06:17.216538 containerd[1497]: 2024-12-13 02:06:17.209 [INFO][4280] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" HandleID="k8s-pod-network.812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" Workload="ci--4081--2--1--3--45a43b40ef-k8s-csi--node--driver--7r499-eth0" Dec 13 02:06:17.216538 containerd[1497]: 2024-12-13 02:06:17.210 [INFO][4280] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:06:17.216538 containerd[1497]: 2024-12-13 02:06:17.213 [INFO][4270] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" Dec 13 02:06:17.216538 containerd[1497]: time="2024-12-13T02:06:17.216571808Z" level=info msg="TearDown network for sandbox \"812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062\" successfully" Dec 13 02:06:17.217187 containerd[1497]: time="2024-12-13T02:06:17.216600101Z" level=info msg="StopPodSandbox for \"812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062\" returns successfully" Dec 13 02:06:17.217769 containerd[1497]: time="2024-12-13T02:06:17.217734946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7r499,Uid:ef8df160-b623-42c6-abf0-520b155176f4,Namespace:calico-system,Attempt:1,}" Dec 13 02:06:17.348289 systemd-networkd[1398]: cali4f276ea8314: Link UP Dec 13 02:06:17.350208 systemd-networkd[1398]: cali4f276ea8314: Gained carrier Dec 13 02:06:17.368505 containerd[1497]: 2024-12-13 02:06:17.268 [INFO][4286] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--3--45a43b40ef-k8s-csi--node--driver--7r499-eth0 csi-node-driver- calico-system ef8df160-b623-42c6-abf0-520b155176f4 786 0 2024-12-13 02:05:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-2-1-3-45a43b40ef csi-node-driver-7r499 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4f276ea8314 [] []}} ContainerID="7633383347db8ff8619527348093bee3fea1d07ace630f75b9187c3052b761b1" Namespace="calico-system" Pod="csi-node-driver-7r499" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-csi--node--driver--7r499-" Dec 13 02:06:17.368505 containerd[1497]: 2024-12-13 02:06:17.268 [INFO][4286] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7633383347db8ff8619527348093bee3fea1d07ace630f75b9187c3052b761b1" Namespace="calico-system" Pod="csi-node-driver-7r499" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-csi--node--driver--7r499-eth0" Dec 13 02:06:17.368505 containerd[1497]: 2024-12-13 02:06:17.300 [INFO][4298] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7633383347db8ff8619527348093bee3fea1d07ace630f75b9187c3052b761b1" HandleID="k8s-pod-network.7633383347db8ff8619527348093bee3fea1d07ace630f75b9187c3052b761b1" Workload="ci--4081--2--1--3--45a43b40ef-k8s-csi--node--driver--7r499-eth0" Dec 13 02:06:17.368505 containerd[1497]: 2024-12-13 02:06:17.309 [INFO][4298] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7633383347db8ff8619527348093bee3fea1d07ace630f75b9187c3052b761b1" HandleID="k8s-pod-network.7633383347db8ff8619527348093bee3fea1d07ace630f75b9187c3052b761b1" Workload="ci--4081--2--1--3--45a43b40ef-k8s-csi--node--driver--7r499-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334dd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-1-3-45a43b40ef", "pod":"csi-node-driver-7r499", "timestamp":"2024-12-13 02:06:17.300705555 +0000 UTC"}, Hostname:"ci-4081-2-1-3-45a43b40ef", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:06:17.368505 containerd[1497]: 2024-12-13 
02:06:17.310 [INFO][4298] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:06:17.368505 containerd[1497]: 2024-12-13 02:06:17.310 [INFO][4298] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:06:17.368505 containerd[1497]: 2024-12-13 02:06:17.310 [INFO][4298] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-3-45a43b40ef' Dec 13 02:06:17.368505 containerd[1497]: 2024-12-13 02:06:17.311 [INFO][4298] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7633383347db8ff8619527348093bee3fea1d07ace630f75b9187c3052b761b1" host="ci-4081-2-1-3-45a43b40ef" Dec 13 02:06:17.368505 containerd[1497]: 2024-12-13 02:06:17.316 [INFO][4298] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-3-45a43b40ef" Dec 13 02:06:17.368505 containerd[1497]: 2024-12-13 02:06:17.320 [INFO][4298] ipam/ipam.go 489: Trying affinity for 192.168.61.0/26 host="ci-4081-2-1-3-45a43b40ef" Dec 13 02:06:17.368505 containerd[1497]: 2024-12-13 02:06:17.326 [INFO][4298] ipam/ipam.go 155: Attempting to load block cidr=192.168.61.0/26 host="ci-4081-2-1-3-45a43b40ef" Dec 13 02:06:17.368505 containerd[1497]: 2024-12-13 02:06:17.328 [INFO][4298] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.0/26 host="ci-4081-2-1-3-45a43b40ef" Dec 13 02:06:17.368505 containerd[1497]: 2024-12-13 02:06:17.328 [INFO][4298] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.0/26 handle="k8s-pod-network.7633383347db8ff8619527348093bee3fea1d07ace630f75b9187c3052b761b1" host="ci-4081-2-1-3-45a43b40ef" Dec 13 02:06:17.368505 containerd[1497]: 2024-12-13 02:06:17.329 [INFO][4298] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7633383347db8ff8619527348093bee3fea1d07ace630f75b9187c3052b761b1 Dec 13 02:06:17.368505 containerd[1497]: 2024-12-13 02:06:17.334 [INFO][4298] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.61.0/26 handle="k8s-pod-network.7633383347db8ff8619527348093bee3fea1d07ace630f75b9187c3052b761b1" host="ci-4081-2-1-3-45a43b40ef" Dec 13 02:06:17.368505 containerd[1497]: 2024-12-13 02:06:17.341 [INFO][4298] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.61.2/26] block=192.168.61.0/26 handle="k8s-pod-network.7633383347db8ff8619527348093bee3fea1d07ace630f75b9187c3052b761b1" host="ci-4081-2-1-3-45a43b40ef" Dec 13 02:06:17.368505 containerd[1497]: 2024-12-13 02:06:17.341 [INFO][4298] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.2/26] handle="k8s-pod-network.7633383347db8ff8619527348093bee3fea1d07ace630f75b9187c3052b761b1" host="ci-4081-2-1-3-45a43b40ef" Dec 13 02:06:17.368505 containerd[1497]: 2024-12-13 02:06:17.341 [INFO][4298] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
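
[Annotation] Every IPAM request in this trace brackets its work with the same three lines: "About to acquire host-wide IPAM lock", "Acquired...", "Released...". Concurrent CNI ADDs are serialized on the node so two sandboxes cannot claim the same address from one block. A conceptual Go sketch of that bracketing; the real plugin coordinates across processes, so the mutex here is only a stand-in:

package main

import (
	"fmt"
	"sync"
)

var hostWideIPAMLock sync.Mutex // stand-in for Calico's cross-process lock

// withIPAMLock reproduces the acquire/assign/release bracketing in the log.
func withIPAMLock(pod string, assign func() string) string {
	fmt.Println("About to acquire host-wide IPAM lock.")
	hostWideIPAMLock.Lock()
	fmt.Println("Acquired host-wide IPAM lock.")
	defer func() {
		fmt.Println("Released host-wide IPAM lock.")
		hostWideIPAMLock.Unlock()
	}()
	ip := assign()
	fmt.Printf("assigned %s to %s\n", ip, pod)
	return ip
}

func main() {
	next := 1
	alloc := func() string { ip := fmt.Sprintf("192.168.61.%d", next); next++; return ip }
	var wg sync.WaitGroup
	for _, pod := range []string{"coredns-76f75df574-hq4w8", "csi-node-driver-7r499"} {
		wg.Add(1)
		go func(p string) { defer wg.Done(); withIPAMLock(p, alloc) }(pod)
	}
	wg.Wait()
}
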
Dec 13 02:06:17.368505 containerd[1497]: 2024-12-13 02:06:17.341 [INFO][4298] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.2/26] IPv6=[] ContainerID="7633383347db8ff8619527348093bee3fea1d07ace630f75b9187c3052b761b1" HandleID="k8s-pod-network.7633383347db8ff8619527348093bee3fea1d07ace630f75b9187c3052b761b1" Workload="ci--4081--2--1--3--45a43b40ef-k8s-csi--node--driver--7r499-eth0" Dec 13 02:06:17.372139 containerd[1497]: 2024-12-13 02:06:17.344 [INFO][4286] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7633383347db8ff8619527348093bee3fea1d07ace630f75b9187c3052b761b1" Namespace="calico-system" Pod="csi-node-driver-7r499" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-csi--node--driver--7r499-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--3--45a43b40ef-k8s-csi--node--driver--7r499-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ef8df160-b623-42c6-abf0-520b155176f4", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 5, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-3-45a43b40ef", ContainerID:"", Pod:"csi-node-driver-7r499", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4f276ea8314", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:06:17.372139 containerd[1497]: 2024-12-13 02:06:17.344 [INFO][4286] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.61.2/32] ContainerID="7633383347db8ff8619527348093bee3fea1d07ace630f75b9187c3052b761b1" Namespace="calico-system" Pod="csi-node-driver-7r499" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-csi--node--driver--7r499-eth0" Dec 13 02:06:17.372139 containerd[1497]: 2024-12-13 02:06:17.344 [INFO][4286] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f276ea8314 ContainerID="7633383347db8ff8619527348093bee3fea1d07ace630f75b9187c3052b761b1" Namespace="calico-system" Pod="csi-node-driver-7r499" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-csi--node--driver--7r499-eth0" Dec 13 02:06:17.372139 containerd[1497]: 2024-12-13 02:06:17.349 [INFO][4286] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7633383347db8ff8619527348093bee3fea1d07ace630f75b9187c3052b761b1" Namespace="calico-system" Pod="csi-node-driver-7r499" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-csi--node--driver--7r499-eth0" Dec 13 02:06:17.372139 containerd[1497]: 2024-12-13 02:06:17.350 [INFO][4286] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7633383347db8ff8619527348093bee3fea1d07ace630f75b9187c3052b761b1" Namespace="calico-system" Pod="csi-node-driver-7r499" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-csi--node--driver--7r499-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--3--45a43b40ef-k8s-csi--node--driver--7r499-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ef8df160-b623-42c6-abf0-520b155176f4", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 5, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-3-45a43b40ef", ContainerID:"7633383347db8ff8619527348093bee3fea1d07ace630f75b9187c3052b761b1", Pod:"csi-node-driver-7r499", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4f276ea8314", MAC:"0a:fc:48:62:aa:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:06:17.372139 containerd[1497]: 2024-12-13 02:06:17.362 [INFO][4286] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7633383347db8ff8619527348093bee3fea1d07ace630f75b9187c3052b761b1" Namespace="calico-system" Pod="csi-node-driver-7r499" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-csi--node--driver--7r499-eth0" Dec 13 02:06:17.419722 systemd[1]: run-netns-cni\x2d9988dcf6\x2d1e16\x2d748d\x2dd05e\x2dd29cbd141e5d.mount: Deactivated successfully. Dec 13 02:06:17.441527 containerd[1497]: time="2024-12-13T02:06:17.439813568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:06:17.441527 containerd[1497]: time="2024-12-13T02:06:17.439875065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:06:17.441527 containerd[1497]: time="2024-12-13T02:06:17.439918578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:06:17.441527 containerd[1497]: time="2024-12-13T02:06:17.440056530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:06:17.472458 kubelet[2808]: I1213 02:06:17.472411 2808 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-hq4w8" podStartSLOduration=31.472349281 podStartE2EDuration="31.472349281s" podCreationTimestamp="2024-12-13 02:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:06:17.471356226 +0000 UTC m=+44.580237983" watchObservedRunningTime="2024-12-13 02:06:17.472349281 +0000 UTC m=+44.581231038" Dec 13 02:06:17.473518 systemd[1]: Started cri-containerd-7633383347db8ff8619527348093bee3fea1d07ace630f75b9187c3052b761b1.scope - libcontainer container 7633383347db8ff8619527348093bee3fea1d07ace630f75b9187c3052b761b1. Dec 13 02:06:17.522723 containerd[1497]: time="2024-12-13T02:06:17.522374199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7r499,Uid:ef8df160-b623-42c6-abf0-520b155176f4,Namespace:calico-system,Attempt:1,} returns sandbox id \"7633383347db8ff8619527348093bee3fea1d07ace630f75b9187c3052b761b1\"" Dec 13 02:06:17.534568 containerd[1497]: time="2024-12-13T02:06:17.534513988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 02:06:17.798689 systemd-networkd[1398]: vxlan.calico: Gained IPv6LL Dec 13 02:06:17.862537 systemd-networkd[1398]: cali059ff5373c7: Gained IPv6LL Dec 13 02:06:18.060081 containerd[1497]: time="2024-12-13T02:06:18.059152742Z" level=info msg="StopPodSandbox for \"b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab\"" Dec 13 02:06:18.060699 containerd[1497]: time="2024-12-13T02:06:18.060662579Z" level=info msg="StopPodSandbox for \"4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0\"" Dec 13 02:06:18.213120 containerd[1497]: 2024-12-13 02:06:18.160 [INFO][4389] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" Dec 13 02:06:18.213120 containerd[1497]: 2024-12-13 02:06:18.160 [INFO][4389] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" iface="eth0" netns="/var/run/netns/cni-f426dc62-d258-2d92-200e-496444eed569" Dec 13 02:06:18.213120 containerd[1497]: 2024-12-13 02:06:18.161 [INFO][4389] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" iface="eth0" netns="/var/run/netns/cni-f426dc62-d258-2d92-200e-496444eed569" Dec 13 02:06:18.213120 containerd[1497]: 2024-12-13 02:06:18.162 [INFO][4389] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" iface="eth0" netns="/var/run/netns/cni-f426dc62-d258-2d92-200e-496444eed569" Dec 13 02:06:18.213120 containerd[1497]: 2024-12-13 02:06:18.162 [INFO][4389] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" Dec 13 02:06:18.213120 containerd[1497]: 2024-12-13 02:06:18.162 [INFO][4389] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" Dec 13 02:06:18.213120 containerd[1497]: 2024-12-13 02:06:18.199 [INFO][4402] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" HandleID="k8s-pod-network.4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" Workload="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--9jrk5-eth0" Dec 13 02:06:18.213120 containerd[1497]: 2024-12-13 02:06:18.199 [INFO][4402] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:06:18.213120 containerd[1497]: 2024-12-13 02:06:18.199 [INFO][4402] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:06:18.213120 containerd[1497]: 2024-12-13 02:06:18.205 [WARNING][4402] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" HandleID="k8s-pod-network.4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" Workload="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--9jrk5-eth0" Dec 13 02:06:18.213120 containerd[1497]: 2024-12-13 02:06:18.205 [INFO][4402] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" HandleID="k8s-pod-network.4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" Workload="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--9jrk5-eth0" Dec 13 02:06:18.213120 containerd[1497]: 2024-12-13 02:06:18.207 [INFO][4402] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:06:18.213120 containerd[1497]: 2024-12-13 02:06:18.210 [INFO][4389] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" Dec 13 02:06:18.219863 containerd[1497]: time="2024-12-13T02:06:18.214260759Z" level=info msg="TearDown network for sandbox \"4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0\" successfully" Dec 13 02:06:18.219863 containerd[1497]: time="2024-12-13T02:06:18.214289202Z" level=info msg="StopPodSandbox for \"4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0\" returns successfully" Dec 13 02:06:18.219863 containerd[1497]: time="2024-12-13T02:06:18.218278226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9jrk5,Uid:14a5304a-ad78-4750-a5d4-86e0afbc564d,Namespace:kube-system,Attempt:1,}" Dec 13 02:06:18.221259 systemd[1]: run-netns-cni\x2df426dc62\x2dd258\x2d2d92\x2d200e\x2d496444eed569.mount: Deactivated successfully. Dec 13 02:06:18.235684 containerd[1497]: 2024-12-13 02:06:18.177 [INFO][4390] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" Dec 13 02:06:18.235684 containerd[1497]: 2024-12-13 02:06:18.177 [INFO][4390] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" iface="eth0" netns="/var/run/netns/cni-ae3cc3b1-d226-09e5-c957-4db7f183c17f" Dec 13 02:06:18.235684 containerd[1497]: 2024-12-13 02:06:18.178 [INFO][4390] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" iface="eth0" netns="/var/run/netns/cni-ae3cc3b1-d226-09e5-c957-4db7f183c17f" Dec 13 02:06:18.235684 containerd[1497]: 2024-12-13 02:06:18.180 [INFO][4390] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" iface="eth0" netns="/var/run/netns/cni-ae3cc3b1-d226-09e5-c957-4db7f183c17f" Dec 13 02:06:18.235684 containerd[1497]: 2024-12-13 02:06:18.180 [INFO][4390] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" Dec 13 02:06:18.235684 containerd[1497]: 2024-12-13 02:06:18.180 [INFO][4390] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" Dec 13 02:06:18.235684 containerd[1497]: 2024-12-13 02:06:18.212 [INFO][4407] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" HandleID="k8s-pod-network.b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--kube--controllers--6968b48768--kvc6c-eth0" Dec 13 02:06:18.235684 containerd[1497]: 2024-12-13 02:06:18.212 [INFO][4407] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:06:18.235684 containerd[1497]: 2024-12-13 02:06:18.212 [INFO][4407] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:06:18.235684 containerd[1497]: 2024-12-13 02:06:18.225 [WARNING][4407] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" HandleID="k8s-pod-network.b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--kube--controllers--6968b48768--kvc6c-eth0" Dec 13 02:06:18.235684 containerd[1497]: 2024-12-13 02:06:18.225 [INFO][4407] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" HandleID="k8s-pod-network.b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--kube--controllers--6968b48768--kvc6c-eth0" Dec 13 02:06:18.235684 containerd[1497]: 2024-12-13 02:06:18.227 [INFO][4407] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:06:18.235684 containerd[1497]: 2024-12-13 02:06:18.230 [INFO][4390] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" Dec 13 02:06:18.238719 containerd[1497]: time="2024-12-13T02:06:18.238196907Z" level=info msg="TearDown network for sandbox \"b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab\" successfully" Dec 13 02:06:18.238719 containerd[1497]: time="2024-12-13T02:06:18.238226011Z" level=info msg="StopPodSandbox for \"b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab\" returns successfully" Dec 13 02:06:18.238987 containerd[1497]: time="2024-12-13T02:06:18.238968892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6968b48768-kvc6c,Uid:1ecf16d6-9a75-45ea-bf2b-4a4e040aadee,Namespace:calico-system,Attempt:1,}" Dec 13 02:06:18.240353 systemd[1]: run-netns-cni\x2dae3cc3b1\x2dd226\x2d09e5\x2dc957\x2d4db7f183c17f.mount: Deactivated successfully. Dec 13 02:06:18.381588 systemd-networkd[1398]: calicb2ea961f68: Link UP Dec 13 02:06:18.383531 systemd-networkd[1398]: calicb2ea961f68: Gained carrier Dec 13 02:06:18.404402 containerd[1497]: 2024-12-13 02:06:18.301 [INFO][4414] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--3--45a43b40ef-k8s-calico--kube--controllers--6968b48768--kvc6c-eth0 calico-kube-controllers-6968b48768- calico-system 1ecf16d6-9a75-45ea-bf2b-4a4e040aadee 805 0 2024-12-13 02:05:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6968b48768 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-2-1-3-45a43b40ef calico-kube-controllers-6968b48768-kvc6c eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicb2ea961f68 [] []}} ContainerID="8b0e9f7f1764b82d6d333f9083522b5cf1d64cb81f1854f3b48de7fadcbcbe2a" Namespace="calico-system" Pod="calico-kube-controllers-6968b48768-kvc6c" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-calico--kube--controllers--6968b48768--kvc6c-" Dec 13 02:06:18.404402 containerd[1497]: 2024-12-13 02:06:18.303 [INFO][4414] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8b0e9f7f1764b82d6d333f9083522b5cf1d64cb81f1854f3b48de7fadcbcbe2a" Namespace="calico-system" Pod="calico-kube-controllers-6968b48768-kvc6c" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-calico--kube--controllers--6968b48768--kvc6c-eth0" Dec 13 02:06:18.404402 containerd[1497]: 2024-12-13 02:06:18.342 [INFO][4439] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b0e9f7f1764b82d6d333f9083522b5cf1d64cb81f1854f3b48de7fadcbcbe2a" HandleID="k8s-pod-network.8b0e9f7f1764b82d6d333f9083522b5cf1d64cb81f1854f3b48de7fadcbcbe2a" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--kube--controllers--6968b48768--kvc6c-eth0" Dec 13 02:06:18.404402 containerd[1497]: 2024-12-13 02:06:18.351 [INFO][4439] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8b0e9f7f1764b82d6d333f9083522b5cf1d64cb81f1854f3b48de7fadcbcbe2a" HandleID="k8s-pod-network.8b0e9f7f1764b82d6d333f9083522b5cf1d64cb81f1854f3b48de7fadcbcbe2a" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--kube--controllers--6968b48768--kvc6c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332f50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-1-3-45a43b40ef", "pod":"calico-kube-controllers-6968b48768-kvc6c", 
"timestamp":"2024-12-13 02:06:18.342210428 +0000 UTC"}, Hostname:"ci-4081-2-1-3-45a43b40ef", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:06:18.404402 containerd[1497]: 2024-12-13 02:06:18.351 [INFO][4439] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:06:18.404402 containerd[1497]: 2024-12-13 02:06:18.351 [INFO][4439] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:06:18.404402 containerd[1497]: 2024-12-13 02:06:18.351 [INFO][4439] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-3-45a43b40ef' Dec 13 02:06:18.404402 containerd[1497]: 2024-12-13 02:06:18.353 [INFO][4439] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8b0e9f7f1764b82d6d333f9083522b5cf1d64cb81f1854f3b48de7fadcbcbe2a" host="ci-4081-2-1-3-45a43b40ef" Dec 13 02:06:18.404402 containerd[1497]: 2024-12-13 02:06:18.357 [INFO][4439] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-3-45a43b40ef" Dec 13 02:06:18.404402 containerd[1497]: 2024-12-13 02:06:18.361 [INFO][4439] ipam/ipam.go 489: Trying affinity for 192.168.61.0/26 host="ci-4081-2-1-3-45a43b40ef" Dec 13 02:06:18.404402 containerd[1497]: 2024-12-13 02:06:18.363 [INFO][4439] ipam/ipam.go 155: Attempting to load block cidr=192.168.61.0/26 host="ci-4081-2-1-3-45a43b40ef" Dec 13 02:06:18.404402 containerd[1497]: 2024-12-13 02:06:18.365 [INFO][4439] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.0/26 host="ci-4081-2-1-3-45a43b40ef" Dec 13 02:06:18.404402 containerd[1497]: 2024-12-13 02:06:18.365 [INFO][4439] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.0/26 handle="k8s-pod-network.8b0e9f7f1764b82d6d333f9083522b5cf1d64cb81f1854f3b48de7fadcbcbe2a" host="ci-4081-2-1-3-45a43b40ef" Dec 13 02:06:18.404402 containerd[1497]: 2024-12-13 02:06:18.366 [INFO][4439] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8b0e9f7f1764b82d6d333f9083522b5cf1d64cb81f1854f3b48de7fadcbcbe2a Dec 13 02:06:18.404402 containerd[1497]: 2024-12-13 02:06:18.370 [INFO][4439] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.61.0/26 handle="k8s-pod-network.8b0e9f7f1764b82d6d333f9083522b5cf1d64cb81f1854f3b48de7fadcbcbe2a" host="ci-4081-2-1-3-45a43b40ef" Dec 13 02:06:18.404402 containerd[1497]: 2024-12-13 02:06:18.375 [INFO][4439] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.61.3/26] block=192.168.61.0/26 handle="k8s-pod-network.8b0e9f7f1764b82d6d333f9083522b5cf1d64cb81f1854f3b48de7fadcbcbe2a" host="ci-4081-2-1-3-45a43b40ef" Dec 13 02:06:18.404402 containerd[1497]: 2024-12-13 02:06:18.375 [INFO][4439] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.3/26] handle="k8s-pod-network.8b0e9f7f1764b82d6d333f9083522b5cf1d64cb81f1854f3b48de7fadcbcbe2a" host="ci-4081-2-1-3-45a43b40ef" Dec 13 02:06:18.404402 containerd[1497]: 2024-12-13 02:06:18.375 [INFO][4439] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 02:06:18.404402 containerd[1497]: 2024-12-13 02:06:18.375 [INFO][4439] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.3/26] IPv6=[] ContainerID="8b0e9f7f1764b82d6d333f9083522b5cf1d64cb81f1854f3b48de7fadcbcbe2a" HandleID="k8s-pod-network.8b0e9f7f1764b82d6d333f9083522b5cf1d64cb81f1854f3b48de7fadcbcbe2a" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--kube--controllers--6968b48768--kvc6c-eth0"
Dec 13 02:06:18.405930 containerd[1497]: 2024-12-13 02:06:18.378 [INFO][4414] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8b0e9f7f1764b82d6d333f9083522b5cf1d64cb81f1854f3b48de7fadcbcbe2a" Namespace="calico-system" Pod="calico-kube-controllers-6968b48768-kvc6c" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-calico--kube--controllers--6968b48768--kvc6c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--3--45a43b40ef-k8s-calico--kube--controllers--6968b48768--kvc6c-eth0", GenerateName:"calico-kube-controllers-6968b48768-", Namespace:"calico-system", SelfLink:"", UID:"1ecf16d6-9a75-45ea-bf2b-4a4e040aadee", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 5, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6968b48768", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-3-45a43b40ef", ContainerID:"", Pod:"calico-kube-controllers-6968b48768-kvc6c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicb2ea961f68", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 02:06:18.405930 containerd[1497]: 2024-12-13 02:06:18.378 [INFO][4414] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.61.3/32] ContainerID="8b0e9f7f1764b82d6d333f9083522b5cf1d64cb81f1854f3b48de7fadcbcbe2a" Namespace="calico-system" Pod="calico-kube-controllers-6968b48768-kvc6c" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-calico--kube--controllers--6968b48768--kvc6c-eth0"
Dec 13 02:06:18.405930 containerd[1497]: 2024-12-13 02:06:18.378 [INFO][4414] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb2ea961f68 ContainerID="8b0e9f7f1764b82d6d333f9083522b5cf1d64cb81f1854f3b48de7fadcbcbe2a" Namespace="calico-system" Pod="calico-kube-controllers-6968b48768-kvc6c" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-calico--kube--controllers--6968b48768--kvc6c-eth0"
Dec 13 02:06:18.405930 containerd[1497]: 2024-12-13 02:06:18.383 [INFO][4414] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8b0e9f7f1764b82d6d333f9083522b5cf1d64cb81f1854f3b48de7fadcbcbe2a" Namespace="calico-system" Pod="calico-kube-controllers-6968b48768-kvc6c" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-calico--kube--controllers--6968b48768--kvc6c-eth0"
Dec 13 02:06:18.405930 containerd[1497]: 2024-12-13 02:06:18.383 [INFO][4414] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8b0e9f7f1764b82d6d333f9083522b5cf1d64cb81f1854f3b48de7fadcbcbe2a" Namespace="calico-system" Pod="calico-kube-controllers-6968b48768-kvc6c" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-calico--kube--controllers--6968b48768--kvc6c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--3--45a43b40ef-k8s-calico--kube--controllers--6968b48768--kvc6c-eth0", GenerateName:"calico-kube-controllers-6968b48768-", Namespace:"calico-system", SelfLink:"", UID:"1ecf16d6-9a75-45ea-bf2b-4a4e040aadee", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 5, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6968b48768", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-3-45a43b40ef", ContainerID:"8b0e9f7f1764b82d6d333f9083522b5cf1d64cb81f1854f3b48de7fadcbcbe2a", Pod:"calico-kube-controllers-6968b48768-kvc6c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicb2ea961f68", MAC:"ea:73:8c:4b:ef:5d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 02:06:18.405930 containerd[1497]: 2024-12-13 02:06:18.396 [INFO][4414] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8b0e9f7f1764b82d6d333f9083522b5cf1d64cb81f1854f3b48de7fadcbcbe2a" Namespace="calico-system" Pod="calico-kube-controllers-6968b48768-kvc6c" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-calico--kube--controllers--6968b48768--kvc6c-eth0"
Dec 13 02:06:18.456031 systemd-networkd[1398]: calid032e899688: Link UP
Dec 13 02:06:18.457269 systemd-networkd[1398]: calid032e899688: Gained carrier
Dec 13 02:06:18.467322 containerd[1497]: time="2024-12-13T02:06:18.466080362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:06:18.472575 containerd[1497]: time="2024-12-13T02:06:18.467342578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:06:18.472575 containerd[1497]: time="2024-12-13T02:06:18.468578355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:06:18.472575 containerd[1497]: time="2024-12-13T02:06:18.472204821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:06:18.510926 containerd[1497]: 2024-12-13 02:06:18.310 [INFO][4426] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--9jrk5-eth0 coredns-76f75df574- kube-system 14a5304a-ad78-4750-a5d4-86e0afbc564d 803 0 2024-12-13 02:05:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-1-3-45a43b40ef coredns-76f75df574-9jrk5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid032e899688 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="abe401820b214df53720e5cb3c0364f7b858b878ad8091f0615c25cb5eb0e4de" Namespace="kube-system" Pod="coredns-76f75df574-9jrk5" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--9jrk5-"
Dec 13 02:06:18.510926 containerd[1497]: 2024-12-13 02:06:18.310 [INFO][4426] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="abe401820b214df53720e5cb3c0364f7b858b878ad8091f0615c25cb5eb0e4de" Namespace="kube-system" Pod="coredns-76f75df574-9jrk5" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--9jrk5-eth0"
Dec 13 02:06:18.510926 containerd[1497]: 2024-12-13 02:06:18.344 [INFO][4440] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="abe401820b214df53720e5cb3c0364f7b858b878ad8091f0615c25cb5eb0e4de" HandleID="k8s-pod-network.abe401820b214df53720e5cb3c0364f7b858b878ad8091f0615c25cb5eb0e4de" Workload="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--9jrk5-eth0"
Dec 13 02:06:18.510926 containerd[1497]: 2024-12-13 02:06:18.354 [INFO][4440] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="abe401820b214df53720e5cb3c0364f7b858b878ad8091f0615c25cb5eb0e4de" HandleID="k8s-pod-network.abe401820b214df53720e5cb3c0364f7b858b878ad8091f0615c25cb5eb0e4de" Workload="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--9jrk5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000310ee0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-1-3-45a43b40ef", "pod":"coredns-76f75df574-9jrk5", "timestamp":"2024-12-13 02:06:18.344713531 +0000 UTC"}, Hostname:"ci-4081-2-1-3-45a43b40ef", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 02:06:18.510926 containerd[1497]: 2024-12-13 02:06:18.354 [INFO][4440] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 02:06:18.510926 containerd[1497]: 2024-12-13 02:06:18.375 [INFO][4440] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 02:06:18.510926 containerd[1497]: 2024-12-13 02:06:18.375 [INFO][4440] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-3-45a43b40ef'
Dec 13 02:06:18.510926 containerd[1497]: 2024-12-13 02:06:18.378 [INFO][4440] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.abe401820b214df53720e5cb3c0364f7b858b878ad8091f0615c25cb5eb0e4de" host="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:06:18.510926 containerd[1497]: 2024-12-13 02:06:18.386 [INFO][4440] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:06:18.510926 containerd[1497]: 2024-12-13 02:06:18.396 [INFO][4440] ipam/ipam.go 489: Trying affinity for 192.168.61.0/26 host="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:06:18.510926 containerd[1497]: 2024-12-13 02:06:18.403 [INFO][4440] ipam/ipam.go 155: Attempting to load block cidr=192.168.61.0/26 host="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:06:18.510926 containerd[1497]: 2024-12-13 02:06:18.408 [INFO][4440] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.0/26 host="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:06:18.510926 containerd[1497]: 2024-12-13 02:06:18.408 [INFO][4440] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.0/26 handle="k8s-pod-network.abe401820b214df53720e5cb3c0364f7b858b878ad8091f0615c25cb5eb0e4de" host="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:06:18.510926 containerd[1497]: 2024-12-13 02:06:18.413 [INFO][4440] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.abe401820b214df53720e5cb3c0364f7b858b878ad8091f0615c25cb5eb0e4de
Dec 13 02:06:18.510926 containerd[1497]: 2024-12-13 02:06:18.421 [INFO][4440] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.61.0/26 handle="k8s-pod-network.abe401820b214df53720e5cb3c0364f7b858b878ad8091f0615c25cb5eb0e4de" host="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:06:18.510926 containerd[1497]: 2024-12-13 02:06:18.430 [INFO][4440] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.61.4/26] block=192.168.61.0/26 handle="k8s-pod-network.abe401820b214df53720e5cb3c0364f7b858b878ad8091f0615c25cb5eb0e4de" host="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:06:18.510926 containerd[1497]: 2024-12-13 02:06:18.430 [INFO][4440] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.4/26] handle="k8s-pod-network.abe401820b214df53720e5cb3c0364f7b858b878ad8091f0615c25cb5eb0e4de" host="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:06:18.510926 containerd[1497]: 2024-12-13 02:06:18.430 [INFO][4440] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 02:06:18.510926 containerd[1497]: 2024-12-13 02:06:18.430 [INFO][4440] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.4/26] IPv6=[] ContainerID="abe401820b214df53720e5cb3c0364f7b858b878ad8091f0615c25cb5eb0e4de" HandleID="k8s-pod-network.abe401820b214df53720e5cb3c0364f7b858b878ad8091f0615c25cb5eb0e4de" Workload="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--9jrk5-eth0"
Dec 13 02:06:18.511573 containerd[1497]: 2024-12-13 02:06:18.434 [INFO][4426] cni-plugin/k8s.go 386: Populated endpoint ContainerID="abe401820b214df53720e5cb3c0364f7b858b878ad8091f0615c25cb5eb0e4de" Namespace="kube-system" Pod="coredns-76f75df574-9jrk5" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--9jrk5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--9jrk5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"14a5304a-ad78-4750-a5d4-86e0afbc564d", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 5, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-3-45a43b40ef", ContainerID:"", Pod:"coredns-76f75df574-9jrk5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid032e899688", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 02:06:18.511573 containerd[1497]: 2024-12-13 02:06:18.434 [INFO][4426] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.61.4/32] ContainerID="abe401820b214df53720e5cb3c0364f7b858b878ad8091f0615c25cb5eb0e4de" Namespace="kube-system" Pod="coredns-76f75df574-9jrk5" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--9jrk5-eth0"
Dec 13 02:06:18.511573 containerd[1497]: 2024-12-13 02:06:18.434 [INFO][4426] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid032e899688 ContainerID="abe401820b214df53720e5cb3c0364f7b858b878ad8091f0615c25cb5eb0e4de" Namespace="kube-system" Pod="coredns-76f75df574-9jrk5" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--9jrk5-eth0"
Dec 13 02:06:18.511573 containerd[1497]: 2024-12-13 02:06:18.457 [INFO][4426] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="abe401820b214df53720e5cb3c0364f7b858b878ad8091f0615c25cb5eb0e4de" Namespace="kube-system" Pod="coredns-76f75df574-9jrk5" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--9jrk5-eth0"
Dec 13 02:06:18.511573 containerd[1497]: 2024-12-13 02:06:18.459 [INFO][4426] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="abe401820b214df53720e5cb3c0364f7b858b878ad8091f0615c25cb5eb0e4de" Namespace="kube-system" Pod="coredns-76f75df574-9jrk5" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--9jrk5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--9jrk5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"14a5304a-ad78-4750-a5d4-86e0afbc564d", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 5, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-3-45a43b40ef", ContainerID:"abe401820b214df53720e5cb3c0364f7b858b878ad8091f0615c25cb5eb0e4de", Pod:"coredns-76f75df574-9jrk5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid032e899688", MAC:"4e:2a:ad:e4:f8:d3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 02:06:18.511573 containerd[1497]: 2024-12-13 02:06:18.486 [INFO][4426] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="abe401820b214df53720e5cb3c0364f7b858b878ad8091f0615c25cb5eb0e4de" Namespace="kube-system" Pod="coredns-76f75df574-9jrk5" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--9jrk5-eth0"
Dec 13 02:06:18.515225 systemd[1]: Started cri-containerd-8b0e9f7f1764b82d6d333f9083522b5cf1d64cb81f1854f3b48de7fadcbcbe2a.scope - libcontainer container 8b0e9f7f1764b82d6d333f9083522b5cf1d64cb81f1854f3b48de7fadcbcbe2a.
Dec 13 02:06:18.552031 containerd[1497]: time="2024-12-13T02:06:18.551364893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:06:18.553309 containerd[1497]: time="2024-12-13T02:06:18.551451577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:06:18.553309 containerd[1497]: time="2024-12-13T02:06:18.553096030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:06:18.553458 containerd[1497]: time="2024-12-13T02:06:18.553378266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:06:18.594151 containerd[1497]: time="2024-12-13T02:06:18.593822067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6968b48768-kvc6c,Uid:1ecf16d6-9a75-45ea-bf2b-4a4e040aadee,Namespace:calico-system,Attempt:1,} returns sandbox id \"8b0e9f7f1764b82d6d333f9083522b5cf1d64cb81f1854f3b48de7fadcbcbe2a\""
Dec 13 02:06:18.604154 systemd[1]: Started cri-containerd-abe401820b214df53720e5cb3c0364f7b858b878ad8091f0615c25cb5eb0e4de.scope - libcontainer container abe401820b214df53720e5cb3c0364f7b858b878ad8091f0615c25cb5eb0e4de.
Dec 13 02:06:18.646248 containerd[1497]: time="2024-12-13T02:06:18.646003285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9jrk5,Uid:14a5304a-ad78-4750-a5d4-86e0afbc564d,Namespace:kube-system,Attempt:1,} returns sandbox id \"abe401820b214df53720e5cb3c0364f7b858b878ad8091f0615c25cb5eb0e4de\""
Dec 13 02:06:18.651569 containerd[1497]: time="2024-12-13T02:06:18.651325550Z" level=info msg="CreateContainer within sandbox \"abe401820b214df53720e5cb3c0364f7b858b878ad8091f0615c25cb5eb0e4de\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 02:06:18.667332 containerd[1497]: time="2024-12-13T02:06:18.667305491Z" level=info msg="CreateContainer within sandbox \"abe401820b214df53720e5cb3c0364f7b858b878ad8091f0615c25cb5eb0e4de\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e098968da159a5d781bf9bd8292ec3e3d44ca85ac872dbfcbdbdc977a6a69274\""
Dec 13 02:06:18.668907 containerd[1497]: time="2024-12-13T02:06:18.668198307Z" level=info msg="StartContainer for \"e098968da159a5d781bf9bd8292ec3e3d44ca85ac872dbfcbdbdc977a6a69274\""
Dec 13 02:06:18.696171 systemd[1]: Started cri-containerd-e098968da159a5d781bf9bd8292ec3e3d44ca85ac872dbfcbdbdc977a6a69274.scope - libcontainer container e098968da159a5d781bf9bd8292ec3e3d44ca85ac872dbfcbdbdc977a6a69274.
Dec 13 02:06:18.727105 containerd[1497]: time="2024-12-13T02:06:18.727062543Z" level=info msg="StartContainer for \"e098968da159a5d781bf9bd8292ec3e3d44ca85ac872dbfcbdbdc977a6a69274\" returns successfully"
Dec 13 02:06:18.950461 systemd-networkd[1398]: cali4f276ea8314: Gained IPv6LL
Dec 13 02:06:19.060037 containerd[1497]: time="2024-12-13T02:06:19.059358594Z" level=info msg="StopPodSandbox for \"30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff\""
Dec 13 02:06:19.308135 containerd[1497]: 2024-12-13 02:06:19.182 [INFO][4612] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff"
Dec 13 02:06:19.308135 containerd[1497]: 2024-12-13 02:06:19.182 [INFO][4612] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" iface="eth0" netns="/var/run/netns/cni-64f49b14-c6a2-b6e1-f260-822c2556fa0e"
Dec 13 02:06:19.308135 containerd[1497]: 2024-12-13 02:06:19.183 [INFO][4612] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" iface="eth0" netns="/var/run/netns/cni-64f49b14-c6a2-b6e1-f260-822c2556fa0e"
Dec 13 02:06:19.308135 containerd[1497]: 2024-12-13 02:06:19.183 [INFO][4612] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" iface="eth0" netns="/var/run/netns/cni-64f49b14-c6a2-b6e1-f260-822c2556fa0e"
Dec 13 02:06:19.308135 containerd[1497]: 2024-12-13 02:06:19.183 [INFO][4612] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff"
Dec 13 02:06:19.308135 containerd[1497]: 2024-12-13 02:06:19.183 [INFO][4612] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff"
Dec 13 02:06:19.308135 containerd[1497]: 2024-12-13 02:06:19.229 [INFO][4618] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" HandleID="k8s-pod-network.30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--b9l5l-eth0"
Dec 13 02:06:19.308135 containerd[1497]: 2024-12-13 02:06:19.229 [INFO][4618] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 02:06:19.308135 containerd[1497]: 2024-12-13 02:06:19.230 [INFO][4618] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 02:06:19.308135 containerd[1497]: 2024-12-13 02:06:19.243 [WARNING][4618] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" HandleID="k8s-pod-network.30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--b9l5l-eth0"
Dec 13 02:06:19.308135 containerd[1497]: 2024-12-13 02:06:19.243 [INFO][4618] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" HandleID="k8s-pod-network.30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--b9l5l-eth0"
Dec 13 02:06:19.308135 containerd[1497]: 2024-12-13 02:06:19.244 [INFO][4618] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 02:06:19.308135 containerd[1497]: 2024-12-13 02:06:19.250 [INFO][4612] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff"
Dec 13 02:06:19.309838 containerd[1497]: time="2024-12-13T02:06:19.308322500Z" level=info msg="TearDown network for sandbox \"30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff\" successfully"
Dec 13 02:06:19.309838 containerd[1497]: time="2024-12-13T02:06:19.308351244Z" level=info msg="StopPodSandbox for \"30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff\" returns successfully"
Dec 13 02:06:19.311097 containerd[1497]: time="2024-12-13T02:06:19.310176650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc5fc95d-b9l5l,Uid:54fc91ff-1a5b-460c-b104-ee7484562222,Namespace:calico-apiserver,Attempt:1,}"
Dec 13 02:06:19.415026 systemd[1]: run-containerd-runc-k8s.io-abe401820b214df53720e5cb3c0364f7b858b878ad8091f0615c25cb5eb0e4de-runc.ZMcUHp.mount: Deactivated successfully.
Dec 13 02:06:19.415175 systemd[1]: run-netns-cni\x2d64f49b14\x2dc6a2\x2db6e1\x2df260\x2d822c2556fa0e.mount: Deactivated successfully.
Dec 13 02:06:19.463322 containerd[1497]: time="2024-12-13T02:06:19.463087999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:06:19.465001 containerd[1497]: time="2024-12-13T02:06:19.464972218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632"
Dec 13 02:06:19.467004 containerd[1497]: time="2024-12-13T02:06:19.466983707Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:06:19.470355 containerd[1497]: time="2024-12-13T02:06:19.470221064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:06:19.471918 containerd[1497]: time="2024-12-13T02:06:19.471894783Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.937332712s"
Dec 13 02:06:19.472282 containerd[1497]: time="2024-12-13T02:06:19.472002677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Dec 13 02:06:19.473843 containerd[1497]: time="2024-12-13T02:06:19.473673490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\""
Dec 13 02:06:19.475272 containerd[1497]: time="2024-12-13T02:06:19.475150735Z" level=info msg="CreateContainer within sandbox \"7633383347db8ff8619527348093bee3fea1d07ace630f75b9187c3052b761b1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Dec 13 02:06:19.485417 kubelet[2808]: I1213 02:06:19.484722 2808 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-9jrk5" podStartSLOduration=33.484674599 podStartE2EDuration="33.484674599s" podCreationTimestamp="2024-12-13 02:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:06:19.482762879 +0000 UTC m=+46.591644656" watchObservedRunningTime="2024-12-13 02:06:19.484674599 +0000 UTC m=+46.593556356"
Dec 13 02:06:19.491580 systemd-networkd[1398]: calidc46127ec77: Link UP
Dec 13 02:06:19.496158 systemd-networkd[1398]: calidc46127ec77: Gained carrier
Dec 13 02:06:19.528256 containerd[1497]: 2024-12-13 02:06:19.371 [INFO][4628] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--b9l5l-eth0 calico-apiserver-6fc5fc95d- calico-apiserver 54fc91ff-1a5b-460c-b104-ee7484562222 817 0 2024-12-13 02:05:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fc5fc95d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-1-3-45a43b40ef calico-apiserver-6fc5fc95d-b9l5l eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidc46127ec77 [] []}} ContainerID="fb69b941cbcaf183990bd882b6ca67fd7ba07a5d08c605ef54d7d8f8dec07d62" Namespace="calico-apiserver" Pod="calico-apiserver-6fc5fc95d-b9l5l" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--b9l5l-"
Dec 13 02:06:19.528256 containerd[1497]: 2024-12-13 02:06:19.371 [INFO][4628] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fb69b941cbcaf183990bd882b6ca67fd7ba07a5d08c605ef54d7d8f8dec07d62" Namespace="calico-apiserver" Pod="calico-apiserver-6fc5fc95d-b9l5l" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--b9l5l-eth0"
Dec 13 02:06:19.528256 containerd[1497]: 2024-12-13 02:06:19.406 [INFO][4642] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb69b941cbcaf183990bd882b6ca67fd7ba07a5d08c605ef54d7d8f8dec07d62" HandleID="k8s-pod-network.fb69b941cbcaf183990bd882b6ca67fd7ba07a5d08c605ef54d7d8f8dec07d62" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--b9l5l-eth0"
Dec 13 02:06:19.528256 containerd[1497]: 2024-12-13 02:06:19.423 [INFO][4642] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fb69b941cbcaf183990bd882b6ca67fd7ba07a5d08c605ef54d7d8f8dec07d62" HandleID="k8s-pod-network.fb69b941cbcaf183990bd882b6ca67fd7ba07a5d08c605ef54d7d8f8dec07d62" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--b9l5l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290700), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-1-3-45a43b40ef", "pod":"calico-apiserver-6fc5fc95d-b9l5l", "timestamp":"2024-12-13 02:06:19.40600456 +0000 UTC"}, Hostname:"ci-4081-2-1-3-45a43b40ef", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 02:06:19.528256 containerd[1497]: 2024-12-13 02:06:19.424 [INFO][4642] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 02:06:19.528256 containerd[1497]: 2024-12-13 02:06:19.424 [INFO][4642] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 02:06:19.528256 containerd[1497]: 2024-12-13 02:06:19.424 [INFO][4642] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-3-45a43b40ef'
Dec 13 02:06:19.528256 containerd[1497]: 2024-12-13 02:06:19.425 [INFO][4642] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fb69b941cbcaf183990bd882b6ca67fd7ba07a5d08c605ef54d7d8f8dec07d62" host="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:06:19.528256 containerd[1497]: 2024-12-13 02:06:19.432 [INFO][4642] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:06:19.528256 containerd[1497]: 2024-12-13 02:06:19.437 [INFO][4642] ipam/ipam.go 489: Trying affinity for 192.168.61.0/26 host="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:06:19.528256 containerd[1497]: 2024-12-13 02:06:19.439 [INFO][4642] ipam/ipam.go 155: Attempting to load block cidr=192.168.61.0/26 host="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:06:19.528256 containerd[1497]: 2024-12-13 02:06:19.442 [INFO][4642] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.0/26 host="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:06:19.528256 containerd[1497]: 2024-12-13 02:06:19.442 [INFO][4642] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.0/26 handle="k8s-pod-network.fb69b941cbcaf183990bd882b6ca67fd7ba07a5d08c605ef54d7d8f8dec07d62" host="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:06:19.528256 containerd[1497]: 2024-12-13 02:06:19.443 [INFO][4642] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fb69b941cbcaf183990bd882b6ca67fd7ba07a5d08c605ef54d7d8f8dec07d62
Dec 13 02:06:19.528256 containerd[1497]: 2024-12-13 02:06:19.448 [INFO][4642] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.61.0/26 handle="k8s-pod-network.fb69b941cbcaf183990bd882b6ca67fd7ba07a5d08c605ef54d7d8f8dec07d62" host="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:06:19.528256 containerd[1497]: 2024-12-13 02:06:19.457 [INFO][4642] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.61.5/26] block=192.168.61.0/26 handle="k8s-pod-network.fb69b941cbcaf183990bd882b6ca67fd7ba07a5d08c605ef54d7d8f8dec07d62" host="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:06:19.528256 containerd[1497]: 2024-12-13 02:06:19.458 [INFO][4642] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.5/26] handle="k8s-pod-network.fb69b941cbcaf183990bd882b6ca67fd7ba07a5d08c605ef54d7d8f8dec07d62" host="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:06:19.528256 containerd[1497]: 2024-12-13 02:06:19.458 [INFO][4642] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 02:06:19.528256 containerd[1497]: 2024-12-13 02:06:19.458 [INFO][4642] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.5/26] IPv6=[] ContainerID="fb69b941cbcaf183990bd882b6ca67fd7ba07a5d08c605ef54d7d8f8dec07d62" HandleID="k8s-pod-network.fb69b941cbcaf183990bd882b6ca67fd7ba07a5d08c605ef54d7d8f8dec07d62" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--b9l5l-eth0"
Dec 13 02:06:19.532328 containerd[1497]: 2024-12-13 02:06:19.469 [INFO][4628] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fb69b941cbcaf183990bd882b6ca67fd7ba07a5d08c605ef54d7d8f8dec07d62" Namespace="calico-apiserver" Pod="calico-apiserver-6fc5fc95d-b9l5l" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--b9l5l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--b9l5l-eth0", GenerateName:"calico-apiserver-6fc5fc95d-", Namespace:"calico-apiserver", SelfLink:"", UID:"54fc91ff-1a5b-460c-b104-ee7484562222", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 5, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fc5fc95d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-3-45a43b40ef", ContainerID:"", Pod:"calico-apiserver-6fc5fc95d-b9l5l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidc46127ec77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 02:06:19.532328 containerd[1497]: 2024-12-13 02:06:19.469 [INFO][4628] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.61.5/32] ContainerID="fb69b941cbcaf183990bd882b6ca67fd7ba07a5d08c605ef54d7d8f8dec07d62" Namespace="calico-apiserver" Pod="calico-apiserver-6fc5fc95d-b9l5l" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--b9l5l-eth0"
Dec 13 02:06:19.532328 containerd[1497]: 2024-12-13 02:06:19.469 [INFO][4628] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc46127ec77 ContainerID="fb69b941cbcaf183990bd882b6ca67fd7ba07a5d08c605ef54d7d8f8dec07d62" Namespace="calico-apiserver" Pod="calico-apiserver-6fc5fc95d-b9l5l" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--b9l5l-eth0"
Dec 13 02:06:19.532328 containerd[1497]: 2024-12-13 02:06:19.494 [INFO][4628] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb69b941cbcaf183990bd882b6ca67fd7ba07a5d08c605ef54d7d8f8dec07d62" Namespace="calico-apiserver" Pod="calico-apiserver-6fc5fc95d-b9l5l" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--b9l5l-eth0"
Dec 13 02:06:19.532328 containerd[1497]: 2024-12-13 02:06:19.495 [INFO][4628] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fb69b941cbcaf183990bd882b6ca67fd7ba07a5d08c605ef54d7d8f8dec07d62" Namespace="calico-apiserver" Pod="calico-apiserver-6fc5fc95d-b9l5l" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--b9l5l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--b9l5l-eth0", GenerateName:"calico-apiserver-6fc5fc95d-", Namespace:"calico-apiserver", SelfLink:"", UID:"54fc91ff-1a5b-460c-b104-ee7484562222", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 5, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fc5fc95d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-3-45a43b40ef", ContainerID:"fb69b941cbcaf183990bd882b6ca67fd7ba07a5d08c605ef54d7d8f8dec07d62", Pod:"calico-apiserver-6fc5fc95d-b9l5l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidc46127ec77", MAC:"96:77:f9:c2:51:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 02:06:19.532328 containerd[1497]: 2024-12-13 02:06:19.518 [INFO][4628] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fb69b941cbcaf183990bd882b6ca67fd7ba07a5d08c605ef54d7d8f8dec07d62" Namespace="calico-apiserver" Pod="calico-apiserver-6fc5fc95d-b9l5l" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--b9l5l-eth0"
Dec 13 02:06:19.556947 containerd[1497]: time="2024-12-13T02:06:19.556761418Z" level=info msg="CreateContainer within sandbox \"7633383347db8ff8619527348093bee3fea1d07ace630f75b9187c3052b761b1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"fcc5f8d3374fefa5932c38311da28b769a26b69ea37e5b3cfd1f84611c5314b6\""
Dec 13 02:06:19.573635 containerd[1497]: time="2024-12-13T02:06:19.570643207Z" level=info msg="StartContainer for \"fcc5f8d3374fefa5932c38311da28b769a26b69ea37e5b3cfd1f84611c5314b6\""
Dec 13 02:06:19.636362 containerd[1497]: time="2024-12-13T02:06:19.635630305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:06:19.636362 containerd[1497]: time="2024-12-13T02:06:19.635679458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:06:19.636362 containerd[1497]: time="2024-12-13T02:06:19.635702693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:06:19.636362 containerd[1497]: time="2024-12-13T02:06:19.635778096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:06:19.647372 systemd[1]: Started cri-containerd-fcc5f8d3374fefa5932c38311da28b769a26b69ea37e5b3cfd1f84611c5314b6.scope - libcontainer container fcc5f8d3374fefa5932c38311da28b769a26b69ea37e5b3cfd1f84611c5314b6.
Dec 13 02:06:19.666193 systemd[1]: Started cri-containerd-fb69b941cbcaf183990bd882b6ca67fd7ba07a5d08c605ef54d7d8f8dec07d62.scope - libcontainer container fb69b941cbcaf183990bd882b6ca67fd7ba07a5d08c605ef54d7d8f8dec07d62.
Dec 13 02:06:19.691037 containerd[1497]: time="2024-12-13T02:06:19.690681086Z" level=info msg="StartContainer for \"fcc5f8d3374fefa5932c38311da28b769a26b69ea37e5b3cfd1f84611c5314b6\" returns successfully"
Dec 13 02:06:19.718465 containerd[1497]: time="2024-12-13T02:06:19.718405450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc5fc95d-b9l5l,Uid:54fc91ff-1a5b-460c-b104-ee7484562222,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"fb69b941cbcaf183990bd882b6ca67fd7ba07a5d08c605ef54d7d8f8dec07d62\""
Dec 13 02:06:20.039212 systemd-networkd[1398]: calid032e899688: Gained IPv6LL
Dec 13 02:06:20.061349 containerd[1497]: time="2024-12-13T02:06:20.058452631Z" level=info msg="StopPodSandbox for \"0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3\""
Dec 13 02:06:20.103145 systemd-networkd[1398]: calicb2ea961f68: Gained IPv6LL
Dec 13 02:06:20.205548 containerd[1497]: 2024-12-13 02:06:20.149 [INFO][4757] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3"
Dec 13 02:06:20.205548 containerd[1497]: 2024-12-13 02:06:20.149 [INFO][4757] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" iface="eth0" netns="/var/run/netns/cni-c005a3d9-6853-d384-cf86-58c44eeb0fc3"
Dec 13 02:06:20.205548 containerd[1497]: 2024-12-13 02:06:20.149 [INFO][4757] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" iface="eth0" netns="/var/run/netns/cni-c005a3d9-6853-d384-cf86-58c44eeb0fc3"
Dec 13 02:06:20.205548 containerd[1497]: 2024-12-13 02:06:20.149 [INFO][4757] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" iface="eth0" netns="/var/run/netns/cni-c005a3d9-6853-d384-cf86-58c44eeb0fc3"
Dec 13 02:06:20.205548 containerd[1497]: 2024-12-13 02:06:20.150 [INFO][4757] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3"
Dec 13 02:06:20.205548 containerd[1497]: 2024-12-13 02:06:20.150 [INFO][4757] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3"
Dec 13 02:06:20.205548 containerd[1497]: 2024-12-13 02:06:20.188 [INFO][4764] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" HandleID="k8s-pod-network.0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--f8q5t-eth0"
Dec 13 02:06:20.205548 containerd[1497]: 2024-12-13 02:06:20.188 [INFO][4764] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 02:06:20.205548 containerd[1497]: 2024-12-13 02:06:20.188 [INFO][4764] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 02:06:20.205548 containerd[1497]: 2024-12-13 02:06:20.196 [WARNING][4764] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" HandleID="k8s-pod-network.0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--f8q5t-eth0"
Dec 13 02:06:20.205548 containerd[1497]: 2024-12-13 02:06:20.196 [INFO][4764] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" HandleID="k8s-pod-network.0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--f8q5t-eth0"
Dec 13 02:06:20.205548 containerd[1497]: 2024-12-13 02:06:20.199 [INFO][4764] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 02:06:20.205548 containerd[1497]: 2024-12-13 02:06:20.202 [INFO][4757] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3"
Dec 13 02:06:20.207777 containerd[1497]: time="2024-12-13T02:06:20.206104718Z" level=info msg="TearDown network for sandbox \"0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3\" successfully"
Dec 13 02:06:20.207777 containerd[1497]: time="2024-12-13T02:06:20.206136107Z" level=info msg="StopPodSandbox for \"0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3\" returns successfully"
Dec 13 02:06:20.207777 containerd[1497]: time="2024-12-13T02:06:20.206870993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc5fc95d-f8q5t,Uid:6a601425-be16-4451-92a5-52bc1704e2e9,Namespace:calico-apiserver,Attempt:1,}"
Dec 13 02:06:20.355900 systemd-networkd[1398]: calidcd63da416b: Link UP
Dec 13 02:06:20.358971 systemd-networkd[1398]: calidcd63da416b: Gained carrier
Dec 13 02:06:20.380116 containerd[1497]: 2024-12-13 02:06:20.264 [INFO][4771] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--f8q5t-eth0 calico-apiserver-6fc5fc95d- calico-apiserver 6a601425-be16-4451-92a5-52bc1704e2e9 837 0 2024-12-13 02:05:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fc5fc95d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-1-3-45a43b40ef calico-apiserver-6fc5fc95d-f8q5t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidcd63da416b [] []}} ContainerID="d49ab741004dedb18953e80316e1325168590aeadfc9ae3ebf66dbfa2bfd8465" Namespace="calico-apiserver" Pod="calico-apiserver-6fc5fc95d-f8q5t" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--f8q5t-"
Dec 13 02:06:20.380116 containerd[1497]: 2024-12-13 02:06:20.265 [INFO][4771] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d49ab741004dedb18953e80316e1325168590aeadfc9ae3ebf66dbfa2bfd8465" Namespace="calico-apiserver" Pod="calico-apiserver-6fc5fc95d-f8q5t" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--f8q5t-eth0"
Dec 13 02:06:20.380116 containerd[1497]: 2024-12-13 02:06:20.303 [INFO][4781] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d49ab741004dedb18953e80316e1325168590aeadfc9ae3ebf66dbfa2bfd8465" HandleID="k8s-pod-network.d49ab741004dedb18953e80316e1325168590aeadfc9ae3ebf66dbfa2bfd8465" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--f8q5t-eth0"
Dec 13 02:06:20.380116 containerd[1497]: 2024-12-13 02:06:20.313 [INFO][4781] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d49ab741004dedb18953e80316e1325168590aeadfc9ae3ebf66dbfa2bfd8465" HandleID="k8s-pod-network.d49ab741004dedb18953e80316e1325168590aeadfc9ae3ebf66dbfa2bfd8465" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--f8q5t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051240), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-1-3-45a43b40ef", "pod":"calico-apiserver-6fc5fc95d-f8q5t", "timestamp":"2024-12-13 02:06:20.303247432 +0000 UTC"}, Hostname:"ci-4081-2-1-3-45a43b40ef", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 02:06:20.380116 containerd[1497]: 2024-12-13 02:06:20.313 [INFO][4781] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 02:06:20.380116 containerd[1497]: 2024-12-13 02:06:20.313 [INFO][4781] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 02:06:20.380116 containerd[1497]: 2024-12-13 02:06:20.314 [INFO][4781] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-3-45a43b40ef'
Dec 13 02:06:20.380116 containerd[1497]: 2024-12-13 02:06:20.316 [INFO][4781] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d49ab741004dedb18953e80316e1325168590aeadfc9ae3ebf66dbfa2bfd8465" host="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:06:20.380116 containerd[1497]: 2024-12-13 02:06:20.320 [INFO][4781] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:06:20.380116 containerd[1497]: 2024-12-13 02:06:20.325 [INFO][4781] ipam/ipam.go 489: Trying affinity for 192.168.61.0/26 host="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:06:20.380116 containerd[1497]: 2024-12-13 02:06:20.328 [INFO][4781] ipam/ipam.go 155: Attempting to load block cidr=192.168.61.0/26 host="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:06:20.380116 containerd[1497]: 2024-12-13 02:06:20.331 [INFO][4781] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.0/26 host="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:06:20.380116 containerd[1497]: 2024-12-13 02:06:20.331 [INFO][4781] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.0/26 handle="k8s-pod-network.d49ab741004dedb18953e80316e1325168590aeadfc9ae3ebf66dbfa2bfd8465" host="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:06:20.380116 containerd[1497]: 2024-12-13 02:06:20.333 [INFO][4781] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d49ab741004dedb18953e80316e1325168590aeadfc9ae3ebf66dbfa2bfd8465
Dec 13 02:06:20.380116 containerd[1497]: 2024-12-13 02:06:20.337 [INFO][4781] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.61.0/26 handle="k8s-pod-network.d49ab741004dedb18953e80316e1325168590aeadfc9ae3ebf66dbfa2bfd8465" host="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:06:20.380116 containerd[1497]: 2024-12-13 02:06:20.346 [INFO][4781] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.61.6/26] block=192.168.61.0/26 handle="k8s-pod-network.d49ab741004dedb18953e80316e1325168590aeadfc9ae3ebf66dbfa2bfd8465" host="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:06:20.380116 containerd[1497]: 2024-12-13 02:06:20.346 [INFO][4781] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.6/26] handle="k8s-pod-network.d49ab741004dedb18953e80316e1325168590aeadfc9ae3ebf66dbfa2bfd8465" host="ci-4081-2-1-3-45a43b40ef"
Dec 13 02:06:20.380116 containerd[1497]: 2024-12-13 02:06:20.346 [INFO][4781] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 02:06:20.380116 containerd[1497]: 2024-12-13 02:06:20.346 [INFO][4781] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.6/26] IPv6=[] ContainerID="d49ab741004dedb18953e80316e1325168590aeadfc9ae3ebf66dbfa2bfd8465" HandleID="k8s-pod-network.d49ab741004dedb18953e80316e1325168590aeadfc9ae3ebf66dbfa2bfd8465" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--f8q5t-eth0"
Dec 13 02:06:20.385241 containerd[1497]: 2024-12-13 02:06:20.350 [INFO][4771] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d49ab741004dedb18953e80316e1325168590aeadfc9ae3ebf66dbfa2bfd8465" Namespace="calico-apiserver" Pod="calico-apiserver-6fc5fc95d-f8q5t" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--f8q5t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--f8q5t-eth0", GenerateName:"calico-apiserver-6fc5fc95d-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a601425-be16-4451-92a5-52bc1704e2e9", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 5, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fc5fc95d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-3-45a43b40ef", ContainerID:"", Pod:"calico-apiserver-6fc5fc95d-f8q5t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidcd63da416b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 02:06:20.385241 containerd[1497]: 2024-12-13 02:06:20.350 [INFO][4771] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.61.6/32] ContainerID="d49ab741004dedb18953e80316e1325168590aeadfc9ae3ebf66dbfa2bfd8465" Namespace="calico-apiserver" Pod="calico-apiserver-6fc5fc95d-f8q5t" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--f8q5t-eth0"
Dec 13 02:06:20.385241 containerd[1497]: 2024-12-13 02:06:20.350 [INFO][4771] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidcd63da416b ContainerID="d49ab741004dedb18953e80316e1325168590aeadfc9ae3ebf66dbfa2bfd8465" Namespace="calico-apiserver" Pod="calico-apiserver-6fc5fc95d-f8q5t" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--f8q5t-eth0"
Dec 13 02:06:20.385241 containerd[1497]: 2024-12-13 02:06:20.356 [INFO][4771] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d49ab741004dedb18953e80316e1325168590aeadfc9ae3ebf66dbfa2bfd8465" Namespace="calico-apiserver" Pod="calico-apiserver-6fc5fc95d-f8q5t" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--f8q5t-eth0"
Dec 13 02:06:20.385241 containerd[1497]: 2024-12-13 02:06:20.359 [INFO][4771] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d49ab741004dedb18953e80316e1325168590aeadfc9ae3ebf66dbfa2bfd8465" Namespace="calico-apiserver" Pod="calico-apiserver-6fc5fc95d-f8q5t" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--f8q5t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--f8q5t-eth0", GenerateName:"calico-apiserver-6fc5fc95d-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a601425-be16-4451-92a5-52bc1704e2e9", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 5, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fc5fc95d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-3-45a43b40ef", ContainerID:"d49ab741004dedb18953e80316e1325168590aeadfc9ae3ebf66dbfa2bfd8465", Pod:"calico-apiserver-6fc5fc95d-f8q5t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidcd63da416b", MAC:"d6:ea:e7:74:42:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 02:06:20.385241 containerd[1497]: 2024-12-13 02:06:20.373 [INFO][4771] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d49ab741004dedb18953e80316e1325168590aeadfc9ae3ebf66dbfa2bfd8465" Namespace="calico-apiserver" Pod="calico-apiserver-6fc5fc95d-f8q5t" WorkloadEndpoint="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--f8q5t-eth0"
Dec 13 02:06:20.418520 systemd[1]: run-netns-cni\x2dc005a3d9\x2d6853\x2dd384\x2dcf86\x2d58c44eeb0fc3.mount: Deactivated successfully.
Dec 13 02:06:20.420840 containerd[1497]: time="2024-12-13T02:06:20.420705244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:06:20.421541 containerd[1497]: time="2024-12-13T02:06:20.421344418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:06:20.421541 containerd[1497]: time="2024-12-13T02:06:20.421362972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:06:20.421541 containerd[1497]: time="2024-12-13T02:06:20.421463794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:06:20.450187 systemd[1]: run-containerd-runc-k8s.io-d49ab741004dedb18953e80316e1325168590aeadfc9ae3ebf66dbfa2bfd8465-runc.RVcjLC.mount: Deactivated successfully.
Dec 13 02:06:20.459232 systemd[1]: Started cri-containerd-d49ab741004dedb18953e80316e1325168590aeadfc9ae3ebf66dbfa2bfd8465.scope - libcontainer container d49ab741004dedb18953e80316e1325168590aeadfc9ae3ebf66dbfa2bfd8465.
Dec 13 02:06:20.507620 containerd[1497]: time="2024-12-13T02:06:20.507555183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc5fc95d-f8q5t,Uid:6a601425-be16-4451-92a5-52bc1704e2e9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d49ab741004dedb18953e80316e1325168590aeadfc9ae3ebf66dbfa2bfd8465\""
Dec 13 02:06:20.742348 systemd-networkd[1398]: calidc46127ec77: Gained IPv6LL
Dec 13 02:06:21.913465 kubelet[2808]: I1213 02:06:21.913405 2808 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 02:06:21.998723 systemd[1]: run-containerd-runc-k8s.io-550ff4239d6d73315f835480efa1a53ca0f6d4e1c9873b1528cce072676515a9-runc.wAQfQT.mount: Deactivated successfully.
Dec 13 02:06:22.023197 systemd-networkd[1398]: calidcd63da416b: Gained IPv6LL
Dec 13 02:06:22.169411 systemd[1]: run-containerd-runc-k8s.io-550ff4239d6d73315f835480efa1a53ca0f6d4e1c9873b1528cce072676515a9-runc.Lk2z9Y.mount: Deactivated successfully.
Dec 13 02:06:22.505364 containerd[1497]: time="2024-12-13T02:06:22.505325035Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:06:22.511363 containerd[1497]: time="2024-12-13T02:06:22.511282819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192"
Dec 13 02:06:22.514559 containerd[1497]: time="2024-12-13T02:06:22.514481975Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:06:22.518994 containerd[1497]: time="2024-12-13T02:06:22.518880318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:06:22.519912 containerd[1497]: time="2024-12-13T02:06:22.519861251Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.046156782s"
Dec 13 02:06:22.519966 containerd[1497]: time="2024-12-13T02:06:22.519914572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\""
Dec 13 02:06:22.521851 containerd[1497]: time="2024-12-13T02:06:22.521818326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Dec 13 02:06:22.539836 containerd[1497]: time="2024-12-13T02:06:22.539771061Z" level=info msg="CreateContainer within sandbox \"8b0e9f7f1764b82d6d333f9083522b5cf1d64cb81f1854f3b48de7fadcbcbe2a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Dec 13 02:06:22.562475 containerd[1497]: time="2024-12-13T02:06:22.562421596Z" level=info msg="CreateContainer within sandbox \"8b0e9f7f1764b82d6d333f9083522b5cf1d64cb81f1854f3b48de7fadcbcbe2a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"00745217febf2e638a94d5b1a28530309b235cc8e4342d5d636f86b0844d967a\""
Dec 13 02:06:22.563349 containerd[1497]: time="2024-12-13T02:06:22.563222206Z" level=info msg="StartContainer for \"00745217febf2e638a94d5b1a28530309b235cc8e4342d5d636f86b0844d967a\""
Dec 13 02:06:22.595151 systemd[1]: Started cri-containerd-00745217febf2e638a94d5b1a28530309b235cc8e4342d5d636f86b0844d967a.scope - libcontainer container 00745217febf2e638a94d5b1a28530309b235cc8e4342d5d636f86b0844d967a.
Dec 13 02:06:22.642504 containerd[1497]: time="2024-12-13T02:06:22.642392286Z" level=info msg="StartContainer for \"00745217febf2e638a94d5b1a28530309b235cc8e4342d5d636f86b0844d967a\" returns successfully"
Dec 13 02:06:23.515628 kubelet[2808]: I1213 02:06:23.515337 2808 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6968b48768-kvc6c" podStartSLOduration=27.590449555 podStartE2EDuration="31.515296124s" podCreationTimestamp="2024-12-13 02:05:52 +0000 UTC" firstStartedPulling="2024-12-13 02:06:18.595434689 +0000 UTC m=+45.704316446" lastFinishedPulling="2024-12-13 02:06:22.520281259 +0000 UTC m=+49.629163015" observedRunningTime="2024-12-13 02:06:23.514476168 +0000 UTC m=+50.623357925" watchObservedRunningTime="2024-12-13 02:06:23.515296124 +0000 UTC m=+50.624177882"
Dec 13 02:06:24.244661 containerd[1497]: time="2024-12-13T02:06:24.244604722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:06:24.246204 containerd[1497]: time="2024-12-13T02:06:24.246160938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Dec 13 02:06:24.247795 containerd[1497]: time="2024-12-13T02:06:24.247719457Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:06:24.250489 containerd[1497]: time="2024-12-13T02:06:24.250436649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:06:24.251489 containerd[1497]: time="2024-12-13T02:06:24.251196451Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.729343679s"
Dec 13 02:06:24.251489 containerd[1497]: time="2024-12-13T02:06:24.251246857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Dec 13 02:06:24.252172 containerd[1497]: time="2024-12-13T02:06:24.251875821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Dec 13 02:06:24.258217 containerd[1497]: time="2024-12-13T02:06:24.258160347Z" level=info msg="CreateContainer within sandbox \"7633383347db8ff8619527348093bee3fea1d07ace630f75b9187c3052b761b1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Dec 13 02:06:24.287661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2716319786.mount: Deactivated successfully.
Dec 13 02:06:24.290735 containerd[1497]: time="2024-12-13T02:06:24.289369271Z" level=info msg="CreateContainer within sandbox \"7633383347db8ff8619527348093bee3fea1d07ace630f75b9187c3052b761b1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"cf1f4a091d4008436509f98ff9b17af749d9f7db9f8094e5af890a5693d3490e\""
Dec 13 02:06:24.291049 containerd[1497]: time="2024-12-13T02:06:24.290996672Z" level=info msg="StartContainer for \"cf1f4a091d4008436509f98ff9b17af749d9f7db9f8094e5af890a5693d3490e\""
Dec 13 02:06:24.335193 systemd[1]: Started cri-containerd-cf1f4a091d4008436509f98ff9b17af749d9f7db9f8094e5af890a5693d3490e.scope - libcontainer container cf1f4a091d4008436509f98ff9b17af749d9f7db9f8094e5af890a5693d3490e.
Dec 13 02:06:24.376617 containerd[1497]: time="2024-12-13T02:06:24.376406897Z" level=info msg="StartContainer for \"cf1f4a091d4008436509f98ff9b17af749d9f7db9f8094e5af890a5693d3490e\" returns successfully"
Dec 13 02:06:25.337527 kubelet[2808]: I1213 02:06:25.337459 2808 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Dec 13 02:06:25.338674 kubelet[2808]: I1213 02:06:25.338639 2808 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Dec 13 02:06:27.240778 containerd[1497]: time="2024-12-13T02:06:27.240718508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:06:27.242063 containerd[1497]: time="2024-12-13T02:06:27.242006205Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404"
Dec 13 02:06:27.243712 containerd[1497]: time="2024-12-13T02:06:27.243639336Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:06:27.258053 containerd[1497]: time="2024-12-13T02:06:27.257734460Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:06:27.261976 containerd[1497]: time="2024-12-13T02:06:27.261945559Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.010042025s"
Dec 13 02:06:27.262153 containerd[1497]: time="2024-12-13T02:06:27.262137513Z" level=info msg="PullImage
\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 02:06:27.263147 containerd[1497]: time="2024-12-13T02:06:27.263105041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 02:06:27.264701 containerd[1497]: time="2024-12-13T02:06:27.264671045Z" level=info msg="CreateContainer within sandbox \"fb69b941cbcaf183990bd882b6ca67fd7ba07a5d08c605ef54d7d8f8dec07d62\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 02:06:27.290535 containerd[1497]: time="2024-12-13T02:06:27.290374509Z" level=info msg="CreateContainer within sandbox \"fb69b941cbcaf183990bd882b6ca67fd7ba07a5d08c605ef54d7d8f8dec07d62\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"32aa3fa19a75b70c1caf6e98d40cd2a77aab4f4abbb2db8f65c7c87ed495783c\"" Dec 13 02:06:27.291502 containerd[1497]: time="2024-12-13T02:06:27.291322420Z" level=info msg="StartContainer for \"32aa3fa19a75b70c1caf6e98d40cd2a77aab4f4abbb2db8f65c7c87ed495783c\"" Dec 13 02:06:27.343167 systemd[1]: Started cri-containerd-32aa3fa19a75b70c1caf6e98d40cd2a77aab4f4abbb2db8f65c7c87ed495783c.scope - libcontainer container 32aa3fa19a75b70c1caf6e98d40cd2a77aab4f4abbb2db8f65c7c87ed495783c. Dec 13 02:06:27.393186 containerd[1497]: time="2024-12-13T02:06:27.391774275Z" level=info msg="StartContainer for \"32aa3fa19a75b70c1caf6e98d40cd2a77aab4f4abbb2db8f65c7c87ed495783c\" returns successfully" Dec 13 02:06:27.533627 kubelet[2808]: I1213 02:06:27.533154 2808 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-7r499" podStartSLOduration=28.805412864 podStartE2EDuration="35.533116342s" podCreationTimestamp="2024-12-13 02:05:52 +0000 UTC" firstStartedPulling="2024-12-13 02:06:17.52397103 +0000 UTC m=+44.632852788" lastFinishedPulling="2024-12-13 02:06:24.251674508 +0000 UTC m=+51.360556266" observedRunningTime="2024-12-13 02:06:24.54817666 +0000 UTC m=+51.657058447" watchObservedRunningTime="2024-12-13 02:06:27.533116342 +0000 UTC m=+54.641998089" Dec 13 02:06:27.534222 kubelet[2808]: I1213 02:06:27.533998 2808 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6fc5fc95d-b9l5l" podStartSLOduration=27.990431405 podStartE2EDuration="35.533459242s" podCreationTimestamp="2024-12-13 02:05:52 +0000 UTC" firstStartedPulling="2024-12-13 02:06:19.719691801 +0000 UTC m=+46.828573559" lastFinishedPulling="2024-12-13 02:06:27.262719639 +0000 UTC m=+54.371601396" observedRunningTime="2024-12-13 02:06:27.532939074 +0000 UTC m=+54.641820852" watchObservedRunningTime="2024-12-13 02:06:27.533459242 +0000 UTC m=+54.642341000" Dec 13 02:06:27.722837 containerd[1497]: time="2024-12-13T02:06:27.720149697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 02:06:27.728713 containerd[1497]: time="2024-12-13T02:06:27.728658498Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 465.523691ms" Dec 13 02:06:27.728892 containerd[1497]: time="2024-12-13T02:06:27.728863568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" 
returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 02:06:27.736894 containerd[1497]: time="2024-12-13T02:06:27.736859146Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:06:27.738588 containerd[1497]: time="2024-12-13T02:06:27.738557071Z" level=info msg="CreateContainer within sandbox \"d49ab741004dedb18953e80316e1325168590aeadfc9ae3ebf66dbfa2bfd8465\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 02:06:27.764090 containerd[1497]: time="2024-12-13T02:06:27.763966415Z" level=info msg="CreateContainer within sandbox \"d49ab741004dedb18953e80316e1325168590aeadfc9ae3ebf66dbfa2bfd8465\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"de7de9eb92bec371d5fd57393ae454dcb8a94838507cdd120fd026ce45867220\"" Dec 13 02:06:27.766538 containerd[1497]: time="2024-12-13T02:06:27.766494127Z" level=info msg="StartContainer for \"de7de9eb92bec371d5fd57393ae454dcb8a94838507cdd120fd026ce45867220\"" Dec 13 02:06:27.805274 systemd[1]: Started cri-containerd-de7de9eb92bec371d5fd57393ae454dcb8a94838507cdd120fd026ce45867220.scope - libcontainer container de7de9eb92bec371d5fd57393ae454dcb8a94838507cdd120fd026ce45867220. Dec 13 02:06:27.880962 containerd[1497]: time="2024-12-13T02:06:27.880903244Z" level=info msg="StartContainer for \"de7de9eb92bec371d5fd57393ae454dcb8a94838507cdd120fd026ce45867220\" returns successfully" Dec 13 02:06:28.282716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1671133045.mount: Deactivated successfully. Dec 13 02:06:28.686960 kubelet[2808]: I1213 02:06:28.686792 2808 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6fc5fc95d-f8q5t" podStartSLOduration=29.464363425 podStartE2EDuration="36.686746954s" podCreationTimestamp="2024-12-13 02:05:52 +0000 UTC" firstStartedPulling="2024-12-13 02:06:20.509370079 +0000 UTC m=+47.618251836" lastFinishedPulling="2024-12-13 02:06:27.731753588 +0000 UTC m=+54.840635365" observedRunningTime="2024-12-13 02:06:28.535679514 +0000 UTC m=+55.644561271" watchObservedRunningTime="2024-12-13 02:06:28.686746954 +0000 UTC m=+55.795628711" Dec 13 02:06:33.278314 containerd[1497]: time="2024-12-13T02:06:33.278228468Z" level=info msg="StopPodSandbox for \"812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062\"" Dec 13 02:06:33.500957 containerd[1497]: 2024-12-13 02:06:33.444 [WARNING][5118] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--3--45a43b40ef-k8s-csi--node--driver--7r499-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ef8df160-b623-42c6-abf0-520b155176f4", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 5, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-3-45a43b40ef", ContainerID:"7633383347db8ff8619527348093bee3fea1d07ace630f75b9187c3052b761b1", Pod:"csi-node-driver-7r499", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4f276ea8314", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:06:33.500957 containerd[1497]: 2024-12-13 02:06:33.446 [INFO][5118] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" Dec 13 02:06:33.500957 containerd[1497]: 2024-12-13 02:06:33.446 [INFO][5118] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" iface="eth0" netns="" Dec 13 02:06:33.500957 containerd[1497]: 2024-12-13 02:06:33.446 [INFO][5118] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" Dec 13 02:06:33.500957 containerd[1497]: 2024-12-13 02:06:33.447 [INFO][5118] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" Dec 13 02:06:33.500957 containerd[1497]: 2024-12-13 02:06:33.488 [INFO][5124] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" HandleID="k8s-pod-network.812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" Workload="ci--4081--2--1--3--45a43b40ef-k8s-csi--node--driver--7r499-eth0" Dec 13 02:06:33.500957 containerd[1497]: 2024-12-13 02:06:33.488 [INFO][5124] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:06:33.500957 containerd[1497]: 2024-12-13 02:06:33.488 [INFO][5124] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:06:33.500957 containerd[1497]: 2024-12-13 02:06:33.493 [WARNING][5124] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" HandleID="k8s-pod-network.812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" Workload="ci--4081--2--1--3--45a43b40ef-k8s-csi--node--driver--7r499-eth0" Dec 13 02:06:33.500957 containerd[1497]: 2024-12-13 02:06:33.493 [INFO][5124] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" HandleID="k8s-pod-network.812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" Workload="ci--4081--2--1--3--45a43b40ef-k8s-csi--node--driver--7r499-eth0" Dec 13 02:06:33.500957 containerd[1497]: 2024-12-13 02:06:33.495 [INFO][5124] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:06:33.500957 containerd[1497]: 2024-12-13 02:06:33.498 [INFO][5118] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" Dec 13 02:06:33.504122 containerd[1497]: time="2024-12-13T02:06:33.501536033Z" level=info msg="TearDown network for sandbox \"812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062\" successfully" Dec 13 02:06:33.504122 containerd[1497]: time="2024-12-13T02:06:33.501566431Z" level=info msg="StopPodSandbox for \"812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062\" returns successfully" Dec 13 02:06:33.540323 containerd[1497]: time="2024-12-13T02:06:33.539328786Z" level=info msg="RemovePodSandbox for \"812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062\"" Dec 13 02:06:33.542643 containerd[1497]: time="2024-12-13T02:06:33.542603317Z" level=info msg="Forcibly stopping sandbox \"812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062\"" Dec 13 02:06:33.666041 containerd[1497]: 2024-12-13 02:06:33.615 [WARNING][5143] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--3--45a43b40ef-k8s-csi--node--driver--7r499-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ef8df160-b623-42c6-abf0-520b155176f4", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 5, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-3-45a43b40ef", ContainerID:"7633383347db8ff8619527348093bee3fea1d07ace630f75b9187c3052b761b1", Pod:"csi-node-driver-7r499", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4f276ea8314", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:06:33.666041 containerd[1497]: 2024-12-13 02:06:33.615 [INFO][5143] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" Dec 13 02:06:33.666041 containerd[1497]: 2024-12-13 02:06:33.615 [INFO][5143] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" iface="eth0" netns="" Dec 13 02:06:33.666041 containerd[1497]: 2024-12-13 02:06:33.616 [INFO][5143] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" Dec 13 02:06:33.666041 containerd[1497]: 2024-12-13 02:06:33.616 [INFO][5143] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" Dec 13 02:06:33.666041 containerd[1497]: 2024-12-13 02:06:33.648 [INFO][5150] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" HandleID="k8s-pod-network.812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" Workload="ci--4081--2--1--3--45a43b40ef-k8s-csi--node--driver--7r499-eth0" Dec 13 02:06:33.666041 containerd[1497]: 2024-12-13 02:06:33.648 [INFO][5150] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:06:33.666041 containerd[1497]: 2024-12-13 02:06:33.648 [INFO][5150] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:06:33.666041 containerd[1497]: 2024-12-13 02:06:33.654 [WARNING][5150] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" HandleID="k8s-pod-network.812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" Workload="ci--4081--2--1--3--45a43b40ef-k8s-csi--node--driver--7r499-eth0" Dec 13 02:06:33.666041 containerd[1497]: 2024-12-13 02:06:33.654 [INFO][5150] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" HandleID="k8s-pod-network.812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" Workload="ci--4081--2--1--3--45a43b40ef-k8s-csi--node--driver--7r499-eth0" Dec 13 02:06:33.666041 containerd[1497]: 2024-12-13 02:06:33.655 [INFO][5150] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:06:33.666041 containerd[1497]: 2024-12-13 02:06:33.661 [INFO][5143] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062" Dec 13 02:06:33.669541 containerd[1497]: time="2024-12-13T02:06:33.666091768Z" level=info msg="TearDown network for sandbox \"812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062\" successfully" Dec 13 02:06:33.676485 containerd[1497]: time="2024-12-13T02:06:33.676424279Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:06:33.714850 containerd[1497]: time="2024-12-13T02:06:33.714757588Z" level=info msg="RemovePodSandbox \"812fdb32b2333b6cb6024b9136fd6339c5dce35f22a6fea4e4d43015b38c6062\" returns successfully" Dec 13 02:06:33.716129 containerd[1497]: time="2024-12-13T02:06:33.716090311Z" level=info msg="StopPodSandbox for \"b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab\"" Dec 13 02:06:33.794648 containerd[1497]: 2024-12-13 02:06:33.758 [WARNING][5168] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--3--45a43b40ef-k8s-calico--kube--controllers--6968b48768--kvc6c-eth0", GenerateName:"calico-kube-controllers-6968b48768-", Namespace:"calico-system", SelfLink:"", UID:"1ecf16d6-9a75-45ea-bf2b-4a4e040aadee", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 5, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6968b48768", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-3-45a43b40ef", ContainerID:"8b0e9f7f1764b82d6d333f9083522b5cf1d64cb81f1854f3b48de7fadcbcbe2a", Pod:"calico-kube-controllers-6968b48768-kvc6c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicb2ea961f68", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:06:33.794648 containerd[1497]: 2024-12-13 02:06:33.759 [INFO][5168] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" Dec 13 02:06:33.794648 containerd[1497]: 2024-12-13 02:06:33.759 [INFO][5168] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" iface="eth0" netns="" Dec 13 02:06:33.794648 containerd[1497]: 2024-12-13 02:06:33.759 [INFO][5168] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" Dec 13 02:06:33.794648 containerd[1497]: 2024-12-13 02:06:33.759 [INFO][5168] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" Dec 13 02:06:33.794648 containerd[1497]: 2024-12-13 02:06:33.781 [INFO][5174] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" HandleID="k8s-pod-network.b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--kube--controllers--6968b48768--kvc6c-eth0" Dec 13 02:06:33.794648 containerd[1497]: 2024-12-13 02:06:33.781 [INFO][5174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:06:33.794648 containerd[1497]: 2024-12-13 02:06:33.781 [INFO][5174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:06:33.794648 containerd[1497]: 2024-12-13 02:06:33.787 [WARNING][5174] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" HandleID="k8s-pod-network.b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--kube--controllers--6968b48768--kvc6c-eth0" Dec 13 02:06:33.794648 containerd[1497]: 2024-12-13 02:06:33.787 [INFO][5174] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" HandleID="k8s-pod-network.b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--kube--controllers--6968b48768--kvc6c-eth0" Dec 13 02:06:33.794648 containerd[1497]: 2024-12-13 02:06:33.789 [INFO][5174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:06:33.794648 containerd[1497]: 2024-12-13 02:06:33.791 [INFO][5168] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" Dec 13 02:06:33.794648 containerd[1497]: time="2024-12-13T02:06:33.794393025Z" level=info msg="TearDown network for sandbox \"b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab\" successfully" Dec 13 02:06:33.794648 containerd[1497]: time="2024-12-13T02:06:33.794443110Z" level=info msg="StopPodSandbox for \"b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab\" returns successfully" Dec 13 02:06:33.797288 containerd[1497]: time="2024-12-13T02:06:33.795491091Z" level=info msg="RemovePodSandbox for \"b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab\"" Dec 13 02:06:33.797288 containerd[1497]: time="2024-12-13T02:06:33.795519886Z" level=info msg="Forcibly stopping sandbox \"b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab\"" Dec 13 02:06:33.879144 containerd[1497]: 2024-12-13 02:06:33.839 [WARNING][5192] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--3--45a43b40ef-k8s-calico--kube--controllers--6968b48768--kvc6c-eth0", GenerateName:"calico-kube-controllers-6968b48768-", Namespace:"calico-system", SelfLink:"", UID:"1ecf16d6-9a75-45ea-bf2b-4a4e040aadee", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 5, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6968b48768", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-3-45a43b40ef", ContainerID:"8b0e9f7f1764b82d6d333f9083522b5cf1d64cb81f1854f3b48de7fadcbcbe2a", Pod:"calico-kube-controllers-6968b48768-kvc6c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicb2ea961f68", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:06:33.879144 containerd[1497]: 2024-12-13 02:06:33.839 [INFO][5192] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" Dec 13 02:06:33.879144 containerd[1497]: 2024-12-13 02:06:33.839 [INFO][5192] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" iface="eth0" netns="" Dec 13 02:06:33.879144 containerd[1497]: 2024-12-13 02:06:33.839 [INFO][5192] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" Dec 13 02:06:33.879144 containerd[1497]: 2024-12-13 02:06:33.839 [INFO][5192] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" Dec 13 02:06:33.879144 containerd[1497]: 2024-12-13 02:06:33.866 [INFO][5198] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" HandleID="k8s-pod-network.b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--kube--controllers--6968b48768--kvc6c-eth0" Dec 13 02:06:33.879144 containerd[1497]: 2024-12-13 02:06:33.866 [INFO][5198] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:06:33.879144 containerd[1497]: 2024-12-13 02:06:33.866 [INFO][5198] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:06:33.879144 containerd[1497]: 2024-12-13 02:06:33.871 [WARNING][5198] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" HandleID="k8s-pod-network.b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--kube--controllers--6968b48768--kvc6c-eth0" Dec 13 02:06:33.879144 containerd[1497]: 2024-12-13 02:06:33.871 [INFO][5198] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" HandleID="k8s-pod-network.b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--kube--controllers--6968b48768--kvc6c-eth0" Dec 13 02:06:33.879144 containerd[1497]: 2024-12-13 02:06:33.873 [INFO][5198] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:06:33.879144 containerd[1497]: 2024-12-13 02:06:33.876 [INFO][5192] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab" Dec 13 02:06:33.880232 containerd[1497]: time="2024-12-13T02:06:33.879190691Z" level=info msg="TearDown network for sandbox \"b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab\" successfully" Dec 13 02:06:33.909123 containerd[1497]: time="2024-12-13T02:06:33.909048897Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:06:33.909345 containerd[1497]: time="2024-12-13T02:06:33.909138518Z" level=info msg="RemovePodSandbox \"b69cc001b83a042ad72130dc377bba2c0f1e641d15a56b8494802d22e2847cab\" returns successfully" Dec 13 02:06:33.909898 containerd[1497]: time="2024-12-13T02:06:33.909874706Z" level=info msg="StopPodSandbox for \"0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3\"" Dec 13 02:06:33.982527 containerd[1497]: 2024-12-13 02:06:33.945 [WARNING][5216] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--f8q5t-eth0", GenerateName:"calico-apiserver-6fc5fc95d-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a601425-be16-4451-92a5-52bc1704e2e9", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 5, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fc5fc95d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-3-45a43b40ef", ContainerID:"d49ab741004dedb18953e80316e1325168590aeadfc9ae3ebf66dbfa2bfd8465", Pod:"calico-apiserver-6fc5fc95d-f8q5t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidcd63da416b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:06:33.982527 containerd[1497]: 2024-12-13 02:06:33.945 [INFO][5216] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" Dec 13 02:06:33.982527 containerd[1497]: 2024-12-13 02:06:33.945 [INFO][5216] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" iface="eth0" netns="" Dec 13 02:06:33.982527 containerd[1497]: 2024-12-13 02:06:33.946 [INFO][5216] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" Dec 13 02:06:33.982527 containerd[1497]: 2024-12-13 02:06:33.946 [INFO][5216] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" Dec 13 02:06:33.982527 containerd[1497]: 2024-12-13 02:06:33.969 [INFO][5222] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" HandleID="k8s-pod-network.0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--f8q5t-eth0" Dec 13 02:06:33.982527 containerd[1497]: 2024-12-13 02:06:33.969 [INFO][5222] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:06:33.982527 containerd[1497]: 2024-12-13 02:06:33.969 [INFO][5222] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:06:33.982527 containerd[1497]: 2024-12-13 02:06:33.975 [WARNING][5222] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" HandleID="k8s-pod-network.0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--f8q5t-eth0" Dec 13 02:06:33.982527 containerd[1497]: 2024-12-13 02:06:33.975 [INFO][5222] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" HandleID="k8s-pod-network.0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--f8q5t-eth0" Dec 13 02:06:33.982527 containerd[1497]: 2024-12-13 02:06:33.976 [INFO][5222] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:06:33.982527 containerd[1497]: 2024-12-13 02:06:33.979 [INFO][5216] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" Dec 13 02:06:33.983650 containerd[1497]: time="2024-12-13T02:06:33.982595597Z" level=info msg="TearDown network for sandbox \"0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3\" successfully" Dec 13 02:06:33.983650 containerd[1497]: time="2024-12-13T02:06:33.982633589Z" level=info msg="StopPodSandbox for \"0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3\" returns successfully" Dec 13 02:06:33.988757 containerd[1497]: time="2024-12-13T02:06:33.988711757Z" level=info msg="RemovePodSandbox for \"0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3\"" Dec 13 02:06:33.988757 containerd[1497]: time="2024-12-13T02:06:33.988751372Z" level=info msg="Forcibly stopping sandbox \"0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3\"" Dec 13 02:06:34.065802 containerd[1497]: 2024-12-13 02:06:34.025 [WARNING][5241] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--f8q5t-eth0", GenerateName:"calico-apiserver-6fc5fc95d-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a601425-be16-4451-92a5-52bc1704e2e9", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 5, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fc5fc95d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-3-45a43b40ef", ContainerID:"d49ab741004dedb18953e80316e1325168590aeadfc9ae3ebf66dbfa2bfd8465", Pod:"calico-apiserver-6fc5fc95d-f8q5t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidcd63da416b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:06:34.065802 containerd[1497]: 2024-12-13 02:06:34.026 [INFO][5241] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" Dec 13 02:06:34.065802 containerd[1497]: 2024-12-13 02:06:34.026 [INFO][5241] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" iface="eth0" netns="" Dec 13 02:06:34.065802 containerd[1497]: 2024-12-13 02:06:34.026 [INFO][5241] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" Dec 13 02:06:34.065802 containerd[1497]: 2024-12-13 02:06:34.026 [INFO][5241] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" Dec 13 02:06:34.065802 containerd[1497]: 2024-12-13 02:06:34.045 [INFO][5247] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" HandleID="k8s-pod-network.0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--f8q5t-eth0" Dec 13 02:06:34.065802 containerd[1497]: 2024-12-13 02:06:34.045 [INFO][5247] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:06:34.065802 containerd[1497]: 2024-12-13 02:06:34.045 [INFO][5247] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:06:34.065802 containerd[1497]: 2024-12-13 02:06:34.052 [WARNING][5247] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" HandleID="k8s-pod-network.0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--f8q5t-eth0" Dec 13 02:06:34.065802 containerd[1497]: 2024-12-13 02:06:34.052 [INFO][5247] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" HandleID="k8s-pod-network.0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--f8q5t-eth0" Dec 13 02:06:34.065802 containerd[1497]: 2024-12-13 02:06:34.055 [INFO][5247] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:06:34.065802 containerd[1497]: 2024-12-13 02:06:34.059 [INFO][5241] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3" Dec 13 02:06:34.065802 containerd[1497]: time="2024-12-13T02:06:34.065567827Z" level=info msg="TearDown network for sandbox \"0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3\" successfully" Dec 13 02:06:34.078710 containerd[1497]: time="2024-12-13T02:06:34.078636568Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:06:34.079238 containerd[1497]: time="2024-12-13T02:06:34.079095078Z" level=info msg="RemovePodSandbox \"0b0adbc28d580b89cd7303ce1356d4a6d4acdc487e2a3c9e523d5458453b5bb3\" returns successfully" Dec 13 02:06:34.080154 containerd[1497]: time="2024-12-13T02:06:34.080116198Z" level=info msg="StopPodSandbox for \"4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0\"" Dec 13 02:06:34.179093 containerd[1497]: 2024-12-13 02:06:34.131 [WARNING][5265] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--9jrk5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"14a5304a-ad78-4750-a5d4-86e0afbc564d", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 5, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-3-45a43b40ef", ContainerID:"abe401820b214df53720e5cb3c0364f7b858b878ad8091f0615c25cb5eb0e4de", Pod:"coredns-76f75df574-9jrk5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid032e899688", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:06:34.179093 containerd[1497]: 2024-12-13 02:06:34.131 [INFO][5265] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" Dec 13 02:06:34.179093 containerd[1497]: 2024-12-13 02:06:34.132 [INFO][5265] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" iface="eth0" netns="" Dec 13 02:06:34.179093 containerd[1497]: 2024-12-13 02:06:34.132 [INFO][5265] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" Dec 13 02:06:34.179093 containerd[1497]: 2024-12-13 02:06:34.132 [INFO][5265] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" Dec 13 02:06:34.179093 containerd[1497]: 2024-12-13 02:06:34.155 [INFO][5271] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" HandleID="k8s-pod-network.4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" Workload="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--9jrk5-eth0" Dec 13 02:06:34.179093 containerd[1497]: 2024-12-13 02:06:34.156 [INFO][5271] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:06:34.179093 containerd[1497]: 2024-12-13 02:06:34.156 [INFO][5271] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:06:34.179093 containerd[1497]: 2024-12-13 02:06:34.163 [WARNING][5271] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" HandleID="k8s-pod-network.4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" Workload="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--9jrk5-eth0" Dec 13 02:06:34.179093 containerd[1497]: 2024-12-13 02:06:34.163 [INFO][5271] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" HandleID="k8s-pod-network.4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" Workload="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--9jrk5-eth0" Dec 13 02:06:34.179093 containerd[1497]: 2024-12-13 02:06:34.165 [INFO][5271] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:06:34.179093 containerd[1497]: 2024-12-13 02:06:34.172 [INFO][5265] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" Dec 13 02:06:34.179093 containerd[1497]: time="2024-12-13T02:06:34.176299871Z" level=info msg="TearDown network for sandbox \"4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0\" successfully" Dec 13 02:06:34.179093 containerd[1497]: time="2024-12-13T02:06:34.176335439Z" level=info msg="StopPodSandbox for \"4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0\" returns successfully" Dec 13 02:06:34.179093 containerd[1497]: time="2024-12-13T02:06:34.176774974Z" level=info msg="RemovePodSandbox for \"4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0\"" Dec 13 02:06:34.179093 containerd[1497]: time="2024-12-13T02:06:34.176807114Z" level=info msg="Forcibly stopping sandbox \"4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0\"" Dec 13 02:06:34.280375 containerd[1497]: 2024-12-13 02:06:34.241 [WARNING][5289] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--9jrk5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"14a5304a-ad78-4750-a5d4-86e0afbc564d", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 5, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-3-45a43b40ef", ContainerID:"abe401820b214df53720e5cb3c0364f7b858b878ad8091f0615c25cb5eb0e4de", Pod:"coredns-76f75df574-9jrk5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid032e899688", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:06:34.280375 containerd[1497]: 2024-12-13 02:06:34.241 [INFO][5289] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" Dec 13 02:06:34.280375 containerd[1497]: 2024-12-13 02:06:34.242 [INFO][5289] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" iface="eth0" netns="" Dec 13 02:06:34.280375 containerd[1497]: 2024-12-13 02:06:34.242 [INFO][5289] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" Dec 13 02:06:34.280375 containerd[1497]: 2024-12-13 02:06:34.242 [INFO][5289] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" Dec 13 02:06:34.280375 containerd[1497]: 2024-12-13 02:06:34.267 [INFO][5296] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" HandleID="k8s-pod-network.4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" Workload="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--9jrk5-eth0" Dec 13 02:06:34.280375 containerd[1497]: 2024-12-13 02:06:34.268 [INFO][5296] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:06:34.280375 containerd[1497]: 2024-12-13 02:06:34.268 [INFO][5296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:06:34.280375 containerd[1497]: 2024-12-13 02:06:34.273 [WARNING][5296] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" HandleID="k8s-pod-network.4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" Workload="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--9jrk5-eth0" Dec 13 02:06:34.280375 containerd[1497]: 2024-12-13 02:06:34.273 [INFO][5296] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" HandleID="k8s-pod-network.4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" Workload="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--9jrk5-eth0" Dec 13 02:06:34.280375 containerd[1497]: 2024-12-13 02:06:34.275 [INFO][5296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:06:34.280375 containerd[1497]: 2024-12-13 02:06:34.277 [INFO][5289] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0" Dec 13 02:06:34.280375 containerd[1497]: time="2024-12-13T02:06:34.280360951Z" level=info msg="TearDown network for sandbox \"4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0\" successfully" Dec 13 02:06:34.297097 containerd[1497]: time="2024-12-13T02:06:34.297021905Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:06:34.297097 containerd[1497]: time="2024-12-13T02:06:34.297098350Z" level=info msg="RemovePodSandbox \"4ba871e61acaf542bf34379bccef8f09d05200c72947c9f4fa7f4ed1beaa64a0\" returns successfully" Dec 13 02:06:34.297717 containerd[1497]: time="2024-12-13T02:06:34.297685907Z" level=info msg="StopPodSandbox for \"30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff\"" Dec 13 02:06:34.407718 containerd[1497]: 2024-12-13 02:06:34.351 [WARNING][5314] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--b9l5l-eth0", GenerateName:"calico-apiserver-6fc5fc95d-", Namespace:"calico-apiserver", SelfLink:"", UID:"54fc91ff-1a5b-460c-b104-ee7484562222", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 5, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fc5fc95d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-3-45a43b40ef", ContainerID:"fb69b941cbcaf183990bd882b6ca67fd7ba07a5d08c605ef54d7d8f8dec07d62", Pod:"calico-apiserver-6fc5fc95d-b9l5l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidc46127ec77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:06:34.407718 containerd[1497]: 2024-12-13 02:06:34.351 [INFO][5314] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" Dec 13 02:06:34.407718 containerd[1497]: 2024-12-13 02:06:34.351 [INFO][5314] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" iface="eth0" netns="" Dec 13 02:06:34.407718 containerd[1497]: 2024-12-13 02:06:34.351 [INFO][5314] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" Dec 13 02:06:34.407718 containerd[1497]: 2024-12-13 02:06:34.352 [INFO][5314] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" Dec 13 02:06:34.407718 containerd[1497]: 2024-12-13 02:06:34.393 [INFO][5320] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" HandleID="k8s-pod-network.30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--b9l5l-eth0" Dec 13 02:06:34.407718 containerd[1497]: 2024-12-13 02:06:34.393 [INFO][5320] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:06:34.407718 containerd[1497]: 2024-12-13 02:06:34.393 [INFO][5320] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:06:34.407718 containerd[1497]: 2024-12-13 02:06:34.399 [WARNING][5320] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" HandleID="k8s-pod-network.30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--b9l5l-eth0" Dec 13 02:06:34.407718 containerd[1497]: 2024-12-13 02:06:34.399 [INFO][5320] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" HandleID="k8s-pod-network.30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--b9l5l-eth0" Dec 13 02:06:34.407718 containerd[1497]: 2024-12-13 02:06:34.400 [INFO][5320] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:06:34.407718 containerd[1497]: 2024-12-13 02:06:34.404 [INFO][5314] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" Dec 13 02:06:34.407718 containerd[1497]: time="2024-12-13T02:06:34.407315886Z" level=info msg="TearDown network for sandbox \"30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff\" successfully" Dec 13 02:06:34.407718 containerd[1497]: time="2024-12-13T02:06:34.407342366Z" level=info msg="StopPodSandbox for \"30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff\" returns successfully" Dec 13 02:06:34.411481 containerd[1497]: time="2024-12-13T02:06:34.409034602Z" level=info msg="RemovePodSandbox for \"30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff\"" Dec 13 02:06:34.411481 containerd[1497]: time="2024-12-13T02:06:34.409060601Z" level=info msg="Forcibly stopping sandbox \"30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff\"" Dec 13 02:06:34.505350 containerd[1497]: 2024-12-13 02:06:34.451 [WARNING][5338] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--b9l5l-eth0", GenerateName:"calico-apiserver-6fc5fc95d-", Namespace:"calico-apiserver", SelfLink:"", UID:"54fc91ff-1a5b-460c-b104-ee7484562222", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 5, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fc5fc95d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-3-45a43b40ef", ContainerID:"fb69b941cbcaf183990bd882b6ca67fd7ba07a5d08c605ef54d7d8f8dec07d62", Pod:"calico-apiserver-6fc5fc95d-b9l5l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidc46127ec77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:06:34.505350 containerd[1497]: 2024-12-13 02:06:34.451 [INFO][5338] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" Dec 13 02:06:34.505350 containerd[1497]: 2024-12-13 02:06:34.451 [INFO][5338] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" iface="eth0" netns="" Dec 13 02:06:34.505350 containerd[1497]: 2024-12-13 02:06:34.451 [INFO][5338] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" Dec 13 02:06:34.505350 containerd[1497]: 2024-12-13 02:06:34.451 [INFO][5338] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" Dec 13 02:06:34.505350 containerd[1497]: 2024-12-13 02:06:34.485 [INFO][5345] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" HandleID="k8s-pod-network.30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--b9l5l-eth0" Dec 13 02:06:34.505350 containerd[1497]: 2024-12-13 02:06:34.486 [INFO][5345] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:06:34.505350 containerd[1497]: 2024-12-13 02:06:34.486 [INFO][5345] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:06:34.505350 containerd[1497]: 2024-12-13 02:06:34.494 [WARNING][5345] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" HandleID="k8s-pod-network.30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--b9l5l-eth0" Dec 13 02:06:34.505350 containerd[1497]: 2024-12-13 02:06:34.495 [INFO][5345] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" HandleID="k8s-pod-network.30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" Workload="ci--4081--2--1--3--45a43b40ef-k8s-calico--apiserver--6fc5fc95d--b9l5l-eth0" Dec 13 02:06:34.505350 containerd[1497]: 2024-12-13 02:06:34.497 [INFO][5345] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:06:34.505350 containerd[1497]: 2024-12-13 02:06:34.501 [INFO][5338] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff" Dec 13 02:06:34.507097 containerd[1497]: time="2024-12-13T02:06:34.505396913Z" level=info msg="TearDown network for sandbox \"30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff\" successfully" Dec 13 02:06:34.511734 containerd[1497]: time="2024-12-13T02:06:34.511511411Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:06:34.511734 containerd[1497]: time="2024-12-13T02:06:34.511592395Z" level=info msg="RemovePodSandbox \"30dede1645a2f0befbd76ac141cf7eeb1e1d958a5f562de6573d68aedab843ff\" returns successfully" Dec 13 02:06:34.512519 containerd[1497]: time="2024-12-13T02:06:34.512279240Z" level=info msg="StopPodSandbox for \"aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc\"" Dec 13 02:06:34.627968 containerd[1497]: 2024-12-13 02:06:34.563 [WARNING][5363] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--hq4w8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"083af0c1-68d3-4b5d-89d6-41223201f0ed", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 5, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-3-45a43b40ef", ContainerID:"1744a34f01654bd025a8a68a4cd64861d1c4705852d964ad3cb9df4c28eab455", Pod:"coredns-76f75df574-hq4w8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali059ff5373c7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:06:34.627968 containerd[1497]: 2024-12-13 02:06:34.563 [INFO][5363] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" Dec 13 02:06:34.627968 containerd[1497]: 2024-12-13 02:06:34.563 [INFO][5363] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" iface="eth0" netns="" Dec 13 02:06:34.627968 containerd[1497]: 2024-12-13 02:06:34.563 [INFO][5363] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" Dec 13 02:06:34.627968 containerd[1497]: 2024-12-13 02:06:34.563 [INFO][5363] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" Dec 13 02:06:34.627968 containerd[1497]: 2024-12-13 02:06:34.596 [INFO][5369] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" HandleID="k8s-pod-network.aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" Workload="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--hq4w8-eth0" Dec 13 02:06:34.627968 containerd[1497]: 2024-12-13 02:06:34.597 [INFO][5369] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:06:34.627968 containerd[1497]: 2024-12-13 02:06:34.597 [INFO][5369] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:06:34.627968 containerd[1497]: 2024-12-13 02:06:34.611 [WARNING][5369] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" HandleID="k8s-pod-network.aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" Workload="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--hq4w8-eth0"
Dec 13 02:06:34.627968 containerd[1497]: 2024-12-13 02:06:34.612 [INFO][5369] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" HandleID="k8s-pod-network.aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" Workload="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--hq4w8-eth0"
Dec 13 02:06:34.627968 containerd[1497]: 2024-12-13 02:06:34.620 [INFO][5369] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 02:06:34.627968 containerd[1497]: 2024-12-13 02:06:34.624 [INFO][5363] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc"
Dec 13 02:06:34.629628 containerd[1497]: time="2024-12-13T02:06:34.628159224Z" level=info msg="TearDown network for sandbox \"aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc\" successfully"
Dec 13 02:06:34.629628 containerd[1497]: time="2024-12-13T02:06:34.628711162Z" level=info msg="StopPodSandbox for \"aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc\" returns successfully"
Dec 13 02:06:34.629628 containerd[1497]: time="2024-12-13T02:06:34.629365175Z" level=info msg="RemovePodSandbox for \"aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc\""
Dec 13 02:06:34.629628 containerd[1497]: time="2024-12-13T02:06:34.629388981Z" level=info msg="Forcibly stopping sandbox \"aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc\""
Dec 13 02:06:34.712789 containerd[1497]: 2024-12-13 02:06:34.673 [WARNING][5402] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--hq4w8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"083af0c1-68d3-4b5d-89d6-41223201f0ed", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 5, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-3-45a43b40ef", ContainerID:"1744a34f01654bd025a8a68a4cd64861d1c4705852d964ad3cb9df4c28eab455", Pod:"coredns-76f75df574-hq4w8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali059ff5373c7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 02:06:34.712789 containerd[1497]: 2024-12-13 02:06:34.673 [INFO][5402] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc"
Dec 13 02:06:34.712789 containerd[1497]: 2024-12-13 02:06:34.673 [INFO][5402] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" iface="eth0" netns=""
Dec 13 02:06:34.712789 containerd[1497]: 2024-12-13 02:06:34.673 [INFO][5402] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc"
Dec 13 02:06:34.712789 containerd[1497]: 2024-12-13 02:06:34.673 [INFO][5402] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc"
Dec 13 02:06:34.712789 containerd[1497]: 2024-12-13 02:06:34.697 [INFO][5411] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" HandleID="k8s-pod-network.aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" Workload="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--hq4w8-eth0"
Dec 13 02:06:34.712789 containerd[1497]: 2024-12-13 02:06:34.697 [INFO][5411] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 02:06:34.712789 containerd[1497]: 2024-12-13 02:06:34.697 [INFO][5411] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 02:06:34.712789 containerd[1497]: 2024-12-13 02:06:34.704 [WARNING][5411] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" HandleID="k8s-pod-network.aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" Workload="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--hq4w8-eth0"
Dec 13 02:06:34.712789 containerd[1497]: 2024-12-13 02:06:34.704 [INFO][5411] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" HandleID="k8s-pod-network.aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc" Workload="ci--4081--2--1--3--45a43b40ef-k8s-coredns--76f75df574--hq4w8-eth0"
Dec 13 02:06:34.712789 containerd[1497]: 2024-12-13 02:06:34.706 [INFO][5411] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 02:06:34.712789 containerd[1497]: 2024-12-13 02:06:34.708 [INFO][5402] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc"
Dec 13 02:06:34.715241 containerd[1497]: time="2024-12-13T02:06:34.712760791Z" level=info msg="TearDown network for sandbox \"aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc\" successfully"
Dec 13 02:06:34.719578 containerd[1497]: time="2024-12-13T02:06:34.719458106Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 02:06:34.719639 containerd[1497]: time="2024-12-13T02:06:34.719623089Z" level=info msg="RemovePodSandbox \"aec71d6546eb75bc7bb08591c756d8033fd61592a38fd6c2f4f0e890670151dc\" returns successfully"
Dec 13 02:08:13.400629 systemd[1]: Started sshd@7-49.13.63.199:22-120.55.192.3:48554.service - OpenSSH per-connection server daemon (120.55.192.3:48554).
Dec 13 02:08:14.697462 sshd[5619]: Connection closed by authenticating user root 120.55.192.3 port 48554 [preauth]
Dec 13 02:08:14.700603 systemd[1]: sshd@7-49.13.63.199:22-120.55.192.3:48554.service: Deactivated successfully.
Dec 13 02:09:26.986608 systemd[1]: run-containerd-runc-k8s.io-00745217febf2e638a94d5b1a28530309b235cc8e4342d5d636f86b0844d967a-runc.rN3eNK.mount: Deactivated successfully.
Dec 13 02:10:24.130601 systemd[1]: Started sshd@8-49.13.63.199:22-147.75.109.163:58846.service - OpenSSH per-connection server daemon (147.75.109.163:58846).
Dec 13 02:10:25.157251 sshd[5891]: Accepted publickey for core from 147.75.109.163 port 58846 ssh2: RSA SHA256:sCKVP2ZoT/a84yhHrxpuO7m4jAwnggg/oTrfebs5XY0
Dec 13 02:10:25.166935 sshd[5891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:10:25.182263 systemd-logind[1472]: New session 8 of user core.
Dec 13 02:10:25.191321 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 13 02:10:26.651826 sshd[5891]: pam_unix(sshd:session): session closed for user core
Dec 13 02:10:26.658558 systemd[1]: sshd@8-49.13.63.199:22-147.75.109.163:58846.service: Deactivated successfully.
Dec 13 02:10:26.662876 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 02:10:26.668423 systemd-logind[1472]: Session 8 logged out. Waiting for processes to exit.
Dec 13 02:10:26.671538 systemd-logind[1472]: Removed session 8.
Dec 13 02:10:31.839211 systemd[1]: Started sshd@9-49.13.63.199:22-147.75.109.163:41060.service - OpenSSH per-connection server daemon (147.75.109.163:41060).
Dec 13 02:10:32.860311 sshd[5925]: Accepted publickey for core from 147.75.109.163 port 41060 ssh2: RSA SHA256:sCKVP2ZoT/a84yhHrxpuO7m4jAwnggg/oTrfebs5XY0
Dec 13 02:10:32.865962 sshd[5925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:10:32.877310 systemd-logind[1472]: New session 9 of user core.
Dec 13 02:10:32.884329 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 02:10:33.851585 sshd[5925]: pam_unix(sshd:session): session closed for user core
Dec 13 02:10:33.860379 systemd[1]: sshd@9-49.13.63.199:22-147.75.109.163:41060.service: Deactivated successfully.
Dec 13 02:10:33.864749 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 02:10:33.867182 systemd-logind[1472]: Session 9 logged out. Waiting for processes to exit.
Dec 13 02:10:33.869423 systemd-logind[1472]: Removed session 9.
Dec 13 02:10:39.031667 systemd[1]: Started sshd@10-49.13.63.199:22-147.75.109.163:49396.service - OpenSSH per-connection server daemon (147.75.109.163:49396).
Dec 13 02:10:40.062249 sshd[5960]: Accepted publickey for core from 147.75.109.163 port 49396 ssh2: RSA SHA256:sCKVP2ZoT/a84yhHrxpuO7m4jAwnggg/oTrfebs5XY0
Dec 13 02:10:40.066982 sshd[5960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:10:40.078515 systemd-logind[1472]: New session 10 of user core.
Dec 13 02:10:40.083264 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 02:10:40.879278 sshd[5960]: pam_unix(sshd:session): session closed for user core
Dec 13 02:10:40.884932 systemd[1]: sshd@10-49.13.63.199:22-147.75.109.163:49396.service: Deactivated successfully.
Dec 13 02:10:40.887568 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 02:10:40.888494 systemd-logind[1472]: Session 10 logged out. Waiting for processes to exit.
Dec 13 02:10:40.889716 systemd-logind[1472]: Removed session 10.
Dec 13 02:10:41.065451 systemd[1]: Started sshd@11-49.13.63.199:22-147.75.109.163:49412.service - OpenSSH per-connection server daemon (147.75.109.163:49412).
Dec 13 02:10:42.081908 sshd[5974]: Accepted publickey for core from 147.75.109.163 port 49412 ssh2: RSA SHA256:sCKVP2ZoT/a84yhHrxpuO7m4jAwnggg/oTrfebs5XY0
Dec 13 02:10:42.084838 sshd[5974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:10:42.094907 systemd-logind[1472]: New session 11 of user core.
Dec 13 02:10:42.101254 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 02:10:42.911125 sshd[5974]: pam_unix(sshd:session): session closed for user core
Dec 13 02:10:42.921643 systemd[1]: sshd@11-49.13.63.199:22-147.75.109.163:49412.service: Deactivated successfully.
Dec 13 02:10:42.926210 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 02:10:42.928783 systemd-logind[1472]: Session 11 logged out. Waiting for processes to exit.
Dec 13 02:10:42.931384 systemd-logind[1472]: Removed session 11.
Dec 13 02:10:43.105295 systemd[1]: Started sshd@12-49.13.63.199:22-147.75.109.163:49428.service - OpenSSH per-connection server daemon (147.75.109.163:49428).
Dec 13 02:10:44.110252 sshd[5989]: Accepted publickey for core from 147.75.109.163 port 49428 ssh2: RSA SHA256:sCKVP2ZoT/a84yhHrxpuO7m4jAwnggg/oTrfebs5XY0
Dec 13 02:10:44.115326 sshd[5989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:10:44.124928 systemd-logind[1472]: New session 12 of user core.
Dec 13 02:10:44.134518 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 02:10:44.940229 sshd[5989]: pam_unix(sshd:session): session closed for user core
Dec 13 02:10:44.946236 systemd[1]: sshd@12-49.13.63.199:22-147.75.109.163:49428.service: Deactivated successfully.
Dec 13 02:10:44.951738 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 02:10:44.955257 systemd-logind[1472]: Session 12 logged out. Waiting for processes to exit.
Dec 13 02:10:44.957715 systemd-logind[1472]: Removed session 12.
Dec 13 02:10:50.127677 systemd[1]: Started sshd@13-49.13.63.199:22-147.75.109.163:36134.service - OpenSSH per-connection server daemon (147.75.109.163:36134).
Dec 13 02:10:51.142291 sshd[6004]: Accepted publickey for core from 147.75.109.163 port 36134 ssh2: RSA SHA256:sCKVP2ZoT/a84yhHrxpuO7m4jAwnggg/oTrfebs5XY0
Dec 13 02:10:51.147862 sshd[6004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:10:51.161258 systemd-logind[1472]: New session 13 of user core.
Dec 13 02:10:51.172254 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 02:10:51.974947 sshd[6004]: pam_unix(sshd:session): session closed for user core
Dec 13 02:10:51.982817 systemd[1]: sshd@13-49.13.63.199:22-147.75.109.163:36134.service: Deactivated successfully.
Dec 13 02:10:51.990891 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 02:10:51.993789 systemd-logind[1472]: Session 13 logged out. Waiting for processes to exit.
Dec 13 02:10:51.995580 systemd-logind[1472]: Removed session 13.
Dec 13 02:10:52.152632 systemd[1]: Started sshd@14-49.13.63.199:22-147.75.109.163:36146.service - OpenSSH per-connection server daemon (147.75.109.163:36146).
Dec 13 02:10:53.173449 sshd[6039]: Accepted publickey for core from 147.75.109.163 port 36146 ssh2: RSA SHA256:sCKVP2ZoT/a84yhHrxpuO7m4jAwnggg/oTrfebs5XY0
Dec 13 02:10:53.177309 sshd[6039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:10:53.188287 systemd-logind[1472]: New session 14 of user core.
Dec 13 02:10:53.194297 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 02:10:54.314625 sshd[6039]: pam_unix(sshd:session): session closed for user core
Dec 13 02:10:54.324588 systemd[1]: sshd@14-49.13.63.199:22-147.75.109.163:36146.service: Deactivated successfully.
Dec 13 02:10:54.330037 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 02:10:54.332595 systemd-logind[1472]: Session 14 logged out. Waiting for processes to exit.
Dec 13 02:10:54.335940 systemd-logind[1472]: Removed session 14.
Dec 13 02:10:54.492671 systemd[1]: Started sshd@15-49.13.63.199:22-147.75.109.163:36160.service - OpenSSH per-connection server daemon (147.75.109.163:36160).
Dec 13 02:10:55.527363 sshd[6049]: Accepted publickey for core from 147.75.109.163 port 36160 ssh2: RSA SHA256:sCKVP2ZoT/a84yhHrxpuO7m4jAwnggg/oTrfebs5XY0
Dec 13 02:10:55.533066 sshd[6049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:10:55.543614 systemd-logind[1472]: New session 15 of user core.
Dec 13 02:10:55.548481 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 02:10:58.507613 sshd[6049]: pam_unix(sshd:session): session closed for user core
Dec 13 02:10:58.514379 systemd[1]: sshd@15-49.13.63.199:22-147.75.109.163:36160.service: Deactivated successfully.
Dec 13 02:10:58.523470 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 02:10:58.529049 systemd-logind[1472]: Session 15 logged out. Waiting for processes to exit.
Dec 13 02:10:58.532351 systemd-logind[1472]: Removed session 15.
Dec 13 02:10:58.685209 systemd[1]: Started sshd@16-49.13.63.199:22-147.75.109.163:51124.service - OpenSSH per-connection server daemon (147.75.109.163:51124).
Dec 13 02:10:59.725495 sshd[6068]: Accepted publickey for core from 147.75.109.163 port 51124 ssh2: RSA SHA256:sCKVP2ZoT/a84yhHrxpuO7m4jAwnggg/oTrfebs5XY0
Dec 13 02:10:59.729131 sshd[6068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:10:59.739845 systemd-logind[1472]: New session 16 of user core.
Dec 13 02:10:59.747302 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 02:11:00.872461 sshd[6068]: pam_unix(sshd:session): session closed for user core
Dec 13 02:11:00.881129 systemd[1]: sshd@16-49.13.63.199:22-147.75.109.163:51124.service: Deactivated successfully.
Dec 13 02:11:00.885754 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 02:11:00.887652 systemd-logind[1472]: Session 16 logged out. Waiting for processes to exit.
Dec 13 02:11:00.890251 systemd-logind[1472]: Removed session 16.
Dec 13 02:11:01.046417 systemd[1]: Started sshd@17-49.13.63.199:22-147.75.109.163:51128.service - OpenSSH per-connection server daemon (147.75.109.163:51128).
Dec 13 02:11:02.045791 sshd[6079]: Accepted publickey for core from 147.75.109.163 port 51128 ssh2: RSA SHA256:sCKVP2ZoT/a84yhHrxpuO7m4jAwnggg/oTrfebs5XY0
Dec 13 02:11:02.049877 sshd[6079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:11:02.059738 systemd-logind[1472]: New session 17 of user core.
Dec 13 02:11:02.066295 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 02:11:02.857695 sshd[6079]: pam_unix(sshd:session): session closed for user core
Dec 13 02:11:02.867450 systemd[1]: sshd@17-49.13.63.199:22-147.75.109.163:51128.service: Deactivated successfully.
Dec 13 02:11:02.872986 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 02:11:02.874797 systemd-logind[1472]: Session 17 logged out. Waiting for processes to exit.
Dec 13 02:11:02.876913 systemd-logind[1472]: Removed session 17.
Dec 13 02:11:08.033510 systemd[1]: Started sshd@18-49.13.63.199:22-147.75.109.163:45178.service - OpenSSH per-connection server daemon (147.75.109.163:45178).
Dec 13 02:11:09.038474 sshd[6131]: Accepted publickey for core from 147.75.109.163 port 45178 ssh2: RSA SHA256:sCKVP2ZoT/a84yhHrxpuO7m4jAwnggg/oTrfebs5XY0
Dec 13 02:11:09.042999 sshd[6131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:11:09.054272 systemd-logind[1472]: New session 18 of user core.
Dec 13 02:11:09.060535 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 02:11:09.872312 sshd[6131]: pam_unix(sshd:session): session closed for user core
Dec 13 02:11:09.882779 systemd[1]: sshd@18-49.13.63.199:22-147.75.109.163:45178.service: Deactivated successfully.
Dec 13 02:11:09.891560 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 02:11:09.893470 systemd-logind[1472]: Session 18 logged out. Waiting for processes to exit.
Dec 13 02:11:09.895977 systemd-logind[1472]: Removed session 18.
Dec 13 02:11:15.046378 systemd[1]: Started sshd@19-49.13.63.199:22-147.75.109.163:45186.service - OpenSSH per-connection server daemon (147.75.109.163:45186).
Dec 13 02:11:16.057542 sshd[6144]: Accepted publickey for core from 147.75.109.163 port 45186 ssh2: RSA SHA256:sCKVP2ZoT/a84yhHrxpuO7m4jAwnggg/oTrfebs5XY0
Dec 13 02:11:16.062731 sshd[6144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:11:16.072737 systemd-logind[1472]: New session 19 of user core.
Dec 13 02:11:16.078416 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 02:11:16.895056 sshd[6144]: pam_unix(sshd:session): session closed for user core
Dec 13 02:11:16.903707 systemd[1]: sshd@19-49.13.63.199:22-147.75.109.163:45186.service: Deactivated successfully.
Dec 13 02:11:16.910402 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 02:11:16.912776 systemd-logind[1472]: Session 19 logged out. Waiting for processes to exit.
Dec 13 02:11:16.915649 systemd-logind[1472]: Removed session 19.
Dec 13 02:11:33.410958 systemd[1]: cri-containerd-deaa91a1e5ef0150a01c7578948634fda360f61a8879a59e9a5333cfbd5c5df1.scope: Deactivated successfully.
Dec 13 02:11:33.411638 systemd[1]: cri-containerd-deaa91a1e5ef0150a01c7578948634fda360f61a8879a59e9a5333cfbd5c5df1.scope: Consumed 5.911s CPU time.
Dec 13 02:11:33.541965 systemd[1]: cri-containerd-b33824d79dffb53c546ca29bb2f135069fcfbf2fd718199ec903f1ca4980f260.scope: Deactivated successfully.
Dec 13 02:11:33.542553 systemd[1]: cri-containerd-b33824d79dffb53c546ca29bb2f135069fcfbf2fd718199ec903f1ca4980f260.scope: Consumed 9.020s CPU time, 28.6M memory peak, 0B memory swap peak.
Dec 13 02:11:33.597333 systemd[1]: cri-containerd-451e500abfd3927a49bc4ffa99fbda25cada86a5fcefbd625999611bb101d3bb.scope: Deactivated successfully.
Dec 13 02:11:33.597753 systemd[1]: cri-containerd-451e500abfd3927a49bc4ffa99fbda25cada86a5fcefbd625999611bb101d3bb.scope: Consumed 2.189s CPU time, 14.7M memory peak, 0B memory swap peak.
Dec 13 02:11:33.620379 kubelet[2808]: E1213 02:11:33.611758 2808 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:56398->10.0.0.2:2379: read: connection timed out"
Dec 13 02:11:33.664948 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b33824d79dffb53c546ca29bb2f135069fcfbf2fd718199ec903f1ca4980f260-rootfs.mount: Deactivated successfully.
Dec 13 02:11:33.676555 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-451e500abfd3927a49bc4ffa99fbda25cada86a5fcefbd625999611bb101d3bb-rootfs.mount: Deactivated successfully.
Dec 13 02:11:33.680541 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-deaa91a1e5ef0150a01c7578948634fda360f61a8879a59e9a5333cfbd5c5df1-rootfs.mount: Deactivated successfully.
Dec 13 02:11:33.683187 containerd[1497]: time="2024-12-13T02:11:33.657648915Z" level=info msg="shim disconnected" id=b33824d79dffb53c546ca29bb2f135069fcfbf2fd718199ec903f1ca4980f260 namespace=k8s.io
Dec 13 02:11:33.685784 containerd[1497]: time="2024-12-13T02:11:33.662631880Z" level=info msg="shim disconnected" id=451e500abfd3927a49bc4ffa99fbda25cada86a5fcefbd625999611bb101d3bb namespace=k8s.io
Dec 13 02:11:33.688322 containerd[1497]: time="2024-12-13T02:11:33.688192674Z" level=warning msg="cleaning up after shim disconnected" id=451e500abfd3927a49bc4ffa99fbda25cada86a5fcefbd625999611bb101d3bb namespace=k8s.io
Dec 13 02:11:33.688322 containerd[1497]: time="2024-12-13T02:11:33.688216038Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 02:11:33.689501 containerd[1497]: time="2024-12-13T02:11:33.689368647Z" level=warning msg="cleaning up after shim disconnected" id=b33824d79dffb53c546ca29bb2f135069fcfbf2fd718199ec903f1ca4980f260 namespace=k8s.io
Dec 13 02:11:33.689501 containerd[1497]: time="2024-12-13T02:11:33.689386962Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 02:11:33.699941 containerd[1497]: time="2024-12-13T02:11:33.673933779Z" level=info msg="shim disconnected" id=deaa91a1e5ef0150a01c7578948634fda360f61a8879a59e9a5333cfbd5c5df1 namespace=k8s.io
Dec 13 02:11:33.699941 containerd[1497]: time="2024-12-13T02:11:33.699104436Z" level=warning msg="cleaning up after shim disconnected" id=deaa91a1e5ef0150a01c7578948634fda360f61a8879a59e9a5333cfbd5c5df1 namespace=k8s.io
Dec 13 02:11:33.699941 containerd[1497]: time="2024-12-13T02:11:33.699112581Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 02:11:33.802850 containerd[1497]: time="2024-12-13T02:11:33.802050385Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:11:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 02:11:34.713617 kubelet[2808]: I1213 02:11:34.713567 2808 scope.go:117] "RemoveContainer" containerID="deaa91a1e5ef0150a01c7578948634fda360f61a8879a59e9a5333cfbd5c5df1"
Dec 13 02:11:34.717624 kubelet[2808]: I1213 02:11:34.717511 2808 scope.go:117] "RemoveContainer" containerID="451e500abfd3927a49bc4ffa99fbda25cada86a5fcefbd625999611bb101d3bb"
Dec 13 02:11:34.718934 kubelet[2808]: I1213 02:11:34.718905 2808 scope.go:117] "RemoveContainer" containerID="b33824d79dffb53c546ca29bb2f135069fcfbf2fd718199ec903f1ca4980f260"
Dec 13 02:11:34.798235 containerd[1497]: time="2024-12-13T02:11:34.798046846Z" level=info msg="CreateContainer within sandbox \"c38d7c2bba2e7eeb132fbbf8908aaf76530ad5f6b70be3098838af7744e5fa2f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 13 02:11:34.798235 containerd[1497]: time="2024-12-13T02:11:34.798115523Z" level=info msg="CreateContainer within sandbox \"793d4c20f2751a07eb18ba333c47a59d7842e2cd3bfa338973974ce514368aa5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 13 02:11:34.808233 containerd[1497]: time="2024-12-13T02:11:34.807815236Z" level=info msg="CreateContainer within sandbox \"8eaa339d4163036802ba2207ac4f985589964539f50fab2e92387dc6e39635ff\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Dec 13 02:11:34.861863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount874132387.mount: Deactivated successfully.
Dec 13 02:11:34.876106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2741322978.mount: Deactivated successfully.
Dec 13 02:11:34.884298 containerd[1497]: time="2024-12-13T02:11:34.883874504Z" level=info msg="CreateContainer within sandbox \"c38d7c2bba2e7eeb132fbbf8908aaf76530ad5f6b70be3098838af7744e5fa2f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"7bbf1c514faa6bf131a1614c00ee6155e2c99b6921cb2e595012b9ddce0a8b6a\""
Dec 13 02:11:34.884697 containerd[1497]: time="2024-12-13T02:11:34.884581082Z" level=info msg="StartContainer for \"7bbf1c514faa6bf131a1614c00ee6155e2c99b6921cb2e595012b9ddce0a8b6a\""
Dec 13 02:11:34.888251 containerd[1497]: time="2024-12-13T02:11:34.888216793Z" level=info msg="CreateContainer within sandbox \"8eaa339d4163036802ba2207ac4f985589964539f50fab2e92387dc6e39635ff\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"6d79ce6255ee0805f828dc2d589c50d7d51aeba7774b91ccb43877616ae5e520\""
Dec 13 02:11:34.891220 containerd[1497]: time="2024-12-13T02:11:34.890546499Z" level=info msg="StartContainer for \"6d79ce6255ee0805f828dc2d589c50d7d51aeba7774b91ccb43877616ae5e520\""
Dec 13 02:11:34.896306 containerd[1497]: time="2024-12-13T02:11:34.896277128Z" level=info msg="CreateContainer within sandbox \"793d4c20f2751a07eb18ba333c47a59d7842e2cd3bfa338973974ce514368aa5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d563fe47a8ae5d5fae7a89c866eb1d66a824ea93a263e8b01085102964c77c65\""
Dec 13 02:11:34.896929 containerd[1497]: time="2024-12-13T02:11:34.896892876Z" level=info msg="StartContainer for \"d563fe47a8ae5d5fae7a89c866eb1d66a824ea93a263e8b01085102964c77c65\""
Dec 13 02:11:34.941352 systemd[1]: Started cri-containerd-7bbf1c514faa6bf131a1614c00ee6155e2c99b6921cb2e595012b9ddce0a8b6a.scope - libcontainer container 7bbf1c514faa6bf131a1614c00ee6155e2c99b6921cb2e595012b9ddce0a8b6a.
Dec 13 02:11:34.957247 systemd[1]: Started cri-containerd-d563fe47a8ae5d5fae7a89c866eb1d66a824ea93a263e8b01085102964c77c65.scope - libcontainer container d563fe47a8ae5d5fae7a89c866eb1d66a824ea93a263e8b01085102964c77c65.
Dec 13 02:11:34.969438 systemd[1]: Started cri-containerd-6d79ce6255ee0805f828dc2d589c50d7d51aeba7774b91ccb43877616ae5e520.scope - libcontainer container 6d79ce6255ee0805f828dc2d589c50d7d51aeba7774b91ccb43877616ae5e520.
Dec 13 02:11:35.036654 containerd[1497]: time="2024-12-13T02:11:35.036554750Z" level=info msg="StartContainer for \"d563fe47a8ae5d5fae7a89c866eb1d66a824ea93a263e8b01085102964c77c65\" returns successfully"
Dec 13 02:11:35.050739 containerd[1497]: time="2024-12-13T02:11:35.050633875Z" level=info msg="StartContainer for \"7bbf1c514faa6bf131a1614c00ee6155e2c99b6921cb2e595012b9ddce0a8b6a\" returns successfully"
Dec 13 02:11:35.063140 containerd[1497]: time="2024-12-13T02:11:35.063080675Z" level=info msg="StartContainer for \"6d79ce6255ee0805f828dc2d589c50d7d51aeba7774b91ccb43877616ae5e520\" returns successfully"
Dec 13 02:11:38.611204 kubelet[2808]: E1213 02:11:38.611113 2808 event.go:346] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:56216->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-2-1-3-45a43b40ef.18109aa48b3e9534 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-2-1-3-45a43b40ef,UID:1ad593c967432868037640a67945c522,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-3-45a43b40ef,},FirstTimestamp:2024-12-13 02:11:28.03908946 +0000 UTC m=+355.147971247,LastTimestamp:2024-12-13 02:11:28.03908946 +0000 UTC m=+355.147971247,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-3-45a43b40ef,}"