May 8 00:38:55.927246 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed May 7 22:54:21 -00 2025
May 8 00:38:55.927271 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90
May 8 00:38:55.927284 kernel: BIOS-provided physical RAM map:
May 8 00:38:55.927290 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 8 00:38:55.927296 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 8 00:38:55.927303 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 8 00:38:55.927310 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 8 00:38:55.927316 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 8 00:38:55.927323 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
May 8 00:38:55.927329 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
May 8 00:38:55.927339 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
May 8 00:38:55.927345 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
May 8 00:38:55.927354 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
May 8 00:38:55.927361 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
May 8 00:38:55.927372 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
May 8 00:38:55.927379 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 8 00:38:55.927390 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
May 8 00:38:55.927397 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
May 8 00:38:55.927403 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 8 00:38:55.927410 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 8 00:38:55.927417 kernel: NX (Execute Disable) protection: active
May 8 00:38:55.927424 kernel: APIC: Static calls initialized
May 8 00:38:55.927430 kernel: efi: EFI v2.7 by EDK II
May 8 00:38:55.927437 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
May 8 00:38:55.927444 kernel: SMBIOS 2.8 present.
May 8 00:38:55.927451 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
May 8 00:38:55.927457 kernel: Hypervisor detected: KVM
May 8 00:38:55.927468 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 8 00:38:55.927474 kernel: kvm-clock: using sched offset of 4947518861 cycles
May 8 00:38:55.927481 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 8 00:38:55.927489 kernel: tsc: Detected 2794.748 MHz processor
May 8 00:38:55.927496 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 8 00:38:55.927503 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 8 00:38:55.927510 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
May 8 00:38:55.927518 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 8 00:38:55.927525 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 8 00:38:55.927535 kernel: Using GB pages for direct mapping
May 8 00:38:55.927542 kernel: Secure boot disabled
May 8 00:38:55.927548 kernel: ACPI: Early table checksum verification disabled
May 8 00:38:55.927555 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 8 00:38:55.927567 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 8 00:38:55.927575 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:38:55.927582 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:38:55.927592 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 8 00:38:55.927600 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:38:55.927610 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:38:55.927617 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:38:55.927625 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:38:55.927632 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 8 00:38:55.927639 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 8 00:38:55.927650 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 8 00:38:55.927657 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 8 00:38:55.927665 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 8 00:38:55.927672 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 8 00:38:55.927679 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 8 00:38:55.927686 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 8 00:38:55.927693 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 8 00:38:55.927700 kernel: No NUMA configuration found
May 8 00:38:55.927710 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
May 8 00:38:55.927722 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
May 8 00:38:55.927729 kernel: Zone ranges:
May 8 00:38:55.927736 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 8 00:38:55.927744 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
May 8 00:38:55.927751 kernel: Normal empty
May 8 00:38:55.927758 kernel: Movable zone start for each node
May 8 00:38:55.927765 kernel: Early memory node ranges
May 8 00:38:55.927773 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 8 00:38:55.927780 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 8 00:38:55.927787 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 8 00:38:55.927797 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
May 8 00:38:55.927805 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
May 8 00:38:55.927812 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
May 8 00:38:55.927821 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
May 8 00:38:55.927829 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 8 00:38:55.927836 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 8 00:38:55.927843 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 8 00:38:55.927850 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 8 00:38:55.927857 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
May 8 00:38:55.927868 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
May 8 00:38:55.927876 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
May 8 00:38:55.927883 kernel: ACPI: PM-Timer IO Port: 0x608
May 8 00:38:55.927890 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 8 00:38:55.927897 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 8 00:38:55.927904 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 8 00:38:55.927912 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 8 00:38:55.927919 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 8 00:38:55.927926 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 8 00:38:55.927937 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 8 00:38:55.927944 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 8 00:38:55.927951 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 8 00:38:55.927958 kernel: TSC deadline timer available
May 8 00:38:55.927965 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 8 00:38:55.927973 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 8 00:38:55.927980 kernel: kvm-guest: KVM setup pv remote TLB flush
May 8 00:38:55.927987 kernel: kvm-guest: setup PV sched yield
May 8 00:38:55.927994 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
May 8 00:38:55.928005 kernel: Booting paravirtualized kernel on KVM
May 8 00:38:55.928012 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 8 00:38:55.928020 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 8 00:38:55.928027 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
May 8 00:38:55.928034 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
May 8 00:38:55.928041 kernel: pcpu-alloc: [0] 0 1 2 3
May 8 00:38:55.928061 kernel: kvm-guest: PV spinlocks enabled
May 8 00:38:55.928068 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 8 00:38:55.928077 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90
May 8 00:38:55.928091 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 8 00:38:55.928098 kernel: random: crng init done
May 8 00:38:55.928105 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 8 00:38:55.928113 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 00:38:55.928120 kernel: Fallback order for Node 0: 0
May 8 00:38:55.928127 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
May 8 00:38:55.928135 kernel: Policy zone: DMA32
May 8 00:38:55.928142 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 8 00:38:55.928153 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42856K init, 2336K bss, 166140K reserved, 0K cma-reserved)
May 8 00:38:55.928161 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 8 00:38:55.928168 kernel: ftrace: allocating 37944 entries in 149 pages
May 8 00:38:55.928181 kernel: ftrace: allocated 149 pages with 4 groups
May 8 00:38:55.928189 kernel: Dynamic Preempt: voluntary
May 8 00:38:55.928207 kernel: rcu: Preemptible hierarchical RCU implementation.
May 8 00:38:55.928219 kernel: rcu: RCU event tracing is enabled.
May 8 00:38:55.928227 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 8 00:38:55.928234 kernel: Trampoline variant of Tasks RCU enabled.
May 8 00:38:55.928242 kernel: Rude variant of Tasks RCU enabled.
May 8 00:38:55.928250 kernel: Tracing variant of Tasks RCU enabled.
May 8 00:38:55.928257 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 8 00:38:55.928268 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 8 00:38:55.928276 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 8 00:38:55.928286 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 8 00:38:55.928294 kernel: Console: colour dummy device 80x25
May 8 00:38:55.928301 kernel: printk: console [ttyS0] enabled
May 8 00:38:55.928312 kernel: ACPI: Core revision 20230628
May 8 00:38:55.928320 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 8 00:38:55.928328 kernel: APIC: Switch to symmetric I/O mode setup
May 8 00:38:55.928337 kernel: x2apic enabled
May 8 00:38:55.928346 kernel: APIC: Switched APIC routing to: physical x2apic
May 8 00:38:55.928354 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 8 00:38:55.928363 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 8 00:38:55.928371 kernel: kvm-guest: setup PV IPIs
May 8 00:38:55.928379 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 8 00:38:55.928390 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 8 00:38:55.928398 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 8 00:38:55.928405 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 8 00:38:55.928413 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 8 00:38:55.928420 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 8 00:38:55.928428 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 8 00:38:55.928436 kernel: Spectre V2 : Mitigation: Retpolines
May 8 00:38:55.928443 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 8 00:38:55.928451 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 8 00:38:55.928462 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 8 00:38:55.928470 kernel: RETBleed: Mitigation: untrained return thunk
May 8 00:38:55.928478 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 8 00:38:55.928486 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 8 00:38:55.928502 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 8 00:38:55.928515 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 8 00:38:55.928530 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 8 00:38:55.928538 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 8 00:38:55.928551 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 8 00:38:55.928558 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 8 00:38:55.928566 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 8 00:38:55.928574 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 8 00:38:55.928581 kernel: Freeing SMP alternatives memory: 32K
May 8 00:38:55.928589 kernel: pid_max: default: 32768 minimum: 301
May 8 00:38:55.928596 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 8 00:38:55.928604 kernel: landlock: Up and running.
May 8 00:38:55.928612 kernel: SELinux: Initializing.
May 8 00:38:55.928623 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:38:55.928630 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:38:55.928638 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 8 00:38:55.928646 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 00:38:55.928653 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 00:38:55.928661 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 00:38:55.928669 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 8 00:38:55.928676 kernel: ... version: 0
May 8 00:38:55.928684 kernel: ... bit width: 48
May 8 00:38:55.928695 kernel: ... generic registers: 6
May 8 00:38:55.928703 kernel: ... value mask: 0000ffffffffffff
May 8 00:38:55.928710 kernel: ... max period: 00007fffffffffff
May 8 00:38:55.928718 kernel: ... fixed-purpose events: 0
May 8 00:38:55.928725 kernel: ... event mask: 000000000000003f
May 8 00:38:55.928733 kernel: signal: max sigframe size: 1776
May 8 00:38:55.928740 kernel: rcu: Hierarchical SRCU implementation.
May 8 00:38:55.928748 kernel: rcu: Max phase no-delay instances is 400.
May 8 00:38:55.928756 kernel: smp: Bringing up secondary CPUs ...
May 8 00:38:55.928767 kernel: smpboot: x86: Booting SMP configuration:
May 8 00:38:55.928774 kernel: .... node #0, CPUs: #1 #2 #3
May 8 00:38:55.928782 kernel: smp: Brought up 1 node, 4 CPUs
May 8 00:38:55.928789 kernel: smpboot: Max logical packages: 1
May 8 00:38:55.928797 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 8 00:38:55.928805 kernel: devtmpfs: initialized
May 8 00:38:55.928812 kernel: x86/mm: Memory block size: 128MB
May 8 00:38:55.928820 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 8 00:38:55.928828 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 8 00:38:55.928838 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
May 8 00:38:55.928846 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 8 00:38:55.928854 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 8 00:38:55.928862 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 8 00:38:55.928869 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 8 00:38:55.928877 kernel: pinctrl core: initialized pinctrl subsystem
May 8 00:38:55.928884 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 8 00:38:55.928892 kernel: audit: initializing netlink subsys (disabled)
May 8 00:38:55.928900 kernel: audit: type=2000 audit(1746664735.369:1): state=initialized audit_enabled=0 res=1
May 8 00:38:55.928910 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 8 00:38:55.928918 kernel: thermal_sys: Registered thermal governor 'user_space'
May 8 00:38:55.928925 kernel: cpuidle: using governor menu
May 8 00:38:55.928933 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 8 00:38:55.928940 kernel: dca service started, version 1.12.1
May 8 00:38:55.928948 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 8 00:38:55.928956 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 8 00:38:55.928963 kernel: PCI: Using configuration type 1 for base access
May 8 00:38:55.928971 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 8 00:38:55.928982 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 8 00:38:55.928990 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 8 00:38:55.928997 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 8 00:38:55.929005 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 8 00:38:55.929012 kernel: ACPI: Added _OSI(Module Device)
May 8 00:38:55.929020 kernel: ACPI: Added _OSI(Processor Device)
May 8 00:38:55.929027 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 8 00:38:55.929035 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 8 00:38:55.929043 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 8 00:38:55.929193 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 8 00:38:55.929201 kernel: ACPI: Interpreter enabled
May 8 00:38:55.929209 kernel: ACPI: PM: (supports S0 S3 S5)
May 8 00:38:55.929216 kernel: ACPI: Using IOAPIC for interrupt routing
May 8 00:38:55.929224 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 8 00:38:55.929232 kernel: PCI: Using E820 reservations for host bridge windows
May 8 00:38:55.929239 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 8 00:38:55.929247 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 8 00:38:55.929466 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:38:55.929607 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 8 00:38:55.929735 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 8 00:38:55.929745 kernel: PCI host bridge to bus 0000:00
May 8 00:38:55.929891 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 8 00:38:55.930010 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 8 00:38:55.930145 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 8 00:38:55.930280 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 8 00:38:55.930397 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 8 00:38:55.930514 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
May 8 00:38:55.930629 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 8 00:38:55.930815 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 8 00:38:55.930959 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 8 00:38:55.931112 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
May 8 00:38:55.931255 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
May 8 00:38:55.931381 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
May 8 00:38:55.931507 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
May 8 00:38:55.931633 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 8 00:38:55.931789 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 8 00:38:55.931919 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
May 8 00:38:55.932069 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
May 8 00:38:55.932225 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
May 8 00:38:55.932405 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 8 00:38:55.932538 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
May 8 00:38:55.932665 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
May 8 00:38:55.932792 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
May 8 00:38:55.932941 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 8 00:38:55.933100 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
May 8 00:38:55.933238 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
May 8 00:38:55.933367 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
May 8 00:38:55.933495 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
May 8 00:38:55.933649 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 8 00:38:55.933778 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 8 00:38:55.933923 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 8 00:38:55.934081 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
May 8 00:38:55.934220 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
May 8 00:38:55.934373 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 8 00:38:55.934505 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
May 8 00:38:55.934515 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 8 00:38:55.934523 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 8 00:38:55.934532 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 8 00:38:55.934545 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 8 00:38:55.934552 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 8 00:38:55.934560 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 8 00:38:55.934568 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 8 00:38:55.934576 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 8 00:38:55.934583 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 8 00:38:55.934591 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 8 00:38:55.934599 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 8 00:38:55.934607 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 8 00:38:55.934618 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 8 00:38:55.934626 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 8 00:38:55.934634 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 8 00:38:55.934642 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 8 00:38:55.934649 kernel: iommu: Default domain type: Translated
May 8 00:38:55.934657 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 8 00:38:55.934665 kernel: efivars: Registered efivars operations
May 8 00:38:55.934673 kernel: PCI: Using ACPI for IRQ routing
May 8 00:38:55.934681 kernel: PCI: pci_cache_line_size set to 64 bytes
May 8 00:38:55.934692 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 8 00:38:55.934699 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
May 8 00:38:55.934707 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
May 8 00:38:55.934715 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
May 8 00:38:55.934842 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 8 00:38:55.934969 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 8 00:38:55.935163 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 8 00:38:55.935174 kernel: vgaarb: loaded
May 8 00:38:55.935189 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 8 00:38:55.935203 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 8 00:38:55.935211 kernel: clocksource: Switched to clocksource kvm-clock
May 8 00:38:55.935219 kernel: VFS: Disk quotas dquot_6.6.0
May 8 00:38:55.935227 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 8 00:38:55.935234 kernel: pnp: PnP ACPI init
May 8 00:38:55.935385 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 8 00:38:55.935397 kernel: pnp: PnP ACPI: found 6 devices
May 8 00:38:55.935405 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 8 00:38:55.935417 kernel: NET: Registered PF_INET protocol family
May 8 00:38:55.935425 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 8 00:38:55.935433 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 8 00:38:55.935441 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 8 00:38:55.935449 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 8 00:38:55.935457 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 8 00:38:55.935465 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 8 00:38:55.935473 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:38:55.935480 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:38:55.935492 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 8 00:38:55.935500 kernel: NET: Registered PF_XDP protocol family
May 8 00:38:55.935625 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
May 8 00:38:55.935751 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
May 8 00:38:55.935867 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 8 00:38:55.935981 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 8 00:38:55.936111 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 8 00:38:55.936235 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 8 00:38:55.936358 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 8 00:38:55.936474 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
May 8 00:38:55.936484 kernel: PCI: CLS 0 bytes, default 64
May 8 00:38:55.936492 kernel: Initialise system trusted keyrings
May 8 00:38:55.936500 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 8 00:38:55.936508 kernel: Key type asymmetric registered
May 8 00:38:55.936516 kernel: Asymmetric key parser 'x509' registered
May 8 00:38:55.936524 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 8 00:38:55.936531 kernel: io scheduler mq-deadline registered
May 8 00:38:55.936544 kernel: io scheduler kyber registered
May 8 00:38:55.936552 kernel: io scheduler bfq registered
May 8 00:38:55.936559 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 8 00:38:55.936568 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 8 00:38:55.936576 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 8 00:38:55.936584 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 8 00:38:55.936591 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 00:38:55.936599 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 8 00:38:55.936607 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 8 00:38:55.936618 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 8 00:38:55.936626 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 8 00:38:55.936800 kernel: rtc_cmos 00:04: RTC can wake from S4
May 8 00:38:55.936812 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 8 00:38:55.936933 kernel: rtc_cmos 00:04: registered as rtc0
May 8 00:38:55.937104 kernel: rtc_cmos 00:04: setting system clock to 2025-05-08T00:38:55 UTC (1746664735)
May 8 00:38:55.937237 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 8 00:38:55.937253 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 8 00:38:55.937261 kernel: efifb: probing for efifb
May 8 00:38:55.937269 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
May 8 00:38:55.937277 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
May 8 00:38:55.937284 kernel: efifb: scrolling: redraw
May 8 00:38:55.937292 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
May 8 00:38:55.937300 kernel: Console: switching to colour frame buffer device 100x37
May 8 00:38:55.937329 kernel: fb0: EFI VGA frame buffer device
May 8 00:38:55.937340 kernel: pstore: Using crash dump compression: deflate
May 8 00:38:55.937348 kernel: pstore: Registered efi_pstore as persistent store backend
May 8 00:38:55.937359 kernel: NET: Registered PF_INET6 protocol family
May 8 00:38:55.937367 kernel: Segment Routing with IPv6
May 8 00:38:55.937375 kernel: In-situ OAM (IOAM) with IPv6
May 8 00:38:55.937383 kernel: NET: Registered PF_PACKET protocol family
May 8 00:38:55.937391 kernel: Key type dns_resolver registered
May 8 00:38:55.937399 kernel: IPI shorthand broadcast: enabled
May 8 00:38:55.937407 kernel: sched_clock: Marking stable (909003020, 117435890)->(1051823580, -25384670)
May 8 00:38:55.937415 kernel: registered taskstats version 1
May 8 00:38:55.937423 kernel: Loading compiled-in X.509 certificates
May 8 00:38:55.937435 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 75e4e434c57439d3f2eaf7797bbbcdd698dafd0e'
May 8 00:38:55.937443 kernel: Key type .fscrypt registered
May 8 00:38:55.937451 kernel: Key type fscrypt-provisioning registered
May 8 00:38:55.937460 kernel: ima: No TPM chip found, activating TPM-bypass!
May 8 00:38:55.937468 kernel: ima: Allocated hash algorithm: sha1
May 8 00:38:55.937476 kernel: ima: No architecture policies found
May 8 00:38:55.937484 kernel: clk: Disabling unused clocks
May 8 00:38:55.937492 kernel: Freeing unused kernel image (initmem) memory: 42856K
May 8 00:38:55.937503 kernel: Write protecting the kernel read-only data: 36864k
May 8 00:38:55.937511 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 8 00:38:55.937519 kernel: Run /init as init process
May 8 00:38:55.937527 kernel: with arguments:
May 8 00:38:55.937535 kernel: /init
May 8 00:38:55.937543 kernel: with environment:
May 8 00:38:55.937551 kernel: HOME=/
May 8 00:38:55.937559 kernel: TERM=linux
May 8 00:38:55.937568 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 8 00:38:55.937581 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 8 00:38:55.937591 systemd[1]: Detected virtualization kvm.
May 8 00:38:55.937600 systemd[1]: Detected architecture x86-64.
May 8 00:38:55.937608 systemd[1]: Running in initrd.
May 8 00:38:55.937623 systemd[1]: No hostname configured, using default hostname.
May 8 00:38:55.937631 systemd[1]: Hostname set to .
May 8 00:38:55.937640 systemd[1]: Initializing machine ID from VM UUID.
May 8 00:38:55.937648 systemd[1]: Queued start job for default target initrd.target.
May 8 00:38:55.937657 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:38:55.937665 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:38:55.937674 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 8 00:38:55.937683 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:38:55.937695 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 8 00:38:55.937704 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 8 00:38:55.937714 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 8 00:38:55.937723 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 8 00:38:55.937731 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:38:55.937740 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:38:55.937749 systemd[1]: Reached target paths.target - Path Units.
May 8 00:38:55.937761 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:38:55.937769 systemd[1]: Reached target swap.target - Swaps.
May 8 00:38:55.937778 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:38:55.937786 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:38:55.937795 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:38:55.937803 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 8 00:38:55.937812 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 8 00:38:55.937820 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:38:55.937829 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:38:55.937840 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:38:55.937849 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:38:55.937858 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 8 00:38:55.937866 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:38:55.937875 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 8 00:38:55.937883 systemd[1]: Starting systemd-fsck-usr.service...
May 8 00:38:55.937892 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:38:55.937900 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:38:55.937912 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:38:55.937921 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 8 00:38:55.937929 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:38:55.937938 systemd[1]: Finished systemd-fsck-usr.service.
May 8 00:38:55.937947 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 00:38:55.937978 systemd-journald[189]: Collecting audit messages is disabled.
May 8 00:38:55.937997 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:38:55.938006 systemd-journald[189]: Journal started
May 8 00:38:55.938027 systemd-journald[189]: Runtime Journal (/run/log/journal/4b5eaa4bda3a48fda08f49deba45bde4) is 6.0M, max 48.3M, 42.2M free.
May 8 00:38:55.941181 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:38:55.943083 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:38:55.944284 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:38:55.948246 systemd-modules-load[193]: Inserted module 'overlay'
May 8 00:38:55.949874 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:38:55.952575 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:38:55.955338 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:38:55.967674 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:38:55.977540 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:38:55.983070 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 00:38:55.985547 systemd-modules-load[193]: Inserted module 'br_netfilter'
May 8 00:38:55.986651 kernel: Bridge firewalling registered
May 8 00:38:55.988352 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 8 00:38:55.989848 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:38:55.993691 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:38:56.001767 dracut-cmdline[222]: dracut-dracut-053
May 8 00:38:56.007214 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90
May 8 00:38:56.007806 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:38:56.023294 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:38:56.055981 systemd-resolved[247]: Positive Trust Anchors:
May 8 00:38:56.055998 systemd-resolved[247]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:38:56.056037 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:38:56.059339 systemd-resolved[247]: Defaulting to hostname 'linux'.
May 8 00:38:56.060844 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:38:56.068449 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:38:56.100089 kernel: SCSI subsystem initialized
May 8 00:38:56.111079 kernel: Loading iSCSI transport class v2.0-870.
May 8 00:38:56.126089 kernel: iscsi: registered transport (tcp)
May 8 00:38:56.154117 kernel: iscsi: registered transport (qla4xxx)
May 8 00:38:56.154309 kernel: QLogic iSCSI HBA Driver
May 8 00:38:56.213647 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 8 00:38:56.223369 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 8 00:38:56.251084 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 00:38:56.251154 kernel: device-mapper: uevent: version 1.0.3
May 8 00:38:56.253031 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 8 00:38:56.299077 kernel: raid6: avx2x4 gen() 27811 MB/s
May 8 00:38:56.316066 kernel: raid6: avx2x2 gen() 25898 MB/s
May 8 00:38:56.333362 kernel: raid6: avx2x1 gen() 22141 MB/s
May 8 00:38:56.333384 kernel: raid6: using algorithm avx2x4 gen() 27811 MB/s
May 8 00:38:56.351265 kernel: raid6: .... xor() 6691 MB/s, rmw enabled
May 8 00:38:56.351295 kernel: raid6: using avx2x2 recovery algorithm
May 8 00:38:56.374085 kernel: xor: automatically using best checksumming function avx
May 8 00:38:56.534115 kernel: Btrfs loaded, zoned=no, fsverity=no
May 8 00:38:56.550563 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:38:56.562234 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:38:56.578819 systemd-udevd[413]: Using default interface naming scheme 'v255'.
May 8 00:38:56.583839 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:38:56.596294 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 8 00:38:56.612709 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
May 8 00:38:56.653112 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:38:56.671256 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:38:56.740589 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:38:56.759187 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 8 00:38:56.775709 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 8 00:38:56.778997 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:38:56.785200 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:38:56.786775 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:38:56.799256 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 8 00:38:56.805073 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 8 00:38:56.818548 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 8 00:38:56.818794 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 8 00:38:56.818807 kernel: GPT:9289727 != 19775487
May 8 00:38:56.818818 kernel: GPT:Alternate GPT header not at the end of the disk.
May 8 00:38:56.818829 kernel: cryptd: max_cpu_qlen set to 1000
May 8 00:38:56.818840 kernel: GPT:9289727 != 19775487
May 8 00:38:56.818850 kernel: GPT: Use GNU Parted to correct GPT errors.
May 8 00:38:56.818860 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:38:56.811882 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:38:56.837411 kernel: AVX2 version of gcm_enc/dec engaged.
May 8 00:38:56.837521 kernel: AES CTR mode by8 optimization enabled
May 8 00:38:56.840665 kernel: libata version 3.00 loaded.
May 8 00:38:56.847088 kernel: ahci 0000:00:1f.2: version 3.0
May 8 00:38:56.869545 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 8 00:38:56.869569 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 8 00:38:56.869730 kernel: BTRFS: device fsid 28014d97-e6d7-4db4-b1d9-76a980e09972 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (465)
May 8 00:38:56.869742 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 8 00:38:56.869927 kernel: scsi host0: ahci
May 8 00:38:56.870161 kernel: scsi host1: ahci
May 8 00:38:56.870319 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (473)
May 8 00:38:56.870331 kernel: scsi host2: ahci
May 8 00:38:56.870480 kernel: scsi host3: ahci
May 8 00:38:56.870637 kernel: scsi host4: ahci
May 8 00:38:56.870796 kernel: scsi host5: ahci
May 8 00:38:56.872124 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
May 8 00:38:56.872139 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
May 8 00:38:56.872150 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
May 8 00:38:56.872170 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
May 8 00:38:56.872182 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
May 8 00:38:56.872193 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
May 8 00:38:56.856682 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:38:56.856868 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:38:56.860384 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:38:56.861595 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:38:56.861843 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:38:56.867648 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:38:56.882374 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:38:56.898139 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 8 00:38:56.904616 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 8 00:38:56.904699 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 8 00:38:56.916451 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 8 00:38:56.921113 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 8 00:38:56.930198 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 8 00:38:56.930287 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:38:56.930341 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:38:56.933086 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:38:56.936238 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:38:56.949174 disk-uuid[557]: Primary Header is updated.
May 8 00:38:56.949174 disk-uuid[557]: Secondary Entries is updated.
May 8 00:38:56.949174 disk-uuid[557]: Secondary Header is updated.
May 8 00:38:56.952079 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:38:56.961260 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:38:56.973374 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:38:56.992960 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:38:57.183563 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 8 00:38:57.183660 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 8 00:38:57.183676 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 8 00:38:57.183710 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 8 00:38:57.185090 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 8 00:38:57.185194 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 8 00:38:57.186089 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 8 00:38:57.187392 kernel: ata3.00: applying bridge limits
May 8 00:38:57.188132 kernel: ata3.00: configured for UDMA/100
May 8 00:38:57.189086 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 8 00:38:57.237200 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 8 00:38:57.250993 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 8 00:38:57.251016 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 8 00:38:57.962080 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:38:57.962258 disk-uuid[560]: The operation has completed successfully.
May 8 00:38:57.993877 systemd[1]: disk-uuid.service: Deactivated successfully.
May 8 00:38:57.994059 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 8 00:38:58.016239 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 8 00:38:58.021035 sh[597]: Success
May 8 00:38:58.035078 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 8 00:38:58.074520 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 8 00:38:58.090487 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 8 00:38:58.092874 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 8 00:38:58.106892 kernel: BTRFS info (device dm-0): first mount of filesystem 28014d97-e6d7-4db4-b1d9-76a980e09972
May 8 00:38:58.106938 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 8 00:38:58.106953 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 8 00:38:58.108108 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 8 00:38:58.109745 kernel: BTRFS info (device dm-0): using free space tree
May 8 00:38:58.114590 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 8 00:38:58.117748 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 8 00:38:58.127353 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 8 00:38:58.131173 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 8 00:38:58.140590 kernel: BTRFS info (device vda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 00:38:58.140621 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:38:58.140633 kernel: BTRFS info (device vda6): using free space tree
May 8 00:38:58.144077 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:38:58.156555 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 8 00:38:58.159284 kernel: BTRFS info (device vda6): last unmount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 00:38:58.172610 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 8 00:38:58.180237 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 8 00:38:58.249548 ignition[691]: Ignition 2.19.0
May 8 00:38:58.249574 ignition[691]: Stage: fetch-offline
May 8 00:38:58.249679 ignition[691]: no configs at "/usr/lib/ignition/base.d"
May 8 00:38:58.249707 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:38:58.249894 ignition[691]: parsed url from cmdline: ""
May 8 00:38:58.249898 ignition[691]: no config URL provided
May 8 00:38:58.249905 ignition[691]: reading system config file "/usr/lib/ignition/user.ign"
May 8 00:38:58.249915 ignition[691]: no config at "/usr/lib/ignition/user.ign"
May 8 00:38:58.250150 ignition[691]: op(1): [started] loading QEMU firmware config module
May 8 00:38:58.250156 ignition[691]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 8 00:38:58.259073 ignition[691]: op(1): [finished] loading QEMU firmware config module
May 8 00:38:58.274963 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:38:58.282202 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:38:58.305182 ignition[691]: parsing config with SHA512: 0568dae54c773e37c351f3a6a1925b17dc9b19bdeee498acf16a81347ae061d7ff97f6535f2a3b4243910f55aea14dad885732303136bfa890c575c62983aa84
May 8 00:38:58.308810 systemd-networkd[785]: lo: Link UP
May 8 00:38:58.308819 systemd-networkd[785]: lo: Gained carrier
May 8 00:38:58.309404 ignition[691]: fetch-offline: fetch-offline passed
May 8 00:38:58.309040 unknown[691]: fetched base config from "system"
May 8 00:38:58.309651 ignition[691]: Ignition finished successfully
May 8 00:38:58.309064 unknown[691]: fetched user config from "qemu"
May 8 00:38:58.310546 systemd-networkd[785]: Enumeration completed
May 8 00:38:58.310731 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 00:38:58.311092 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:38:58.311096 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:38:58.311975 systemd-networkd[785]: eth0: Link UP
May 8 00:38:58.311979 systemd-networkd[785]: eth0: Gained carrier
May 8 00:38:58.311985 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:38:58.314372 systemd[1]: Reached target network.target - Network.
May 8 00:38:58.330601 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:38:58.332101 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 8 00:38:58.332276 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 8 00:38:58.348256 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 8 00:38:58.367490 ignition[789]: Ignition 2.19.0
May 8 00:38:58.367508 ignition[789]: Stage: kargs
May 8 00:38:58.367722 ignition[789]: no configs at "/usr/lib/ignition/base.d"
May 8 00:38:58.367739 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:38:58.372518 ignition[789]: kargs: kargs passed
May 8 00:38:58.373347 ignition[789]: Ignition finished successfully
May 8 00:38:58.377802 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 8 00:38:58.394227 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 8 00:38:58.407696 ignition[798]: Ignition 2.19.0
May 8 00:38:58.407707 ignition[798]: Stage: disks
May 8 00:38:58.407869 ignition[798]: no configs at "/usr/lib/ignition/base.d"
May 8 00:38:58.407881 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:38:58.408774 ignition[798]: disks: disks passed
May 8 00:38:58.408819 ignition[798]: Ignition finished successfully
May 8 00:38:58.414502 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 8 00:38:58.414788 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 8 00:38:58.417611 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 8 00:38:58.419862 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 00:38:58.421976 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 00:38:58.422378 systemd[1]: Reached target basic.target - Basic System.
May 8 00:38:58.436197 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 8 00:38:58.449970 systemd-resolved[247]: Detected conflict on linux IN A 10.0.0.74
May 8 00:38:58.449991 systemd-resolved[247]: Hostname conflict, changing published hostname from 'linux' to 'linux4'.
May 8 00:38:58.453668 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 8 00:38:58.471207 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 8 00:38:58.482167 systemd[1]: Mounting sysroot.mount - /sysroot...
May 8 00:38:58.569090 kernel: EXT4-fs (vda9): mounted filesystem 36960c89-ba45-4808-a41c-bf61ce9470a3 r/w with ordered data mode. Quota mode: none.
May 8 00:38:58.570164 systemd[1]: Mounted sysroot.mount - /sysroot.
May 8 00:38:58.571026 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 8 00:38:58.581225 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:38:58.583453 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 8 00:38:58.583885 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 8 00:38:58.583939 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 8 00:38:58.595852 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (816)
May 8 00:38:58.595874 kernel: BTRFS info (device vda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 00:38:58.595892 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:38:58.595903 kernel: BTRFS info (device vda6): using free space tree
May 8 00:38:58.583970 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:38:58.599167 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:38:58.592002 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 8 00:38:58.596747 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 8 00:38:58.600934 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:38:58.639350 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
May 8 00:38:58.644281 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
May 8 00:38:58.649006 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
May 8 00:38:58.653273 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
May 8 00:38:58.754248 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 8 00:38:58.769149 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 8 00:38:58.769861 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 8 00:38:58.781075 kernel: BTRFS info (device vda6): last unmount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 00:38:58.798574 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 8 00:38:58.819981 ignition[931]: INFO : Ignition 2.19.0
May 8 00:38:58.819981 ignition[931]: INFO : Stage: mount
May 8 00:38:58.823232 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:38:58.823232 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:38:58.823232 ignition[931]: INFO : mount: mount passed
May 8 00:38:58.823232 ignition[931]: INFO : Ignition finished successfully
May 8 00:38:58.823210 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 8 00:38:58.836207 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 8 00:38:59.105756 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 8 00:38:59.122262 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:38:59.129887 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (943)
May 8 00:38:59.129915 kernel: BTRFS info (device vda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 00:38:59.129936 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:38:59.130775 kernel: BTRFS info (device vda6): using free space tree
May 8 00:38:59.134069 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:38:59.135778 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:38:59.172033 ignition[960]: INFO : Ignition 2.19.0
May 8 00:38:59.172033 ignition[960]: INFO : Stage: files
May 8 00:38:59.173880 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:38:59.173880 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:38:59.176561 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
May 8 00:38:59.177912 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 8 00:38:59.177912 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 8 00:38:59.181513 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 8 00:38:59.183076 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 8 00:38:59.184883 unknown[960]: wrote ssh authorized keys file for user: core
May 8 00:38:59.186068 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 8 00:38:59.188368 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 8 00:38:59.190435 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 8 00:38:59.234860 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 8 00:38:59.415723 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 8 00:38:59.415723 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 8 00:38:59.420492 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 8 00:38:59.420492 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:38:59.420492 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:38:59.420492 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:38:59.420492 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:38:59.420492 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:38:59.420492 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:38:59.420492 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:38:59.420492 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:38:59.420492 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:38:59.420492 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:38:59.420492 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:38:59.420492 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 8 00:38:59.940010 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 8 00:39:00.023230 systemd-networkd[785]: eth0: Gained IPv6LL
May 8 00:39:00.479745 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:39:00.482508 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 8 00:39:00.483834 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:39:00.486133 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:39:00.486133 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 8 00:39:00.486133 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 8 00:39:00.490488 ignition[960]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:39:00.492725 ignition[960]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:39:00.492725 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 8 00:39:00.495926 ignition[960]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 8 00:39:00.525376 ignition[960]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:39:00.533846 ignition[960]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:39:00.535620 ignition[960]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 8 00:39:00.535620 ignition[960]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 8 00:39:00.538601 ignition[960]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 8 00:39:00.540141 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:39:00.541969 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:39:00.543664 ignition[960]: INFO : files: files passed
May 8 00:39:00.544434 ignition[960]: INFO : Ignition finished successfully
May 8 00:39:00.547237 systemd[1]: Finished ignition-files.service - Ignition (files).
May 8 00:39:00.556176 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 8 00:39:00.558599 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 8 00:39:00.560757 systemd[1]: ignition-quench.service: Deactivated successfully.
May 8 00:39:00.560883 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 8 00:39:00.575530 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory
May 8 00:39:00.579083 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:39:00.579083 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:39:00.583679 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:39:00.581315 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 00:39:00.583874 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 8 00:39:00.602181 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 8 00:39:00.629262 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 8 00:39:00.629410 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 8 00:39:00.631926 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 8 00:39:00.634351 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 8 00:39:00.635600 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 8 00:39:00.650182 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 8 00:39:00.664231 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 00:39:00.673211 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 8 00:39:00.682233 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 8 00:39:00.683527 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:39:00.685796 systemd[1]: Stopped target timers.target - Timer Units.
May 8 00:39:00.687883 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 8 00:39:00.687997 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 00:39:00.690412 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 8 00:39:00.691980 systemd[1]: Stopped target basic.target - Basic System.
May 8 00:39:00.694042 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 8 00:39:00.696134 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:39:00.698173 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 8 00:39:00.700350 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 8 00:39:00.702525 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:39:00.704846 systemd[1]: Stopped target sysinit.target - System Initialization.
May 8 00:39:00.706875 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 8 00:39:00.709116 systemd[1]: Stopped target swap.target - Swaps.
May 8 00:39:00.710923 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 8 00:39:00.711040 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:39:00.713404 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 8 00:39:00.714862 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:39:00.717025 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 8 00:39:00.717147 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:39:00.719365 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 8 00:39:00.719479 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 8 00:39:00.722103 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 8 00:39:00.722216 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:39:00.724117 systemd[1]: Stopped target paths.target - Path Units.
May 8 00:39:00.725897 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 8 00:39:00.731114 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:39:00.732866 systemd[1]: Stopped target slices.target - Slice Units.
May 8 00:39:00.734627 systemd[1]: Stopped target sockets.target - Socket Units.
May 8 00:39:00.736683 systemd[1]: iscsid.socket: Deactivated successfully.
May 8 00:39:00.736786 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:39:00.739266 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 8 00:39:00.739377 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:39:00.741207 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 8 00:39:00.741323 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 00:39:00.743312 systemd[1]: ignition-files.service: Deactivated successfully.
May 8 00:39:00.743419 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 8 00:39:00.751194 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 8 00:39:00.753033 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 8 00:39:00.754158 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 8 00:39:00.754283 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:39:00.756515 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 8 00:39:00.756667 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:39:00.762580 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 8 00:39:00.762701 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 8 00:39:00.767411 ignition[1015]: INFO : Ignition 2.19.0
May 8 00:39:00.767411 ignition[1015]: INFO : Stage: umount
May 8 00:39:00.767411 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:39:00.767411 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:39:00.772208 ignition[1015]: INFO : umount: umount passed
May 8 00:39:00.772208 ignition[1015]: INFO : Ignition finished successfully
May 8 00:39:00.774021 systemd[1]: ignition-mount.service: Deactivated successfully.
May 8 00:39:00.774177 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 8 00:39:00.777715 systemd[1]: Stopped target network.target - Network.
May 8 00:39:00.779755 systemd[1]: ignition-disks.service: Deactivated successfully.
May 8 00:39:00.779817 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 8 00:39:00.782014 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 8 00:39:00.782118 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 8 00:39:00.784694 systemd[1]: ignition-setup.service: Deactivated successfully.
May 8 00:39:00.784747 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 8 00:39:00.787150 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 8 00:39:00.787207 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 8 00:39:00.789690 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 8 00:39:00.792201 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 8 00:39:00.796167 systemd-networkd[785]: eth0: DHCPv6 lease lost
May 8 00:39:00.796282 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 8 00:39:00.803893 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 8 00:39:00.804977 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 8 00:39:00.808577 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 8 00:39:00.809728 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 8 00:39:00.813460 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 8 00:39:00.813520 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:39:00.828327 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 8 00:39:00.829413 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 8 00:39:00.830550 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:39:00.832933 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 8 00:39:00.834393 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 8 00:39:00.836448 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 8 00:39:00.836502 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 8 00:39:00.840674 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 8 00:39:00.840739 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:39:00.844423 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:39:00.857771 systemd[1]: network-cleanup.service: Deactivated successfully.
May 8 00:39:00.857972 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 8 00:39:00.859585 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 8 00:39:00.859806 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:39:00.862110 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 8 00:39:00.862223 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 8 00:39:00.864337 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 8 00:39:00.864407 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:39:00.867878 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 8 00:39:00.867972 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:39:00.872145 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 8 00:39:00.872218 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 8 00:39:00.875998 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:39:00.876079 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:39:00.893532 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 8 00:39:00.896431 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 8 00:39:00.897799 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:39:00.900993 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 8 00:39:00.902402 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:39:00.905221 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 8 00:39:00.905288 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:39:00.908672 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:39:00.909716 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:39:00.912491 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 8 00:39:00.913644 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 8 00:39:01.237305 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 8 00:39:01.237460 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 8 00:39:01.239737 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 8 00:39:01.241648 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 8 00:39:01.241723 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 8 00:39:01.257299 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 8 00:39:01.267902 systemd[1]: Switching root.
May 8 00:39:01.307975 systemd-journald[189]: Journal stopped
May 8 00:39:03.089290 systemd-journald[189]: Received SIGTERM from PID 1 (systemd).
May 8 00:39:03.089416 kernel: SELinux: policy capability network_peer_controls=1
May 8 00:39:03.089441 kernel: SELinux: policy capability open_perms=1
May 8 00:39:03.089456 kernel: SELinux: policy capability extended_socket_class=1
May 8 00:39:03.089470 kernel: SELinux: policy capability always_check_network=0
May 8 00:39:03.089484 kernel: SELinux: policy capability cgroup_seclabel=1
May 8 00:39:03.089499 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 8 00:39:03.089522 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 8 00:39:03.089536 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 8 00:39:03.089551 kernel: audit: type=1403 audit(1746664742.190:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 8 00:39:03.089590 systemd[1]: Successfully loaded SELinux policy in 40.328ms.
May 8 00:39:03.089620 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.040ms.
May 8 00:39:03.089638 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 8 00:39:03.089655 systemd[1]: Detected virtualization kvm.
May 8 00:39:03.089671 systemd[1]: Detected architecture x86-64.
May 8 00:39:03.089686 systemd[1]: Detected first boot.
May 8 00:39:03.089702 systemd[1]: Initializing machine ID from VM UUID.
May 8 00:39:03.089717 zram_generator::config[1059]: No configuration found.
May 8 00:39:03.089745 systemd[1]: Populated /etc with preset unit settings.
May 8 00:39:03.089761 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 8 00:39:03.089777 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 8 00:39:03.089794 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 8 00:39:03.089811 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 8 00:39:03.089828 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 8 00:39:03.089845 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 8 00:39:03.089861 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 8 00:39:03.089890 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 8 00:39:03.089907 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 8 00:39:03.089933 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 8 00:39:03.089950 systemd[1]: Created slice user.slice - User and Session Slice.
May 8 00:39:03.089966 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:39:03.089983 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:39:03.090000 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 8 00:39:03.090030 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 8 00:39:03.090062 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 8 00:39:03.090093 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:39:03.090110 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 8 00:39:03.090122 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:39:03.090134 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 8 00:39:03.090147 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 8 00:39:03.090159 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 8 00:39:03.090171 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 8 00:39:03.090183 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:39:03.090205 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:39:03.090221 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:39:03.090234 systemd[1]: Reached target swap.target - Swaps.
May 8 00:39:03.090250 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 8 00:39:03.090281 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 8 00:39:03.090307 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:39:03.090340 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:39:03.090356 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:39:03.090373 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
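The "Populated /etc with preset unit settings" entry above is systemd applying preset files on first boot, the same mechanism the Ignition run recorded earlier ("setting preset to enabled for prepare-helm.service"). Preset files are plain lists of enable/disable directives; the path and file name below are illustrative, not confirmed by this log:

    # /etc/systemd/system-preset/20-ignition.preset (illustrative location)
    enable prepare-helm.service
    disable coreos-metadata.service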
May 8 00:39:03.090410 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 8 00:39:03.090427 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 8 00:39:03.090443 systemd[1]: Mounting media.mount - External Media Directory...
May 8 00:39:03.090459 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:39:03.090476 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 8 00:39:03.090493 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 8 00:39:03.090509 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 8 00:39:03.090524 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 8 00:39:03.090540 systemd[1]: Reached target machines.target - Containers.
May 8 00:39:03.090573 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 8 00:39:03.090590 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:39:03.090607 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:39:03.090622 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 8 00:39:03.090639 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:39:03.090655 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 00:39:03.090671 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:39:03.090686 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 8 00:39:03.090710 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:39:03.090733 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 8 00:39:03.090748 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 8 00:39:03.090763 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 8 00:39:03.090778 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 8 00:39:03.090793 systemd[1]: Stopped systemd-fsck-usr.service.
May 8 00:39:03.090807 kernel: fuse: init (API version 7.39)
May 8 00:39:03.090822 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:39:03.090837 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:39:03.090865 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 8 00:39:03.090881 kernel: loop: module loaded
May 8 00:39:03.090896 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 8 00:39:03.090911 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:39:03.090926 systemd[1]: verity-setup.service: Deactivated successfully.
May 8 00:39:03.090941 systemd[1]: Stopped verity-setup.service.
May 8 00:39:03.090956 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:39:03.090997 systemd-journald[1129]: Collecting audit messages is disabled.
May 8 00:39:03.091078 kernel: ACPI: bus type drm_connector registered
May 8 00:39:03.091111 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 8 00:39:03.091132 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 8 00:39:03.091148 systemd-journald[1129]: Journal started
May 8 00:39:03.091189 systemd-journald[1129]: Runtime Journal (/run/log/journal/4b5eaa4bda3a48fda08f49deba45bde4) is 6.0M, max 48.3M, 42.2M free.
May 8 00:39:02.777289 systemd[1]: Queued start job for default target multi-user.target.
May 8 00:39:02.796564 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 8 00:39:02.797103 systemd[1]: systemd-journald.service: Deactivated successfully.
May 8 00:39:03.095089 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:39:03.096891 systemd[1]: Mounted media.mount - External Media Directory.
May 8 00:39:03.098294 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 8 00:39:03.099814 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 8 00:39:03.101436 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 8 00:39:03.103020 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 8 00:39:03.104771 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:39:03.106708 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 8 00:39:03.106923 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 8 00:39:03.108697 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:39:03.108916 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:39:03.110653 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 00:39:03.110873 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 00:39:03.112524 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:39:03.112704 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:39:03.114515 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 8 00:39:03.114695 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 8 00:39:03.116387 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:39:03.116568 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:39:03.118387 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:39:03.120458 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 8 00:39:03.277765 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 8 00:39:03.293714 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 8 00:39:03.302136 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 8 00:39:03.304653 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 8 00:39:03.305939 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 8 00:39:03.305969 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 00:39:03.308491 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 8 00:39:03.311505 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 8 00:39:03.313946 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 8 00:39:03.315259 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:39:03.323168 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 8 00:39:03.325870 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 8 00:39:03.327204 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:39:03.332240 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 8 00:39:03.333647 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 00:39:03.338182 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:39:03.342970 systemd-journald[1129]: Time spent on flushing to /var/log/journal/4b5eaa4bda3a48fda08f49deba45bde4 is 80.437ms for 995 entries.
May 8 00:39:03.342970 systemd-journald[1129]: System Journal (/var/log/journal/4b5eaa4bda3a48fda08f49deba45bde4) is 8.0M, max 195.6M, 187.6M free.
May 8 00:39:03.455197 systemd-journald[1129]: Received client request to flush runtime journal.
May 8 00:39:03.455241 kernel: loop0: detected capacity change from 0 to 140768
May 8 00:39:03.343625 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 8 00:39:03.357784 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 00:39:03.361026 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 8 00:39:03.363611 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 8 00:39:03.365472 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 8 00:39:03.368162 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 8 00:39:03.376296 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 8 00:39:03.404753 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 8 00:39:03.406364 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:39:03.429387 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 8 00:39:03.435532 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:39:03.456767 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 8 00:39:03.470289 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 8 00:39:03.472775 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
May 8 00:39:03.472797 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
May 8 00:39:03.480392 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:39:03.485074 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 8 00:39:03.489283 systemd[1]: Starting systemd-sysusers.service - Create System Users...
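The journal handoff above ("Received client request to flush runtime journal", then systemd-journal-flush.service finishing) migrates the volatile journal from /run/log/journal to the persistent /var/log/journal. A short sketch of the equivalent manual request, using standard journalctl flags:

    # Ask journald to move the runtime journal to persistent storage,
    # then check how much disk the archived journals occupy.
    journalctl --flush
    journalctl --disk-usage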
May 8 00:39:03.493946 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 8 00:39:03.494674 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 8 00:39:03.534535 kernel: loop1: detected capacity change from 0 to 142488
May 8 00:39:03.557983 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 8 00:39:03.647102 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:39:03.652082 kernel: loop2: detected capacity change from 0 to 210664
May 8 00:39:03.682454 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
May 8 00:39:03.682488 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
May 8 00:39:03.716358 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:39:03.729090 kernel: loop3: detected capacity change from 0 to 140768
May 8 00:39:03.750105 kernel: loop4: detected capacity change from 0 to 142488
May 8 00:39:03.761074 kernel: loop5: detected capacity change from 0 to 210664
May 8 00:39:03.765243 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 8 00:39:03.766912 (sd-merge)[1200]: Merged extensions into '/usr'.
May 8 00:39:03.770761 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)...
May 8 00:39:03.770778 systemd[1]: Reloading...
May 8 00:39:03.884077 zram_generator::config[1227]: No configuration found.
May 8 00:39:04.013768 ldconfig[1168]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 8 00:39:04.055609 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:39:04.106013 systemd[1]: Reloading finished in 334 ms.
May 8 00:39:04.146959 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 8 00:39:04.149244 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 8 00:39:04.163269 systemd[1]: Starting ensure-sysext.service...
May 8 00:39:04.165871 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:39:04.173363 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)...
May 8 00:39:04.173383 systemd[1]: Reloading...
May 8 00:39:04.204932 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 8 00:39:04.205414 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 8 00:39:04.206500 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 8 00:39:04.206805 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
May 8 00:39:04.206894 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
May 8 00:39:04.213478 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
May 8 00:39:04.213495 systemd-tmpfiles[1264]: Skipping /boot
May 8 00:39:04.354098 zram_generator::config[1323]: No configuration found.
May 8 00:39:04.363686 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
May 8 00:39:04.363711 systemd-tmpfiles[1264]: Skipping /boot
May 8 00:39:04.436161 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:39:04.489573 systemd[1]: Reloading finished in 315 ms.
May 8 00:39:04.522904 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:39:04.538393 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 8 00:39:04.541935 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 8 00:39:04.544943 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 8 00:39:04.550741 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:39:04.556410 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 8 00:39:04.560850 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 8 00:39:04.566237 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:39:04.566518 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:39:04.569273 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:39:04.572671 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:39:04.579140 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:39:04.580297 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:39:04.582334 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:39:04.587286 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 8 00:39:04.588727 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:39:04.590695 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 8 00:39:04.595011 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:39:04.595243 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:39:04.597003 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:39:04.597261 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:39:04.599074 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:39:04.599265 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:39:04.601244 augenrules[1355]: No rules
May 8 00:39:04.603604 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 8 00:39:04.611557 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:39:04.611843 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:39:04.621610 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:39:04.622090 systemd-udevd[1351]: Using default interface naming scheme 'v255'.
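The (sd-merge) lines in the preceding block show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extension images onto /usr, which is why docker.socket and containerd units exist on an otherwise read-only /usr. A hedged sketch of inspecting and re-applying such extensions at runtime; the symlink shown is the one Ignition wrote earlier, and the commands are generic systemd-sysext usage rather than anything this boot executed:

    # Extension images are picked up from /etc/extensions, /run/extensions,
    # and /var/lib/extensions.
    ls -l /etc/extensions/
    # kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw

    systemd-sysext list      # show detected extension images
    systemd-sysext merge     # overlay them onto /usr and /opt
    systemd-sysext refresh   # unmerge + merge again after an image changes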
May 8 00:39:04.626629 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:39:04.632178 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:39:04.633590 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:39:04.636539 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 8 00:39:04.639131 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:39:04.640601 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 8 00:39:04.642581 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 8 00:39:04.648156 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:39:04.648396 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:39:04.650598 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:39:04.650848 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:39:04.652832 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:39:04.653167 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:39:04.655126 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:39:04.661135 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 8 00:39:04.663872 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 8 00:39:04.684671 systemd[1]: Finished ensure-sysext.service.
May 8 00:39:04.687884 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:39:04.688716 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:39:04.697439 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:39:04.704244 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 00:39:04.705063 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1393)
May 8 00:39:04.707627 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:39:04.718399 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:39:04.721354 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:39:04.724232 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:39:04.729859 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 8 00:39:04.734207 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 8 00:39:04.734243 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:39:04.734894 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:39:04.735115 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:39:04.738431 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:39:04.738621 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:39:04.740447 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:39:04.740648 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:39:04.751171 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 8 00:39:04.755024 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:39:04.755107 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 00:39:04.883991 systemd-resolved[1338]: Positive Trust Anchors:
May 8 00:39:04.884020 systemd-resolved[1338]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:39:04.884066 systemd-resolved[1338]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:39:04.891325 systemd-resolved[1338]: Defaulting to hostname 'linux'.
May 8 00:39:04.897086 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:39:04.898634 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 00:39:04.898896 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 00:39:04.901916 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:39:04.940082 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 8 00:39:04.944359 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 8 00:39:04.948077 kernel: ACPI: button: Power Button [PWRF]
May 8 00:39:04.957238 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 8 00:39:04.968377 systemd-networkd[1405]: lo: Link UP
May 8 00:39:04.968388 systemd-networkd[1405]: lo: Gained carrier
May 8 00:39:04.970773 systemd-networkd[1405]: Enumeration completed
May 8 00:39:04.970887 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 00:39:04.972243 systemd[1]: Reached target network.target - Network.
May 8 00:39:04.974357 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:39:04.974368 systemd-networkd[1405]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:39:04.975619 systemd-networkd[1405]: eth0: Link UP
May 8 00:39:04.975630 systemd-networkd[1405]: eth0: Gained carrier
May 8 00:39:04.975642 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:39:04.978248 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
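Both in the initrd and here on the real root, eth0 matches the catch-all /usr/lib/systemd/network/zz-default.network, hence the "potentially unpredictable interface name" warning. A hedged approximation of what such a lowest-priority DHCP unit contains; these are standard systemd.network keys, but the exact Flatcar file may differ:

    # /usr/lib/systemd/network/zz-default.network (approximate contents)
    [Match]
    Name=*

    [Network]
    DHCP=yes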
May 8 00:39:04.981584 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 8 00:39:04.984493 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 8 00:39:04.984798 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 8 00:39:04.989728 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 8 00:39:04.985937 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 8 00:39:04.987627 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 8 00:39:04.992395 systemd[1]: Reached target time-set.target - System Time Set.
May 8 00:39:04.996149 systemd-networkd[1405]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 8 00:39:04.997221 systemd-timesyncd[1406]: Network configuration changed, trying to establish connection.
May 8 00:39:05.743762 systemd-resolved[1338]: Clock change detected. Flushing caches.
May 8 00:39:05.743928 systemd-timesyncd[1406]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 8 00:39:05.744019 systemd-timesyncd[1406]: Initial clock synchronization to Thu 2025-05-08 00:39:05.743582 UTC.
May 8 00:39:05.756683 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 8 00:39:05.761946 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:39:05.766678 kernel: mousedev: PS/2 mouse device common for all mice
May 8 00:39:05.777837 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:39:05.778987 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:39:05.872983 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:39:05.921915 kernel: kvm_amd: TSC scaling supported
May 8 00:39:05.921990 kernel: kvm_amd: Nested Virtualization enabled
May 8 00:39:05.922005 kernel: kvm_amd: Nested Paging enabled
May 8 00:39:05.923201 kernel: kvm_amd: LBR virtualization supported
May 8 00:39:05.923226 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 8 00:39:05.924782 kernel: kvm_amd: Virtual GIF supported
May 8 00:39:05.942691 kernel: EDAC MC: Ver: 3.0.0
May 8 00:39:05.955747 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:39:05.996972 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 8 00:39:06.006999 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 8 00:39:06.017731 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 00:39:06.049079 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 8 00:39:06.050840 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:39:06.052183 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 00:39:06.053527 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 8 00:39:06.055125 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 8 00:39:06.056754 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 8 00:39:06.058062 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 8 00:39:06.059490 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 8 00:39:06.060902 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 8 00:39:06.060928 systemd[1]: Reached target paths.target - Path Units.
May 8 00:39:06.061952 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:39:06.063947 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 8 00:39:06.066937 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 8 00:39:06.073155 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 8 00:39:06.075717 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 8 00:39:06.077302 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 8 00:39:06.078466 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:39:06.079437 systemd[1]: Reached target basic.target - Basic System.
May 8 00:39:06.080399 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 8 00:39:06.080427 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 8 00:39:06.081483 systemd[1]: Starting containerd.service - containerd container runtime...
May 8 00:39:06.083582 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 8 00:39:06.087786 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 8 00:39:06.090823 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 8 00:39:06.092744 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 8 00:39:06.095829 jq[1446]: false
May 8 00:39:06.096031 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 00:39:06.096411 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 8 00:39:06.100212 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 8 00:39:06.103145 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 8 00:39:06.107296 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 8 00:39:06.116085 systemd[1]: Starting systemd-logind.service - User Login Management...
May 8 00:39:06.116844 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 8 00:39:06.117434 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 8 00:39:06.120850 systemd[1]: Starting update-engine.service - Update Engine...
May 8 00:39:06.123812 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 8 00:39:06.127637 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 8 00:39:06.128027 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 8 00:39:06.129031 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 8 00:39:06.129734 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 8 00:39:06.130408 dbus-daemon[1445]: [system] SELinux support is enabled
May 8 00:39:06.131917 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 8 00:39:06.139698 jq[1456]: true May 8 00:39:06.137271 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 8 00:39:06.146310 extend-filesystems[1447]: Found loop3 May 8 00:39:06.146310 extend-filesystems[1447]: Found loop4 May 8 00:39:06.146310 extend-filesystems[1447]: Found loop5 May 8 00:39:06.146310 extend-filesystems[1447]: Found sr0 May 8 00:39:06.146310 extend-filesystems[1447]: Found vda May 8 00:39:06.146310 extend-filesystems[1447]: Found vda1 May 8 00:39:06.146310 extend-filesystems[1447]: Found vda2 May 8 00:39:06.146310 extend-filesystems[1447]: Found vda3 May 8 00:39:06.146310 extend-filesystems[1447]: Found usr May 8 00:39:06.146310 extend-filesystems[1447]: Found vda4 May 8 00:39:06.146310 extend-filesystems[1447]: Found vda6 May 8 00:39:06.146310 extend-filesystems[1447]: Found vda7 May 8 00:39:06.146310 extend-filesystems[1447]: Found vda9 May 8 00:39:06.146310 extend-filesystems[1447]: Checking size of /dev/vda9 May 8 00:39:06.208183 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1381) May 8 00:39:06.208229 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 8 00:39:06.208508 extend-filesystems[1447]: Resized partition /dev/vda9 May 8 00:39:06.148985 systemd[1]: motdgen.service: Deactivated successfully. May 8 00:39:06.210528 tar[1459]: linux-amd64/helm May 8 00:39:06.215484 update_engine[1455]: I20250508 00:39:06.170629 1455 main.cc:92] Flatcar Update Engine starting May 8 00:39:06.215484 update_engine[1455]: I20250508 00:39:06.180898 1455 update_check_scheduler.cc:74] Next update check in 9m58s May 8 00:39:06.215852 extend-filesystems[1482]: resize2fs 1.47.1 (20-May-2024) May 8 00:39:06.149247 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 8 00:39:06.217090 jq[1463]: true May 8 00:39:06.166544 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 00:39:06.166585 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 8 00:39:06.169336 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 00:39:06.169354 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 8 00:39:06.176368 (ntainerd)[1464]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 8 00:39:06.178729 systemd-logind[1453]: Watching system buttons on /dev/input/event1 (Power Button) May 8 00:39:06.178750 systemd-logind[1453]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 8 00:39:06.179289 systemd[1]: Started update-engine.service - Update Engine. May 8 00:39:06.181287 systemd-logind[1453]: New seat seat0. May 8 00:39:06.196597 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 8 00:39:06.197860 systemd[1]: Started systemd-logind.service - User Login Management. 
May 8 00:39:06.319148 locksmithd[1491]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 8 00:39:06.447947 sshd_keygen[1476]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 8 00:39:06.478685 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 8 00:39:06.480388 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 8 00:39:06.500160 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 8 00:39:06.507909 systemd[1]: issuegen.service: Deactivated successfully.
May 8 00:39:06.508206 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 8 00:39:06.512951 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 8 00:39:06.533829 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 8 00:39:06.551256 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 8 00:39:06.554074 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 8 00:39:06.555388 systemd[1]: Reached target getty.target - Login Prompts.
May 8 00:39:06.700145 extend-filesystems[1482]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 8 00:39:06.700145 extend-filesystems[1482]: old_desc_blocks = 1, new_desc_blocks = 1
May 8 00:39:06.700145 extend-filesystems[1482]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 8 00:39:06.706996 extend-filesystems[1447]: Resized filesystem in /dev/vda9
May 8 00:39:06.700957 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 8 00:39:06.701215 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 8 00:39:06.738086 bash[1497]: Updated "/home/core/.ssh/authorized_keys"
May 8 00:39:06.740392 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 8 00:39:06.742992 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 8 00:39:06.812594 containerd[1464]: time="2025-05-08T00:39:06.812449519Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
May 8 00:39:06.837686 containerd[1464]: time="2025-05-08T00:39:06.837615543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 8 00:39:06.839912 containerd[1464]: time="2025-05-08T00:39:06.839869521Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 8 00:39:06.839912 containerd[1464]: time="2025-05-08T00:39:06.839899598Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 8 00:39:06.839912 containerd[1464]: time="2025-05-08T00:39:06.839914987Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 8 00:39:06.840187 containerd[1464]: time="2025-05-08T00:39:06.840161399Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 8 00:39:06.840296 containerd[1464]: time="2025-05-08T00:39:06.840184362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 8 00:39:06.840296 containerd[1464]: time="2025-05-08T00:39:06.840263110Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:39:06.840296 containerd[1464]: time="2025-05-08T00:39:06.840275583Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 8 00:39:06.840521 containerd[1464]: time="2025-05-08T00:39:06.840495546Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:39:06.840521 containerd[1464]: time="2025-05-08T00:39:06.840515373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 8 00:39:06.840602 containerd[1464]: time="2025-05-08T00:39:06.840529309Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:39:06.840602 containerd[1464]: time="2025-05-08T00:39:06.840539348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 8 00:39:06.840695 containerd[1464]: time="2025-05-08T00:39:06.840676074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 8 00:39:06.840969 containerd[1464]: time="2025-05-08T00:39:06.840941642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 8 00:39:06.841103 containerd[1464]: time="2025-05-08T00:39:06.841075363Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:39:06.841103 containerd[1464]: time="2025-05-08T00:39:06.841093838Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 8 00:39:06.841243 containerd[1464]: time="2025-05-08T00:39:06.841217921Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 8 00:39:06.841310 containerd[1464]: time="2025-05-08T00:39:06.841288964Z" level=info msg="metadata content store policy set" policy=shared
May 8 00:39:06.848082 containerd[1464]: time="2025-05-08T00:39:06.848045978Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 8 00:39:06.848134 containerd[1464]: time="2025-05-08T00:39:06.848091944Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 8 00:39:06.848134 containerd[1464]: time="2025-05-08T00:39:06.848108806Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 8 00:39:06.848134 containerd[1464]: time="2025-05-08T00:39:06.848124185Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 8 00:39:06.848206 containerd[1464]: time="2025-05-08T00:39:06.848140626Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 8 00:39:06.848310 containerd[1464]: time="2025-05-08T00:39:06.848280889Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 8 00:39:06.848525 containerd[1464]: time="2025-05-08T00:39:06.848497645Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 8 00:39:06.848682 containerd[1464]: time="2025-05-08T00:39:06.848631005Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 8 00:39:06.848682 containerd[1464]: time="2025-05-08T00:39:06.848665851Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 8 00:39:06.848682 containerd[1464]: time="2025-05-08T00:39:06.848679877Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 8 00:39:06.848763 containerd[1464]: time="2025-05-08T00:39:06.848693903Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 8 00:39:06.848763 containerd[1464]: time="2025-05-08T00:39:06.848707709Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 8 00:39:06.848763 containerd[1464]: time="2025-05-08T00:39:06.848720293Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 8 00:39:06.848763 containerd[1464]: time="2025-05-08T00:39:06.848733538Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 8 00:39:06.848763 containerd[1464]: time="2025-05-08T00:39:06.848747273Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 8 00:39:06.848763 containerd[1464]: time="2025-05-08T00:39:06.848760709Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 8 00:39:06.848912 containerd[1464]: time="2025-05-08T00:39:06.848774985Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 8 00:39:06.848912 containerd[1464]: time="2025-05-08T00:39:06.848786948Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 8 00:39:06.848912 containerd[1464]: time="2025-05-08T00:39:06.848839737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 8 00:39:06.848912 containerd[1464]: time="2025-05-08T00:39:06.848854444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 8 00:39:06.848912 containerd[1464]: time="2025-05-08T00:39:06.848866838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 8 00:39:06.848912 containerd[1464]: time="2025-05-08T00:39:06.848881415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 8 00:39:06.848912 containerd[1464]: time="2025-05-08T00:39:06.848894409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 8 00:39:06.848912 containerd[1464]: time="2025-05-08T00:39:06.848909217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 8 00:39:06.849336 containerd[1464]: time="2025-05-08T00:39:06.848922562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 8 00:39:06.849336 containerd[1464]: time="2025-05-08T00:39:06.848936108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 8 00:39:06.849336 containerd[1464]: time="2025-05-08T00:39:06.848949362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 8 00:39:06.849336 containerd[1464]: time="2025-05-08T00:39:06.848975532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 8 00:39:06.849336 containerd[1464]: time="2025-05-08T00:39:06.848987785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 8 00:39:06.849336 containerd[1464]: time="2025-05-08T00:39:06.848999737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 8 00:39:06.849336 containerd[1464]: time="2025-05-08T00:39:06.849013513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 8 00:39:06.849336 containerd[1464]: time="2025-05-08T00:39:06.849028531Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 8 00:39:06.849336 containerd[1464]: time="2025-05-08T00:39:06.849047957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 8 00:39:06.849336 containerd[1464]: time="2025-05-08T00:39:06.849059689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 8 00:39:06.849336 containerd[1464]: time="2025-05-08T00:39:06.849070830Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 8 00:39:06.849336 containerd[1464]: time="2025-05-08T00:39:06.849129009Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 8 00:39:06.849336 containerd[1464]: time="2025-05-08T00:39:06.849148256Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 8 00:39:06.849336 containerd[1464]: time="2025-05-08T00:39:06.849158936Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 8 00:39:06.849710 containerd[1464]: time="2025-05-08T00:39:06.849169736Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 8 00:39:06.849710 containerd[1464]: time="2025-05-08T00:39:06.849178823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 8 00:39:06.849710 containerd[1464]: time="2025-05-08T00:39:06.849190966Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 8 00:39:06.849710 containerd[1464]: time="2025-05-08T00:39:06.849201115Z" level=info msg="NRI interface is disabled by configuration."
May 8 00:39:06.849710 containerd[1464]: time="2025-05-08T00:39:06.849210613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 8 00:39:06.849849 containerd[1464]: time="2025-05-08T00:39:06.849465330Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 8 00:39:06.849849 containerd[1464]: time="2025-05-08T00:39:06.849512649Z" level=info msg="Connect containerd service"
May 8 00:39:06.849849 containerd[1464]: time="2025-05-08T00:39:06.849561300Z" level=info msg="using legacy CRI server"
May 8 00:39:06.849849 containerd[1464]: time="2025-05-08T00:39:06.849568684Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 8 00:39:06.849849 containerd[1464]: time="2025-05-08T00:39:06.849690503Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 8 00:39:06.851223 containerd[1464]: time="2025-05-08T00:39:06.850338739Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 8 00:39:06.851223 containerd[1464]: time="2025-05-08T00:39:06.850495222Z" level=info msg="Start subscribing containerd event"
May 8 00:39:06.851223 containerd[1464]: time="2025-05-08T00:39:06.850736795Z" level=info msg="Start recovering state"
May 8 00:39:06.851223 containerd[1464]: time="2025-05-08T00:39:06.850748908Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 8 00:39:06.851223 containerd[1464]: time="2025-05-08T00:39:06.850803110Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 8 00:39:06.851223 containerd[1464]: time="2025-05-08T00:39:06.850835781Z" level=info msg="Start event monitor"
May 8 00:39:06.851223 containerd[1464]: time="2025-05-08T00:39:06.850865797Z" level=info msg="Start snapshots syncer"
May 8 00:39:06.851223 containerd[1464]: time="2025-05-08T00:39:06.850878892Z" level=info msg="Start cni network conf syncer for default"
May 8 00:39:06.851223 containerd[1464]: time="2025-05-08T00:39:06.850898048Z" level=info msg="Start streaming server"
May 8 00:39:06.851098 systemd[1]: Started containerd.service - containerd container runtime.
May 8 00:39:06.851722 containerd[1464]: time="2025-05-08T00:39:06.851624300Z" level=info msg="containerd successfully booted in 0.042127s"
May 8 00:39:06.911823 systemd-networkd[1405]: eth0: Gained IPv6LL
May 8 00:39:06.915938 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 8 00:39:06.917975 systemd[1]: Reached target network-online.target - Network is Online.
May 8 00:39:06.936935 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 8 00:39:06.939964 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:39:06.942759 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 8 00:39:06.996414 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 8 00:39:06.996798 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 8 00:39:06.998817 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 8 00:39:07.006777 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 8 00:39:07.077573 tar[1459]: linux-amd64/LICENSE
May 8 00:39:07.077695 tar[1459]: linux-amd64/README.md
May 8 00:39:07.092625 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 8 00:39:08.252787 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:39:08.255110 systemd[1]: Reached target multi-user.target - Multi-User System.
May 8 00:39:08.256913 systemd[1]: Startup finished in 1.046s (kernel) + 6.477s (initrd) + 5.361s (userspace) = 12.886s.
May 8 00:39:08.261372 (kubelet)[1558]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:39:08.901009 kubelet[1558]: E0508 00:39:08.900919 1558 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:39:08.905502 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:39:08.905743 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:39:08.906100 systemd[1]: kubelet.service: Consumed 1.584s CPU time.
May 8 00:39:09.497449 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 8 00:39:09.499206 systemd[1]: Started sshd@0-10.0.0.74:22-10.0.0.1:35410.service - OpenSSH per-connection server daemon (10.0.0.1:35410).
May 8 00:39:09.544856 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 35410 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs
May 8 00:39:09.547232 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:39:09.556744 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 8 00:39:09.567914 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 8 00:39:09.570250 systemd-logind[1453]: New session 1 of user core.
May 8 00:39:09.583000 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 8 00:39:09.585997 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 8 00:39:09.595695 (systemd)[1576]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 8 00:39:09.718606 systemd[1576]: Queued start job for default target default.target.
May 8 00:39:09.723310 systemd[1576]: Created slice app.slice - User Application Slice.
May 8 00:39:09.723342 systemd[1576]: Reached target paths.target - Paths.
May 8 00:39:09.723357 systemd[1576]: Reached target timers.target - Timers.
May 8 00:39:09.725155 systemd[1576]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 8 00:39:09.738567 systemd[1576]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 8 00:39:09.738719 systemd[1576]: Reached target sockets.target - Sockets.
May 8 00:39:09.738740 systemd[1576]: Reached target basic.target - Basic System.
May 8 00:39:09.738781 systemd[1576]: Reached target default.target - Main User Target.
May 8 00:39:09.738819 systemd[1576]: Startup finished in 135ms.
May 8 00:39:09.739342 systemd[1]: Started user@500.service - User Manager for UID 500.
May 8 00:39:09.741173 systemd[1]: Started session-1.scope - Session 1 of User core.
May 8 00:39:09.810365 systemd[1]: Started sshd@1-10.0.0.74:22-10.0.0.1:35426.service - OpenSSH per-connection server daemon (10.0.0.1:35426).
May 8 00:39:09.848454 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 35426 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs
May 8 00:39:09.850188 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:39:09.854418 systemd-logind[1453]: New session 2 of user core.
May 8 00:39:09.862784 systemd[1]: Started session-2.scope - Session 2 of User core.
May 8 00:39:09.919380 sshd[1587]: pam_unix(sshd:session): session closed for user core
May 8 00:39:09.931757 systemd[1]: sshd@1-10.0.0.74:22-10.0.0.1:35426.service: Deactivated successfully.
May 8 00:39:09.933682 systemd[1]: session-2.scope: Deactivated successfully.
May 8 00:39:09.935374 systemd-logind[1453]: Session 2 logged out. Waiting for processes to exit.
May 8 00:39:09.936835 systemd[1]: Started sshd@2-10.0.0.74:22-10.0.0.1:35442.service - OpenSSH per-connection server daemon (10.0.0.1:35442).
May 8 00:39:09.937742 systemd-logind[1453]: Removed session 2.
May 8 00:39:09.973074 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 35442 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs
May 8 00:39:09.975133 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:39:09.980208 systemd-logind[1453]: New session 3 of user core.
May 8 00:39:09.993910 systemd[1]: Started session-3.scope - Session 3 of User core.
May 8 00:39:10.049042 sshd[1594]: pam_unix(sshd:session): session closed for user core
May 8 00:39:10.060545 systemd[1]: sshd@2-10.0.0.74:22-10.0.0.1:35442.service: Deactivated successfully.
May 8 00:39:10.062737 systemd[1]: session-3.scope: Deactivated successfully.
May 8 00:39:10.065282 systemd-logind[1453]: Session 3 logged out. Waiting for processes to exit.
May 8 00:39:10.075374 systemd[1]: Started sshd@3-10.0.0.74:22-10.0.0.1:35452.service - OpenSSH per-connection server daemon (10.0.0.1:35452).
May 8 00:39:10.077270 systemd-logind[1453]: Removed session 3.
May 8 00:39:10.108315 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 35452 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs
May 8 00:39:10.111118 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:39:10.117106 systemd-logind[1453]: New session 4 of user core.
May 8 00:39:10.131081 systemd[1]: Started session-4.scope - Session 4 of User core.
May 8 00:39:10.191473 sshd[1601]: pam_unix(sshd:session): session closed for user core
May 8 00:39:10.207169 systemd[1]: sshd@3-10.0.0.74:22-10.0.0.1:35452.service: Deactivated successfully.
May 8 00:39:10.209161 systemd[1]: session-4.scope: Deactivated successfully.
May 8 00:39:10.210899 systemd-logind[1453]: Session 4 logged out. Waiting for processes to exit.
May 8 00:39:10.212094 systemd[1]: Started sshd@4-10.0.0.74:22-10.0.0.1:35458.service - OpenSSH per-connection server daemon (10.0.0.1:35458).
May 8 00:39:10.213003 systemd-logind[1453]: Removed session 4.
May 8 00:39:10.245864 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 35458 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs
May 8 00:39:10.247819 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:39:10.252559 systemd-logind[1453]: New session 5 of user core.
May 8 00:39:10.261915 systemd[1]: Started session-5.scope - Session 5 of User core.
May 8 00:39:10.326376 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 8 00:39:10.326867 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:39:10.347736 sudo[1611]: pam_unix(sudo:session): session closed for user root
May 8 00:39:10.350499 sshd[1608]: pam_unix(sshd:session): session closed for user core
May 8 00:39:10.363609 systemd[1]: sshd@4-10.0.0.74:22-10.0.0.1:35458.service: Deactivated successfully.
May 8 00:39:10.366345 systemd[1]: session-5.scope: Deactivated successfully.
May 8 00:39:10.368768 systemd-logind[1453]: Session 5 logged out. Waiting for processes to exit.
May 8 00:39:10.387299 systemd[1]: Started sshd@5-10.0.0.74:22-10.0.0.1:35472.service - OpenSSH per-connection server daemon (10.0.0.1:35472).
May 8 00:39:10.388989 systemd-logind[1453]: Removed session 5.
May 8 00:39:10.421246 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 35472 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs
May 8 00:39:10.423558 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:39:10.429229 systemd-logind[1453]: New session 6 of user core.
May 8 00:39:10.438960 systemd[1]: Started session-6.scope - Session 6 of User core.
May 8 00:39:10.495452 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 8 00:39:10.495914 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:39:10.500130 sudo[1620]: pam_unix(sudo:session): session closed for user root
May 8 00:39:10.506906 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
May 8 00:39:10.507256 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:39:10.532002 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
May 8 00:39:10.533947 auditctl[1623]: No rules
May 8 00:39:10.535359 systemd[1]: audit-rules.service: Deactivated successfully.
May 8 00:39:10.535673 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
May 8 00:39:10.537691 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 8 00:39:10.582955 augenrules[1641]: No rules
May 8 00:39:10.584896 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 8 00:39:10.586406 sudo[1619]: pam_unix(sudo:session): session closed for user root
May 8 00:39:10.588339 sshd[1616]: pam_unix(sshd:session): session closed for user core
May 8 00:39:10.602042 systemd[1]: sshd@5-10.0.0.74:22-10.0.0.1:35472.service: Deactivated successfully.
May 8 00:39:10.604081 systemd[1]: session-6.scope: Deactivated successfully.
May 8 00:39:10.605835 systemd-logind[1453]: Session 6 logged out. Waiting for processes to exit.
May 8 00:39:10.615102 systemd[1]: Started sshd@6-10.0.0.74:22-10.0.0.1:35484.service - OpenSSH per-connection server daemon (10.0.0.1:35484).
May 8 00:39:10.616357 systemd-logind[1453]: Removed session 6.
May 8 00:39:10.647045 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 35484 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs
May 8 00:39:10.649104 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:39:10.653894 systemd-logind[1453]: New session 7 of user core.
May 8 00:39:10.663912 systemd[1]: Started session-7.scope - Session 7 of User core.
May 8 00:39:10.719674 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 8 00:39:10.720051 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:39:11.198950 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 8 00:39:11.199141 (dockerd)[1669]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 8 00:39:11.987106 dockerd[1669]: time="2025-05-08T00:39:11.986929915Z" level=info msg="Starting up"
May 8 00:39:12.539525 dockerd[1669]: time="2025-05-08T00:39:12.539403207Z" level=info msg="Loading containers: start."
May 8 00:39:12.669690 kernel: Initializing XFRM netlink socket
May 8 00:39:12.752362 systemd-networkd[1405]: docker0: Link UP
May 8 00:39:12.777236 dockerd[1669]: time="2025-05-08T00:39:12.777176452Z" level=info msg="Loading containers: done."
May 8 00:39:12.802174 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3308410352-merged.mount: Deactivated successfully.
May 8 00:39:12.804447 dockerd[1669]: time="2025-05-08T00:39:12.804386300Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 8 00:39:12.804532 dockerd[1669]: time="2025-05-08T00:39:12.804514841Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
May 8 00:39:12.804699 dockerd[1669]: time="2025-05-08T00:39:12.804674030Z" level=info msg="Daemon has completed initialization"
May 8 00:39:12.855031 dockerd[1669]: time="2025-05-08T00:39:12.854913414Z" level=info msg="API listen on /run/docker.sock"
May 8 00:39:12.855197 systemd[1]: Started docker.service - Docker Application Container Engine.
May 8 00:39:14.039566 containerd[1464]: time="2025-05-08T00:39:14.039493248Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 8 00:39:14.763195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1322907477.mount: Deactivated successfully.
May 8 00:39:16.179892 containerd[1464]: time="2025-05-08T00:39:16.179819147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:16.183230 containerd[1464]: time="2025-05-08T00:39:16.183163530Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873"
May 8 00:39:16.186814 containerd[1464]: time="2025-05-08T00:39:16.186748745Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:16.190331 containerd[1464]: time="2025-05-08T00:39:16.190283475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:16.194305 containerd[1464]: time="2025-05-08T00:39:16.194227293Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.154654326s"
May 8 00:39:16.194366 containerd[1464]: time="2025-05-08T00:39:16.194332340Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\""
May 8 00:39:16.224093 containerd[1464]: time="2025-05-08T00:39:16.224043780Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 8 00:39:18.687921 containerd[1464]: time="2025-05-08T00:39:18.687845828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:18.688698 containerd[1464]: time="2025-05-08T00:39:18.688641631Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534"
May 8 00:39:18.690065 containerd[1464]: time="2025-05-08T00:39:18.690018794Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:18.692762 containerd[1464]: time="2025-05-08T00:39:18.692720592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:18.693807 containerd[1464]: time="2025-05-08T00:39:18.693771964Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 2.469678661s"
May 8 00:39:18.693841 containerd[1464]: time="2025-05-08T00:39:18.693807521Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\""
May 8 00:39:18.724930 containerd[1464]: time="2025-05-08T00:39:18.724869424Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 8 00:39:18.961712 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 8 00:39:18.987088 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:39:19.187759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:39:19.192893 (kubelet)[1904]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:39:19.287020 kubelet[1904]: E0508 00:39:19.286867 1904 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:39:19.294872 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:39:19.295164 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:39:20.059728 containerd[1464]: time="2025-05-08T00:39:20.059632708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:20.060580 containerd[1464]: time="2025-05-08T00:39:20.060502670Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682"
May 8 00:39:20.062070 containerd[1464]: time="2025-05-08T00:39:20.062038912Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:20.065086 containerd[1464]: time="2025-05-08T00:39:20.065050020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:20.066021 containerd[1464]: time="2025-05-08T00:39:20.065964325Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.34104663s"
May 8 00:39:20.066021 containerd[1464]: time="2025-05-08T00:39:20.066010501Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\""
May 8 00:39:20.090752 containerd[1464]: time="2025-05-08T00:39:20.090698900Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 8 00:39:21.211479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1076323668.mount: Deactivated successfully.
May 8 00:39:21.827878 containerd[1464]: time="2025-05-08T00:39:21.827796548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:21.829319 containerd[1464]: time="2025-05-08T00:39:21.829280642Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817"
May 8 00:39:21.830988 containerd[1464]: time="2025-05-08T00:39:21.830951506Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:21.833373 containerd[1464]: time="2025-05-08T00:39:21.833343743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:21.834034 containerd[1464]: time="2025-05-08T00:39:21.833997660Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.743266109s"
May 8 00:39:21.834034 containerd[1464]: time="2025-05-08T00:39:21.834027506Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\""
May 8 00:39:21.858363 containerd[1464]: time="2025-05-08T00:39:21.858311456Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 8 00:39:22.434586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3513184558.mount: Deactivated successfully.
May 8 00:39:23.143100 containerd[1464]: time="2025-05-08T00:39:23.143017687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:23.144152 containerd[1464]: time="2025-05-08T00:39:23.144059932Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
May 8 00:39:23.145609 containerd[1464]: time="2025-05-08T00:39:23.145573892Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:23.148546 containerd[1464]: time="2025-05-08T00:39:23.148491675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:23.149548 containerd[1464]: time="2025-05-08T00:39:23.149513241Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.291152984s"
May 8 00:39:23.149591 containerd[1464]: time="2025-05-08T00:39:23.149548918Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
May 8 00:39:23.172976 containerd[1464]: time="2025-05-08T00:39:23.172831800Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 8 00:39:23.700302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2316846433.mount: Deactivated successfully.
May 8 00:39:23.712455 containerd[1464]: time="2025-05-08T00:39:23.712391289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:23.713152 containerd[1464]: time="2025-05-08T00:39:23.713057649Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
May 8 00:39:23.714269 containerd[1464]: time="2025-05-08T00:39:23.714219659Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:23.718890 containerd[1464]: time="2025-05-08T00:39:23.718828283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:23.719619 containerd[1464]: time="2025-05-08T00:39:23.719573671Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 546.695915ms"
May 8 00:39:23.719619 containerd[1464]: time="2025-05-08T00:39:23.719604880Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
May 8 00:39:23.745037 containerd[1464]: time="2025-05-08T00:39:23.744974717Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 8 00:39:24.306586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount369956757.mount: Deactivated successfully.
May 8 00:39:26.190443 containerd[1464]: time="2025-05-08T00:39:26.190356434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:26.191185 containerd[1464]: time="2025-05-08T00:39:26.191111189Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
May 8 00:39:26.192592 containerd[1464]: time="2025-05-08T00:39:26.192554867Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:26.195687 containerd[1464]: time="2025-05-08T00:39:26.195632250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:26.197194 containerd[1464]: time="2025-05-08T00:39:26.197158793Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.452134654s"
May 8 00:39:26.197253 containerd[1464]: time="2025-05-08T00:39:26.197199870Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
May 8 00:39:29.252234 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:39:29.262868 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:39:29.282339 systemd[1]: Reloading requested from client PID 2125 ('systemctl') (unit session-7.scope)...
May 8 00:39:29.282358 systemd[1]: Reloading...
May 8 00:39:29.354741 zram_generator::config[2164]: No configuration found.
May 8 00:39:29.565579 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:39:29.645554 systemd[1]: Reloading finished in 362 ms.
May 8 00:39:29.699097 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 8 00:39:29.699196 systemd[1]: kubelet.service: Failed with result 'signal'.
May 8 00:39:29.699468 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:39:29.701194 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:39:29.864111 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:39:29.869220 (kubelet)[2212]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 8 00:39:29.910473 kubelet[2212]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 00:39:29.910473 kubelet[2212]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 8 00:39:29.910473 kubelet[2212]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 00:39:29.910937 kubelet[2212]: I0508 00:39:29.910530 2212 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 8 00:39:30.378285 kubelet[2212]: I0508 00:39:30.378235 2212 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 8 00:39:30.378285 kubelet[2212]: I0508 00:39:30.378271 2212 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 8 00:39:30.378502 kubelet[2212]: I0508 00:39:30.378485 2212 server.go:927] "Client rotation is on, will bootstrap in background"
May 8 00:39:30.393939 kubelet[2212]: I0508 00:39:30.393844 2212 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 8 00:39:30.394217 kubelet[2212]: E0508 00:39:30.394179 2212 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.74:6443: connect: connection refused
May 8 00:39:30.404691 kubelet[2212]: I0508 00:39:30.404650 2212 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 8 00:39:30.406456 kubelet[2212]: I0508 00:39:30.406413 2212 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 8 00:39:30.406637 kubelet[2212]: I0508 00:39:30.406447 2212 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 8 00:39:30.407073 kubelet[2212]: I0508 00:39:30.407050 2212 topology_manager.go:138] "Creating topology manager with none policy"
May 8 00:39:30.407073 kubelet[2212]: I0508 00:39:30.407065 2212 container_manager_linux.go:301] "Creating device plugin manager"
May 8 00:39:30.407244 kubelet[2212]: I0508 00:39:30.407223 2212 state_mem.go:36] "Initialized new in-memory state store"
May 8 00:39:30.407894 kubelet[2212]: I0508 00:39:30.407874 2212 kubelet.go:400] "Attempting to sync node with API server"
May 8 00:39:30.407894 kubelet[2212]: I0508 00:39:30.407891 2212 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 8 00:39:30.407943 kubelet[2212]: I0508 00:39:30.407913 2212 kubelet.go:312] "Adding apiserver pod source"
May 8 00:39:30.407943 kubelet[2212]: I0508 00:39:30.407934 2212 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 8 00:39:30.408561 kubelet[2212]: W0508 00:39:30.408379 2212 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused
May 8 00:39:30.408561 kubelet[2212]: E0508 00:39:30.408431 2212 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused
May 8 00:39:30.408561 kubelet[2212]: W0508 00:39:30.408467 2212 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.74:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused
May 8 00:39:30.408561 kubelet[2212]: E0508 00:39:30.408513 2212 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.74:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused
May 8 00:39:30.412724 kubelet[2212]: I0508 00:39:30.412698 2212 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 8 00:39:30.414396 kubelet[2212]: I0508 00:39:30.414377 2212 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 8 00:39:30.414495 kubelet[2212]: W0508 00:39:30.414469 2212 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 8 00:39:30.415361 kubelet[2212]: I0508 00:39:30.415343 2212 server.go:1264] "Started kubelet"
May 8 00:39:30.415867 kubelet[2212]: I0508 00:39:30.415836 2212 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 8 00:39:30.416232 kubelet[2212]: I0508 00:39:30.416181 2212 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 8 00:39:30.417241 kubelet[2212]: I0508 00:39:30.416884 2212 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 8 00:39:30.419675 kubelet[2212]: I0508 00:39:30.417435 2212 server.go:455] "Adding debug handlers to kubelet server"
May 8 00:39:30.419675 kubelet[2212]: I0508 00:39:30.417978 2212 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 8 00:39:30.420814 kubelet[2212]: E0508 00:39:30.420773 2212 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
May 8 00:39:30.420873 kubelet[2212]: I0508 00:39:30.420804 2212 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 8 00:39:30.420994 kubelet[2212]: I0508 00:39:30.420830 2212 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 8 00:39:30.421096 kubelet[2212]: I0508 00:39:30.421084 2212 reconciler.go:26] "Reconciler: start to sync state"
May 8 00:39:30.421241 kubelet[2212]: W0508 00:39:30.421207 2212 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused
May 8 00:39:30.421344 kubelet[2212]: E0508 00:39:30.421320 2212 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused
May 8 00:39:30.423558 kubelet[2212]: E0508 00:39:30.423451 2212 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.74:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.74:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d665abe4b3224 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:39:30.415309348 +0000 UTC m=+0.541940216,LastTimestamp:2025-05-08 00:39:30.415309348 +0000 UTC m=+0.541940216,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 8 00:39:30.423743 kubelet[2212]: E0508 00:39:30.423716 2212 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="200ms"
May 8 00:39:30.424331 kubelet[2212]: I0508 00:39:30.424276 2212 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 8 00:39:30.424480 kubelet[2212]: E0508 00:39:30.424395 2212 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 8 00:39:30.425319 kubelet[2212]: I0508 00:39:30.425256 2212 factory.go:221] Registration of the containerd container factory successfully
May 8 00:39:30.425319 kubelet[2212]: I0508 00:39:30.425273 2212 factory.go:221] Registration of the systemd container factory successfully
May 8 00:39:30.437131 kubelet[2212]: I0508 00:39:30.437009 2212 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 8 00:39:30.438536 kubelet[2212]: I0508 00:39:30.438496 2212 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 8 00:39:30.438536 kubelet[2212]: I0508 00:39:30.438539 2212 status_manager.go:217] "Starting to sync pod status with apiserver"
May 8 00:39:30.438608 kubelet[2212]: I0508 00:39:30.438560 2212 kubelet.go:2337] "Starting kubelet main sync loop"
May 8 00:39:30.438641 kubelet[2212]: E0508 00:39:30.438610 2212 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 8 00:39:30.443175 kubelet[2212]: W0508 00:39:30.443126 2212 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused
May 8 00:39:30.443228 kubelet[2212]: E0508 00:39:30.443179 2212 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused
May 8 00:39:30.444069 kubelet[2212]: I0508 00:39:30.444034 2212 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 8 00:39:30.444069 kubelet[2212]: I0508 00:39:30.444051 2212 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 8 00:39:30.444069 kubelet[2212]: I0508 00:39:30.444085 2212 state_mem.go:36] "Initialized new in-memory state store"
May 8 00:39:30.523112 kubelet[2212]: I0508 00:39:30.523054 2212 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 8 00:39:30.523595 kubelet[2212]: E0508 00:39:30.523557 2212 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost"
May 8 00:39:30.539781 kubelet[2212]: E0508 00:39:30.539737 2212 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 8 00:39:30.624757 kubelet[2212]: E0508 00:39:30.624701 2212 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="400ms"
May 8 00:39:30.725015 kubelet[2212]: I0508 00:39:30.724944 2212 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 8 00:39:30.725372 kubelet[2212]: E0508 00:39:30.725318 2212 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost"
May 8 00:39:30.740465 kubelet[2212]: E0508 00:39:30.740407 2212 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 8 00:39:30.849332 kubelet[2212]: I0508 00:39:30.849249 2212 policy_none.go:49] "None policy: Start"
May 8 00:39:30.850251 kubelet[2212]: I0508 00:39:30.850223 2212 memory_manager.go:170] "Starting memorymanager" policy="None"
May 8 00:39:30.850340 kubelet[2212]: I0508 00:39:30.850263 2212 state_mem.go:35] "Initializing new in-memory state store"
May 8 00:39:30.858881 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 8 00:39:30.880553 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 8 00:39:30.884018 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 8 00:39:30.900988 kubelet[2212]: I0508 00:39:30.900932 2212 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:39:30.901256 kubelet[2212]: I0508 00:39:30.901196 2212 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:39:30.901521 kubelet[2212]: I0508 00:39:30.901360 2212 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:39:30.903019 kubelet[2212]: E0508 00:39:30.902985 2212 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 8 00:39:31.025417 kubelet[2212]: E0508 00:39:31.025259 2212 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="800ms" May 8 00:39:31.127008 kubelet[2212]: I0508 00:39:31.126942 2212 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:39:31.127302 kubelet[2212]: E0508 00:39:31.127267 2212 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" May 8 00:39:31.141557 kubelet[2212]: I0508 00:39:31.141486 2212 topology_manager.go:215] "Topology Admit Handler" podUID="27fb974e854a9371c9695a20952e2ca9" podNamespace="kube-system" podName="kube-apiserver-localhost" May 8 00:39:31.142891 kubelet[2212]: I0508 00:39:31.142861 2212 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 8 00:39:31.143710 kubelet[2212]: I0508 00:39:31.143688 2212 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 8 00:39:31.150164 systemd[1]: Created slice kubepods-burstable-pod27fb974e854a9371c9695a20952e2ca9.slice - libcontainer container kubepods-burstable-pod27fb974e854a9371c9695a20952e2ca9.slice. May 8 00:39:31.168413 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice. May 8 00:39:31.176227 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice. 
May 8 00:39:31.226172 kubelet[2212]: I0508 00:39:31.226139 2212 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:31.226172 kubelet[2212]: I0508 00:39:31.226173 2212 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:31.226286 kubelet[2212]: I0508 00:39:31.226191 2212 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:31.226286 kubelet[2212]: I0508 00:39:31.226208 2212 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/27fb974e854a9371c9695a20952e2ca9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"27fb974e854a9371c9695a20952e2ca9\") " pod="kube-system/kube-apiserver-localhost" May 8 00:39:31.226286 kubelet[2212]: I0508 00:39:31.226240 2212 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/27fb974e854a9371c9695a20952e2ca9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"27fb974e854a9371c9695a20952e2ca9\") " pod="kube-system/kube-apiserver-localhost" May 8 00:39:31.226286 kubelet[2212]: I0508 00:39:31.226257 2212 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:31.226400 kubelet[2212]: I0508 00:39:31.226337 2212 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:31.226437 kubelet[2212]: I0508 00:39:31.226418 2212 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 8 00:39:31.226460 kubelet[2212]: I0508 00:39:31.226441 2212 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/27fb974e854a9371c9695a20952e2ca9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"27fb974e854a9371c9695a20952e2ca9\") " 
pod="kube-system/kube-apiserver-localhost" May 8 00:39:31.297015 kubelet[2212]: W0508 00:39:31.296800 2212 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused May 8 00:39:31.297015 kubelet[2212]: E0508 00:39:31.296884 2212 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused May 8 00:39:31.466197 kubelet[2212]: E0508 00:39:31.466149 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:31.466855 containerd[1464]: time="2025-05-08T00:39:31.466805474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:27fb974e854a9371c9695a20952e2ca9,Namespace:kube-system,Attempt:0,}" May 8 00:39:31.467231 kubelet[2212]: W0508 00:39:31.467085 2212 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused May 8 00:39:31.467231 kubelet[2212]: E0508 00:39:31.467144 2212 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused May 8 00:39:31.474306 kubelet[2212]: E0508 00:39:31.474273 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:31.474755 containerd[1464]: time="2025-05-08T00:39:31.474701455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 8 00:39:31.478947 kubelet[2212]: E0508 00:39:31.478918 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:31.479345 containerd[1464]: time="2025-05-08T00:39:31.479304048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 8 00:39:31.593335 kubelet[2212]: W0508 00:39:31.593203 2212 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.74:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused May 8 00:39:31.593335 kubelet[2212]: E0508 00:39:31.593267 2212 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.74:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused May 8 00:39:31.714924 kubelet[2212]: W0508 00:39:31.714868 2212 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused May 8 00:39:31.714924 kubelet[2212]: E0508 00:39:31.714919 2212 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused May 8 00:39:31.826225 kubelet[2212]: E0508 00:39:31.826168 2212 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="1.6s" May 8 00:39:31.928863 kubelet[2212]: I0508 00:39:31.928721 2212 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:39:31.929194 kubelet[2212]: E0508 00:39:31.929158 2212 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" May 8 00:39:32.528151 kubelet[2212]: E0508 00:39:32.528089 2212 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.74:6443: connect: connection refused May 8 00:39:32.716173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3709533996.mount: Deactivated successfully. May 8 00:39:32.722777 containerd[1464]: time="2025-05-08T00:39:32.722723497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:39:32.723740 containerd[1464]: time="2025-05-08T00:39:32.723701281Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:39:32.724650 containerd[1464]: time="2025-05-08T00:39:32.724620315Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:39:32.725506 containerd[1464]: time="2025-05-08T00:39:32.725454269Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:39:32.726299 containerd[1464]: time="2025-05-08T00:39:32.726245003Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:39:32.727082 containerd[1464]: time="2025-05-08T00:39:32.727040935Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 8 00:39:32.727949 containerd[1464]: time="2025-05-08T00:39:32.727914043Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:39:32.731922 containerd[1464]: time="2025-05-08T00:39:32.731879722Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:39:32.732697 containerd[1464]: time="2025-05-08T00:39:32.732652221Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.253260749s" May 8 00:39:32.733847 containerd[1464]: time="2025-05-08T00:39:32.733819971Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.2669304s" May 8 00:39:32.735052 containerd[1464]: time="2025-05-08T00:39:32.735020052Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.260236353s" May 8 00:39:32.887464 containerd[1464]: time="2025-05-08T00:39:32.887118816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:32.887464 containerd[1464]: time="2025-05-08T00:39:32.887270420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:32.887628 containerd[1464]: time="2025-05-08T00:39:32.887507916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:32.888171 containerd[1464]: time="2025-05-08T00:39:32.887633241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:32.888171 containerd[1464]: time="2025-05-08T00:39:32.887633762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:32.888171 containerd[1464]: time="2025-05-08T00:39:32.887752655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:32.888171 containerd[1464]: time="2025-05-08T00:39:32.888017802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:32.888918 containerd[1464]: time="2025-05-08T00:39:32.888747030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:32.889754 containerd[1464]: time="2025-05-08T00:39:32.889621500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:32.889804 containerd[1464]: time="2025-05-08T00:39:32.889743970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:32.889804 containerd[1464]: time="2025-05-08T00:39:32.889766001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:32.890060 containerd[1464]: time="2025-05-08T00:39:32.889955687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:32.918868 systemd[1]: Started cri-containerd-2bc4f5ad9e1341b7601c543b2484b2b9fef74ef543e0039aeafc64198396b72e.scope - libcontainer container 2bc4f5ad9e1341b7601c543b2484b2b9fef74ef543e0039aeafc64198396b72e. May 8 00:39:32.920931 systemd[1]: Started cri-containerd-4490ab3f188e4c45123a09ee928e11fe72b327780bacf7809b8f93b089d88c08.scope - libcontainer container 4490ab3f188e4c45123a09ee928e11fe72b327780bacf7809b8f93b089d88c08. May 8 00:39:32.925202 systemd[1]: Started cri-containerd-62e0bb8b9a6da49029a9745adbc19d157e92f7ef1c8b4a7ee6ad705159592c2f.scope - libcontainer container 62e0bb8b9a6da49029a9745adbc19d157e92f7ef1c8b4a7ee6ad705159592c2f. May 8 00:39:32.966427 containerd[1464]: time="2025-05-08T00:39:32.966371231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"2bc4f5ad9e1341b7601c543b2484b2b9fef74ef543e0039aeafc64198396b72e\"" May 8 00:39:32.969225 kubelet[2212]: E0508 00:39:32.968860 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:32.971246 containerd[1464]: time="2025-05-08T00:39:32.971205248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:27fb974e854a9371c9695a20952e2ca9,Namespace:kube-system,Attempt:0,} returns sandbox id \"4490ab3f188e4c45123a09ee928e11fe72b327780bacf7809b8f93b089d88c08\"" May 8 00:39:32.971361 containerd[1464]: time="2025-05-08T00:39:32.971300737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"62e0bb8b9a6da49029a9745adbc19d157e92f7ef1c8b4a7ee6ad705159592c2f\"" May 8 00:39:32.972113 kubelet[2212]: E0508 00:39:32.972090 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:32.972414 kubelet[2212]: E0508 00:39:32.972132 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:32.974554 containerd[1464]: time="2025-05-08T00:39:32.974404970Z" level=info msg="CreateContainer within sandbox \"4490ab3f188e4c45123a09ee928e11fe72b327780bacf7809b8f93b089d88c08\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 00:39:32.974554 containerd[1464]: time="2025-05-08T00:39:32.974429676Z" level=info msg="CreateContainer within sandbox \"2bc4f5ad9e1341b7601c543b2484b2b9fef74ef543e0039aeafc64198396b72e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 00:39:32.974554 containerd[1464]: time="2025-05-08T00:39:32.974402555Z" level=info msg="CreateContainer within sandbox \"62e0bb8b9a6da49029a9745adbc19d157e92f7ef1c8b4a7ee6ad705159592c2f\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 00:39:33.008049 containerd[1464]: time="2025-05-08T00:39:33.007997049Z" level=info msg="CreateContainer within sandbox \"2bc4f5ad9e1341b7601c543b2484b2b9fef74ef543e0039aeafc64198396b72e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bbbee5fc90369eb06222337120509ea554a01ad30d6789319aef6dd59af369b7\"" May 8 00:39:33.008706 containerd[1464]: time="2025-05-08T00:39:33.008675151Z" level=info msg="StartContainer for \"bbbee5fc90369eb06222337120509ea554a01ad30d6789319aef6dd59af369b7\"" May 8 00:39:33.009470 containerd[1464]: time="2025-05-08T00:39:33.009423805Z" level=info msg="CreateContainer within sandbox \"62e0bb8b9a6da49029a9745adbc19d157e92f7ef1c8b4a7ee6ad705159592c2f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"106c344154c4a28d076582fe5b52a80dc81910fd32fbe3872b01f0633e8668c9\"" May 8 00:39:33.009758 containerd[1464]: time="2025-05-08T00:39:33.009728837Z" level=info msg="StartContainer for \"106c344154c4a28d076582fe5b52a80dc81910fd32fbe3872b01f0633e8668c9\"" May 8 00:39:33.013715 containerd[1464]: time="2025-05-08T00:39:33.013677794Z" level=info msg="CreateContainer within sandbox \"4490ab3f188e4c45123a09ee928e11fe72b327780bacf7809b8f93b089d88c08\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f9c72ad944036399ccb9cbc3ad4d1d533eefc5be1ae3aa644a4765b2c6649746\"" May 8 00:39:33.014692 containerd[1464]: time="2025-05-08T00:39:33.014159528Z" level=info msg="StartContainer for \"f9c72ad944036399ccb9cbc3ad4d1d533eefc5be1ae3aa644a4765b2c6649746\"" May 8 00:39:33.038818 systemd[1]: Started cri-containerd-bbbee5fc90369eb06222337120509ea554a01ad30d6789319aef6dd59af369b7.scope - libcontainer container bbbee5fc90369eb06222337120509ea554a01ad30d6789319aef6dd59af369b7. May 8 00:39:33.043004 systemd[1]: Started cri-containerd-106c344154c4a28d076582fe5b52a80dc81910fd32fbe3872b01f0633e8668c9.scope - libcontainer container 106c344154c4a28d076582fe5b52a80dc81910fd32fbe3872b01f0633e8668c9. May 8 00:39:33.044639 systemd[1]: Started cri-containerd-f9c72ad944036399ccb9cbc3ad4d1d533eefc5be1ae3aa644a4765b2c6649746.scope - libcontainer container f9c72ad944036399ccb9cbc3ad4d1d533eefc5be1ae3aa644a4765b2c6649746. 
May 8 00:39:33.087392 containerd[1464]: time="2025-05-08T00:39:33.087251588Z" level=info msg="StartContainer for \"106c344154c4a28d076582fe5b52a80dc81910fd32fbe3872b01f0633e8668c9\" returns successfully" May 8 00:39:33.093057 containerd[1464]: time="2025-05-08T00:39:33.092990342Z" level=info msg="StartContainer for \"bbbee5fc90369eb06222337120509ea554a01ad30d6789319aef6dd59af369b7\" returns successfully" May 8 00:39:33.102514 containerd[1464]: time="2025-05-08T00:39:33.102463200Z" level=info msg="StartContainer for \"f9c72ad944036399ccb9cbc3ad4d1d533eefc5be1ae3aa644a4765b2c6649746\" returns successfully" May 8 00:39:33.113442 kubelet[2212]: W0508 00:39:33.113380 2212 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused May 8 00:39:33.113442 kubelet[2212]: E0508 00:39:33.113425 2212 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused May 8 00:39:33.452544 kubelet[2212]: E0508 00:39:33.452451 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:33.457332 kubelet[2212]: E0508 00:39:33.457291 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:33.458025 kubelet[2212]: E0508 00:39:33.457781 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:33.531636 kubelet[2212]: I0508 00:39:33.531588 2212 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:39:33.948826 kubelet[2212]: E0508 00:39:33.948783 2212 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 8 00:39:34.021088 kubelet[2212]: I0508 00:39:34.021027 2212 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 8 00:39:34.067622 kubelet[2212]: E0508 00:39:34.067525 2212 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.183d665abe4b3224 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:39:30.415309348 +0000 UTC m=+0.541940216,LastTimestamp:2025-05-08 00:39:30.415309348 +0000 UTC m=+0.541940216,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 00:39:34.120969 kubelet[2212]: E0508 00:39:34.120829 2212 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.183d665abed5b8ca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:39:30.424387786 +0000 UTC m=+0.551018654,LastTimestamp:2025-05-08 00:39:30.424387786 +0000 UTC m=+0.551018654,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 00:39:34.410008 kubelet[2212]: I0508 00:39:34.409954 2212 apiserver.go:52] "Watching apiserver" May 8 00:39:34.421946 kubelet[2212]: I0508 00:39:34.421884 2212 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:39:34.482565 kubelet[2212]: E0508 00:39:34.482527 2212 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 8 00:39:34.483070 kubelet[2212]: E0508 00:39:34.483035 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:35.337638 kubelet[2212]: E0508 00:39:35.337523 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:35.462789 kubelet[2212]: E0508 00:39:35.462742 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:36.304731 systemd[1]: Reloading requested from client PID 2487 ('systemctl') (unit session-7.scope)... May 8 00:39:36.304749 systemd[1]: Reloading... May 8 00:39:36.376729 zram_generator::config[2532]: No configuration found. May 8 00:39:36.482201 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:39:36.576346 systemd[1]: Reloading finished in 271 ms. May 8 00:39:36.625251 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:39:36.639213 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:39:36.639534 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:39:36.639595 systemd[1]: kubelet.service: Consumed 1.087s CPU time, 117.3M memory peak, 0B memory swap peak. May 8 00:39:36.653051 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:39:36.825938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:39:36.830799 (kubelet)[2571]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:39:36.880544 kubelet[2571]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:39:36.880544 kubelet[2571]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
May 8 00:39:36.880544 kubelet[2571]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:39:36.883034 kubelet[2571]: I0508 00:39:36.880620 2571 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:39:36.886948 kubelet[2571]: I0508 00:39:36.886905 2571 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 00:39:36.886948 kubelet[2571]: I0508 00:39:36.886942 2571 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:39:36.887190 kubelet[2571]: I0508 00:39:36.887163 2571 server.go:927] "Client rotation is on, will bootstrap in background" May 8 00:39:36.888488 kubelet[2571]: I0508 00:39:36.888462 2571 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 8 00:39:36.889703 kubelet[2571]: I0508 00:39:36.889601 2571 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:39:36.901174 kubelet[2571]: I0508 00:39:36.901149 2571 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 8 00:39:36.901557 kubelet[2571]: I0508 00:39:36.901520 2571 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:39:36.901798 kubelet[2571]: I0508 00:39:36.901612 2571 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 00:39:36.902923 kubelet[2571]: I0508 00:39:36.901930 2571 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:39:36.902923 kubelet[2571]: I0508 00:39:36.901944 2571 container_manager_linux.go:301] "Creating device plugin manager" May 8 00:39:36.902923 kubelet[2571]: I0508 00:39:36.902023 2571 state_mem.go:36] 
"Initialized new in-memory state store" May 8 00:39:36.902923 kubelet[2571]: I0508 00:39:36.902155 2571 kubelet.go:400] "Attempting to sync node with API server" May 8 00:39:36.902923 kubelet[2571]: I0508 00:39:36.902168 2571 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:39:36.902923 kubelet[2571]: I0508 00:39:36.902194 2571 kubelet.go:312] "Adding apiserver pod source" May 8 00:39:36.902923 kubelet[2571]: I0508 00:39:36.902218 2571 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:39:36.902923 kubelet[2571]: I0508 00:39:36.902906 2571 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 8 00:39:36.903252 kubelet[2571]: I0508 00:39:36.903139 2571 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:39:36.903618 kubelet[2571]: I0508 00:39:36.903590 2571 server.go:1264] "Started kubelet" May 8 00:39:36.906674 kubelet[2571]: I0508 00:39:36.904364 2571 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:39:36.906674 kubelet[2571]: I0508 00:39:36.904687 2571 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:39:36.906674 kubelet[2571]: I0508 00:39:36.904721 2571 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:39:36.906674 kubelet[2571]: I0508 00:39:36.905722 2571 server.go:455] "Adding debug handlers to kubelet server" May 8 00:39:36.911175 kubelet[2571]: I0508 00:39:36.909867 2571 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:39:36.913150 kubelet[2571]: I0508 00:39:36.913120 2571 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 00:39:36.913235 kubelet[2571]: I0508 00:39:36.913215 2571 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:39:36.913395 kubelet[2571]: I0508 00:39:36.913375 2571 reconciler.go:26] "Reconciler: start to sync state" May 8 00:39:36.913512 kubelet[2571]: E0508 00:39:36.913495 2571 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:39:36.914649 kubelet[2571]: I0508 00:39:36.914617 2571 factory.go:221] Registration of the systemd container factory successfully May 8 00:39:36.916297 kubelet[2571]: I0508 00:39:36.916164 2571 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:39:36.918454 kubelet[2571]: I0508 00:39:36.918239 2571 factory.go:221] Registration of the containerd container factory successfully May 8 00:39:36.925613 kubelet[2571]: I0508 00:39:36.925571 2571 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:39:36.927042 kubelet[2571]: I0508 00:39:36.927015 2571 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 00:39:36.927105 kubelet[2571]: I0508 00:39:36.927059 2571 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:39:36.927105 kubelet[2571]: I0508 00:39:36.927083 2571 kubelet.go:2337] "Starting kubelet main sync loop" May 8 00:39:36.927149 kubelet[2571]: E0508 00:39:36.927131 2571 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:39:36.950554 kubelet[2571]: I0508 00:39:36.950525 2571 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:39:36.950554 kubelet[2571]: I0508 00:39:36.950543 2571 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:39:36.950643 kubelet[2571]: I0508 00:39:36.950565 2571 state_mem.go:36] "Initialized new in-memory state store" May 8 00:39:36.950755 kubelet[2571]: I0508 00:39:36.950735 2571 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 00:39:36.950788 kubelet[2571]: I0508 00:39:36.950750 2571 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 00:39:36.950788 kubelet[2571]: I0508 00:39:36.950772 2571 policy_none.go:49] "None policy: Start" May 8 00:39:36.951697 kubelet[2571]: I0508 00:39:36.951336 2571 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:39:36.951697 kubelet[2571]: I0508 00:39:36.951366 2571 state_mem.go:35] "Initializing new in-memory state store" May 8 00:39:36.951697 kubelet[2571]: I0508 00:39:36.951507 2571 state_mem.go:75] "Updated machine memory state" May 8 00:39:36.956162 kubelet[2571]: I0508 00:39:36.956125 2571 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:39:36.956372 kubelet[2571]: I0508 00:39:36.956311 2571 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:39:36.957183 kubelet[2571]: I0508 00:39:36.956499 2571 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:39:37.028110 kubelet[2571]: I0508 00:39:37.028058 2571 topology_manager.go:215] "Topology Admit Handler" podUID="27fb974e854a9371c9695a20952e2ca9" podNamespace="kube-system" podName="kube-apiserver-localhost" May 8 00:39:37.028252 kubelet[2571]: I0508 00:39:37.028176 2571 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 8 00:39:37.028252 kubelet[2571]: I0508 00:39:37.028234 2571 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 8 00:39:37.035239 kubelet[2571]: E0508 00:39:37.035187 2571 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 8 00:39:37.065509 kubelet[2571]: I0508 00:39:37.065485 2571 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:39:37.071760 kubelet[2571]: I0508 00:39:37.071727 2571 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 8 00:39:37.071853 kubelet[2571]: I0508 00:39:37.071833 2571 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 8 00:39:37.215161 kubelet[2571]: I0508 00:39:37.215133 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/27fb974e854a9371c9695a20952e2ca9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"27fb974e854a9371c9695a20952e2ca9\") " pod="kube-system/kube-apiserver-localhost" May 8 00:39:37.215161 kubelet[2571]: I0508 00:39:37.215164 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:37.215344 kubelet[2571]: I0508 00:39:37.215183 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 8 00:39:37.215344 kubelet[2571]: I0508 00:39:37.215196 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/27fb974e854a9371c9695a20952e2ca9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"27fb974e854a9371c9695a20952e2ca9\") " pod="kube-system/kube-apiserver-localhost" May 8 00:39:37.215344 kubelet[2571]: I0508 00:39:37.215212 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/27fb974e854a9371c9695a20952e2ca9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"27fb974e854a9371c9695a20952e2ca9\") " pod="kube-system/kube-apiserver-localhost" May 8 00:39:37.215344 kubelet[2571]: I0508 00:39:37.215228 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:37.215344 kubelet[2571]: I0508 00:39:37.215241 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:37.215464 kubelet[2571]: I0508 00:39:37.215256 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:37.215464 kubelet[2571]: I0508 00:39:37.215271 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:37.336684 kubelet[2571]: E0508 00:39:37.336535 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:37.336684 kubelet[2571]: E0508 00:39:37.336680 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:37.336870 kubelet[2571]: E0508 00:39:37.336849 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:37.903907 kubelet[2571]: I0508 00:39:37.903664 2571 apiserver.go:52] "Watching apiserver" May 8 00:39:37.913744 kubelet[2571]: I0508 00:39:37.913713 2571 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:39:37.939142 kubelet[2571]: E0508 00:39:37.939104 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:37.939313 kubelet[2571]: E0508 00:39:37.939113 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:37.939376 kubelet[2571]: E0508 00:39:37.939340 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:37.993821 kubelet[2571]: I0508 00:39:37.993735 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.993704321 podStartE2EDuration="993.704321ms" podCreationTimestamp="2025-05-08 00:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:39:37.984205914 +0000 UTC m=+1.148468800" watchObservedRunningTime="2025-05-08 00:39:37.993704321 +0000 UTC m=+1.157967197" May 8 00:39:38.010890 kubelet[2571]: I0508 00:39:38.010816 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.010782363 podStartE2EDuration="1.010782363s" podCreationTimestamp="2025-05-08 00:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:39:37.99437017 +0000 UTC m=+1.158633046" watchObservedRunningTime="2025-05-08 00:39:38.010782363 +0000 UTC m=+1.175045239" May 8 00:39:38.028343 kubelet[2571]: I0508 00:39:38.028008 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.027980721 podStartE2EDuration="3.027980721s" podCreationTimestamp="2025-05-08 00:39:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:39:38.011393375 +0000 UTC m=+1.175656251" watchObservedRunningTime="2025-05-08 00:39:38.027980721 +0000 UTC m=+1.192243597" May 8 00:39:38.940732 kubelet[2571]: E0508 00:39:38.940674 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:41.603130 sudo[1652]: pam_unix(sudo:session): session closed for user root May 8 00:39:41.605128 
sshd[1649]: pam_unix(sshd:session): session closed for user core May 8 00:39:41.609425 systemd[1]: sshd@6-10.0.0.74:22-10.0.0.1:35484.service: Deactivated successfully. May 8 00:39:41.611647 systemd[1]: session-7.scope: Deactivated successfully. May 8 00:39:41.611931 systemd[1]: session-7.scope: Consumed 5.704s CPU time, 194.6M memory peak, 0B memory swap peak. May 8 00:39:41.612413 systemd-logind[1453]: Session 7 logged out. Waiting for processes to exit. May 8 00:39:41.613478 systemd-logind[1453]: Removed session 7. May 8 00:39:42.834245 kubelet[2571]: E0508 00:39:42.834197 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:46.249300 kubelet[2571]: E0508 00:39:46.249262 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:46.951685 kubelet[2571]: E0508 00:39:46.951616 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:46.997251 kubelet[2571]: E0508 00:39:46.997207 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:47.952324 kubelet[2571]: E0508 00:39:47.952279 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:48.954401 kubelet[2571]: E0508 00:39:48.954351 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:51.374190 update_engine[1455]: I20250508 00:39:51.374099 1455 update_attempter.cc:509] Updating boot flags... May 8 00:39:51.451702 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2670) May 8 00:39:51.522939 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2674) May 8 00:39:51.606708 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2674) May 8 00:39:51.724160 kubelet[2571]: I0508 00:39:51.724131 2571 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 00:39:51.724679 containerd[1464]: time="2025-05-08T00:39:51.724528529Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 8 00:39:51.724958 kubelet[2571]: I0508 00:39:51.724775 2571 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 00:39:52.837834 kubelet[2571]: E0508 00:39:52.837793 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:52.923081 kubelet[2571]: I0508 00:39:52.923011 2571 topology_manager.go:215] "Topology Admit Handler" podUID="31f214bd-43a7-4eeb-a251-fdea88c6609a" podNamespace="kube-system" podName="kube-proxy-6k6fp" May 8 00:39:52.938855 systemd[1]: Created slice kubepods-besteffort-pod31f214bd_43a7_4eeb_a251_fdea88c6609a.slice - libcontainer container kubepods-besteffort-pod31f214bd_43a7_4eeb_a251_fdea88c6609a.slice. May 8 00:39:53.003210 kubelet[2571]: I0508 00:39:53.003163 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/31f214bd-43a7-4eeb-a251-fdea88c6609a-kube-proxy\") pod \"kube-proxy-6k6fp\" (UID: \"31f214bd-43a7-4eeb-a251-fdea88c6609a\") " pod="kube-system/kube-proxy-6k6fp" May 8 00:39:53.003210 kubelet[2571]: I0508 00:39:53.003203 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31f214bd-43a7-4eeb-a251-fdea88c6609a-lib-modules\") pod \"kube-proxy-6k6fp\" (UID: \"31f214bd-43a7-4eeb-a251-fdea88c6609a\") " pod="kube-system/kube-proxy-6k6fp" May 8 00:39:53.003210 kubelet[2571]: I0508 00:39:53.003224 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31f214bd-43a7-4eeb-a251-fdea88c6609a-xtables-lock\") pod \"kube-proxy-6k6fp\" (UID: \"31f214bd-43a7-4eeb-a251-fdea88c6609a\") " pod="kube-system/kube-proxy-6k6fp" May 8 00:39:53.003447 kubelet[2571]: I0508 00:39:53.003246 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvksq\" (UniqueName: \"kubernetes.io/projected/31f214bd-43a7-4eeb-a251-fdea88c6609a-kube-api-access-xvksq\") pod \"kube-proxy-6k6fp\" (UID: \"31f214bd-43a7-4eeb-a251-fdea88c6609a\") " pod="kube-system/kube-proxy-6k6fp" May 8 00:39:53.391537 kubelet[2571]: I0508 00:39:53.391493 2571 topology_manager.go:215] "Topology Admit Handler" podUID="5f8dcebb-3026-4e4c-85a5-aa45ed7196b7" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-kn7qb" May 8 00:39:53.398082 systemd[1]: Created slice kubepods-besteffort-pod5f8dcebb_3026_4e4c_85a5_aa45ed7196b7.slice - libcontainer container kubepods-besteffort-pod5f8dcebb_3026_4e4c_85a5_aa45ed7196b7.slice. 
May 8 00:39:53.404978 kubelet[2571]: I0508 00:39:53.404942 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb9b4\" (UniqueName: \"kubernetes.io/projected/5f8dcebb-3026-4e4c-85a5-aa45ed7196b7-kube-api-access-bb9b4\") pod \"tigera-operator-797db67f8-kn7qb\" (UID: \"5f8dcebb-3026-4e4c-85a5-aa45ed7196b7\") " pod="tigera-operator/tigera-operator-797db67f8-kn7qb" May 8 00:39:53.405047 kubelet[2571]: I0508 00:39:53.404983 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5f8dcebb-3026-4e4c-85a5-aa45ed7196b7-var-lib-calico\") pod \"tigera-operator-797db67f8-kn7qb\" (UID: \"5f8dcebb-3026-4e4c-85a5-aa45ed7196b7\") " pod="tigera-operator/tigera-operator-797db67f8-kn7qb" May 8 00:39:53.550747 kubelet[2571]: E0508 00:39:53.550291 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:53.551339 containerd[1464]: time="2025-05-08T00:39:53.551275210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6k6fp,Uid:31f214bd-43a7-4eeb-a251-fdea88c6609a,Namespace:kube-system,Attempt:0,}" May 8 00:39:54.001449 containerd[1464]: time="2025-05-08T00:39:54.001396728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-kn7qb,Uid:5f8dcebb-3026-4e4c-85a5-aa45ed7196b7,Namespace:tigera-operator,Attempt:0,}" May 8 00:39:54.164495 containerd[1464]: time="2025-05-08T00:39:54.164392393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:54.164495 containerd[1464]: time="2025-05-08T00:39:54.164459920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:54.164495 containerd[1464]: time="2025-05-08T00:39:54.164474678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:54.164737 containerd[1464]: time="2025-05-08T00:39:54.164590658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:54.188801 systemd[1]: Started cri-containerd-6f87378a2c94633bc2d736b5a48175f199bad966ae2e92805a9cd1bbcf0358a8.scope - libcontainer container 6f87378a2c94633bc2d736b5a48175f199bad966ae2e92805a9cd1bbcf0358a8. 
May 8 00:39:54.212270 containerd[1464]: time="2025-05-08T00:39:54.212221308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6k6fp,Uid:31f214bd-43a7-4eeb-a251-fdea88c6609a,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f87378a2c94633bc2d736b5a48175f199bad966ae2e92805a9cd1bbcf0358a8\"" May 8 00:39:54.213203 kubelet[2571]: E0508 00:39:54.213171 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:54.215530 containerd[1464]: time="2025-05-08T00:39:54.215446566Z" level=info msg="CreateContainer within sandbox \"6f87378a2c94633bc2d736b5a48175f199bad966ae2e92805a9cd1bbcf0358a8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 00:39:54.775370 containerd[1464]: time="2025-05-08T00:39:54.774640447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:54.775370 containerd[1464]: time="2025-05-08T00:39:54.775339258Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:54.775370 containerd[1464]: time="2025-05-08T00:39:54.775354387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:54.775894 containerd[1464]: time="2025-05-08T00:39:54.775446211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:54.796800 systemd[1]: Started cri-containerd-246d0d4eb6da8460dcec0ee7491a632f596fc1a1ff4053bb519d4554d172d170.scope - libcontainer container 246d0d4eb6da8460dcec0ee7491a632f596fc1a1ff4053bb519d4554d172d170. May 8 00:39:54.839097 containerd[1464]: time="2025-05-08T00:39:54.839050594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-kn7qb,Uid:5f8dcebb-3026-4e4c-85a5-aa45ed7196b7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"246d0d4eb6da8460dcec0ee7491a632f596fc1a1ff4053bb519d4554d172d170\"" May 8 00:39:54.840908 containerd[1464]: time="2025-05-08T00:39:54.840866258Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 8 00:39:55.158748 containerd[1464]: time="2025-05-08T00:39:55.158629740Z" level=info msg="CreateContainer within sandbox \"6f87378a2c94633bc2d736b5a48175f199bad966ae2e92805a9cd1bbcf0358a8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e7723f36589a8ecc33e1350ae65aa4cc531e0072645a7c59f57336d00c7c3910\"" May 8 00:39:55.159262 containerd[1464]: time="2025-05-08T00:39:55.159221969Z" level=info msg="StartContainer for \"e7723f36589a8ecc33e1350ae65aa4cc531e0072645a7c59f57336d00c7c3910\"" May 8 00:39:55.187801 systemd[1]: Started cri-containerd-e7723f36589a8ecc33e1350ae65aa4cc531e0072645a7c59f57336d00c7c3910.scope - libcontainer container e7723f36589a8ecc33e1350ae65aa4cc531e0072645a7c59f57336d00c7c3910. May 8 00:39:55.222033 containerd[1464]: time="2025-05-08T00:39:55.221818799Z" level=info msg="StartContainer for \"e7723f36589a8ecc33e1350ae65aa4cc531e0072645a7c59f57336d00c7c3910\" returns successfully" May 8 00:39:55.343607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount612820781.mount: Deactivated successfully. 
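The kube-proxy pod above walks through the standard CRI sequence: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, and StartContainer runs the result. The same three calls can be replayed by hand with crictl; the sketch below assumes crictl is installed and configured for this node's containerd socket, and pod.json/container.json are hypothetical CRI spec files, not taken from the log.

    # Replaying RunPodSandbox -> CreateContainer -> StartContainer with crictl.
    # Assumes crictl is present and pointed at the containerd socket.
    import subprocess

    def crictl(*args: str) -> str:
        return subprocess.run(("crictl", *args), check=True,
                              capture_output=True, text=True).stdout.strip()

    sandbox_id = crictl("runp", "pod.json")                  # RunPodSandbox
    container_id = crictl("create", sandbox_id,
                          "container.json", "pod.json")      # CreateContainer
    crictl("start", container_id)                            # StartContainer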
May 8 00:39:55.967393 kubelet[2571]: E0508 00:39:55.967338 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:55.976634 kubelet[2571]: I0508 00:39:55.976531 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6k6fp" podStartSLOduration=3.976482835 podStartE2EDuration="3.976482835s" podCreationTimestamp="2025-05-08 00:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:39:55.975991857 +0000 UTC m=+19.140254743" watchObservedRunningTime="2025-05-08 00:39:55.976482835 +0000 UTC m=+19.140745711" May 8 00:39:56.095894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1611769601.mount: Deactivated successfully. May 8 00:39:56.969112 kubelet[2571]: E0508 00:39:56.969072 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:58.145119 containerd[1464]: time="2025-05-08T00:39:58.145048210Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:58.164745 containerd[1464]: time="2025-05-08T00:39:58.164683950Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 8 00:39:58.186552 containerd[1464]: time="2025-05-08T00:39:58.186506898Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:58.191487 containerd[1464]: time="2025-05-08T00:39:58.191454376Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:58.193305 containerd[1464]: time="2025-05-08T00:39:58.193253842Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 3.352341337s" May 8 00:39:58.193369 containerd[1464]: time="2025-05-08T00:39:58.193298958Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 8 00:39:58.196402 containerd[1464]: time="2025-05-08T00:39:58.196362270Z" level=info msg="CreateContainer within sandbox \"246d0d4eb6da8460dcec0ee7491a632f596fc1a1ff4053bb519d4554d172d170\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 8 00:39:58.212037 containerd[1464]: time="2025-05-08T00:39:58.211993942Z" level=info msg="CreateContainer within sandbox \"246d0d4eb6da8460dcec0ee7491a632f596fc1a1ff4053bb519d4554d172d170\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d1ebcfc68303273236e8a389c332ad0cc24599a5267ef202c1feb360c10dd22f\"" May 8 00:39:58.212472 containerd[1464]: time="2025-05-08T00:39:58.212442378Z" level=info msg="StartContainer for \"d1ebcfc68303273236e8a389c332ad0cc24599a5267ef202c1feb360c10dd22f\"" May 8 00:39:58.239871 
systemd[1]: Started cri-containerd-d1ebcfc68303273236e8a389c332ad0cc24599a5267ef202c1feb360c10dd22f.scope - libcontainer container d1ebcfc68303273236e8a389c332ad0cc24599a5267ef202c1feb360c10dd22f. May 8 00:39:58.269451 containerd[1464]: time="2025-05-08T00:39:58.269400179Z" level=info msg="StartContainer for \"d1ebcfc68303273236e8a389c332ad0cc24599a5267ef202c1feb360c10dd22f\" returns successfully" May 8 00:39:58.983268 kubelet[2571]: I0508 00:39:58.983181 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-kn7qb" podStartSLOduration=3.6285164290000003 podStartE2EDuration="6.983161186s" podCreationTimestamp="2025-05-08 00:39:52 +0000 UTC" firstStartedPulling="2025-05-08 00:39:54.840429883 +0000 UTC m=+18.004692759" lastFinishedPulling="2025-05-08 00:39:58.19507464 +0000 UTC m=+21.359337516" observedRunningTime="2025-05-08 00:39:58.98287206 +0000 UTC m=+22.147134946" watchObservedRunningTime="2025-05-08 00:39:58.983161186 +0000 UTC m=+22.147424062" May 8 00:40:01.261272 kubelet[2571]: I0508 00:40:01.261209 2571 topology_manager.go:215] "Topology Admit Handler" podUID="4f5677bd-881d-4df1-822e-155dcab80722" podNamespace="calico-system" podName="calico-typha-6f5cf95bdf-fpwgm" May 8 00:40:01.280205 systemd[1]: Created slice kubepods-besteffort-pod4f5677bd_881d_4df1_822e_155dcab80722.slice - libcontainer container kubepods-besteffort-pod4f5677bd_881d_4df1_822e_155dcab80722.slice. May 8 00:40:01.354637 kubelet[2571]: I0508 00:40:01.354594 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4f5677bd-881d-4df1-822e-155dcab80722-typha-certs\") pod \"calico-typha-6f5cf95bdf-fpwgm\" (UID: \"4f5677bd-881d-4df1-822e-155dcab80722\") " pod="calico-system/calico-typha-6f5cf95bdf-fpwgm" May 8 00:40:01.354637 kubelet[2571]: I0508 00:40:01.354632 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f5677bd-881d-4df1-822e-155dcab80722-tigera-ca-bundle\") pod \"calico-typha-6f5cf95bdf-fpwgm\" (UID: \"4f5677bd-881d-4df1-822e-155dcab80722\") " pod="calico-system/calico-typha-6f5cf95bdf-fpwgm" May 8 00:40:01.354637 kubelet[2571]: I0508 00:40:01.354651 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99hz4\" (UniqueName: \"kubernetes.io/projected/4f5677bd-881d-4df1-822e-155dcab80722-kube-api-access-99hz4\") pod \"calico-typha-6f5cf95bdf-fpwgm\" (UID: \"4f5677bd-881d-4df1-822e-155dcab80722\") " pod="calico-system/calico-typha-6f5cf95bdf-fpwgm" May 8 00:40:01.566223 kubelet[2571]: I0508 00:40:01.565176 2571 topology_manager.go:215] "Topology Admit Handler" podUID="7a677f72-c97f-4200-b025-a92d332a0441" podNamespace="calico-system" podName="calico-node-6ln6s" May 8 00:40:01.576185 systemd[1]: Created slice kubepods-besteffort-pod7a677f72_c97f_4200_b025_a92d332a0441.slice - libcontainer container kubepods-besteffort-pod7a677f72_c97f_4200_b025_a92d332a0441.slice. 
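The pod_startup_latency_tracker entry above for tigera-operator encodes a simple relationship: podStartSLOduration is the end-to-end startup time minus the image-pull window. The quoted timestamps reproduce the logged numbers exactly (timestamps truncated to microseconds for the sketch):

    # Reproduce the tigera-operator numbers from the entry above:
    # SLO duration = E2E duration - (lastFinishedPulling - firstStartedPulling)
    from datetime import datetime

    fmt = "%Y-%m-%d %H:%M:%S.%f"
    created    = datetime.strptime("2025-05-08 00:39:52.000000", fmt)
    first_pull = datetime.strptime("2025-05-08 00:39:54.840429", fmt)
    last_pull  = datetime.strptime("2025-05-08 00:39:58.195074", fmt)
    running    = datetime.strptime("2025-05-08 00:39:58.983161", fmt)

    e2e = (running - created).total_seconds()
    slo = e2e - (last_pull - first_pull).total_seconds()
    print(f"podStartE2EDuration ~ {e2e:.6f}s")  # ~6.983161s, as logged
    print(f"podStartSLOduration ~ {slo:.6f}s")  # ~3.628516s, as logged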
May 8 00:40:01.584398 kubelet[2571]: E0508 00:40:01.584348 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:01.584994 containerd[1464]: time="2025-05-08T00:40:01.584953469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f5cf95bdf-fpwgm,Uid:4f5677bd-881d-4df1-822e-155dcab80722,Namespace:calico-system,Attempt:0,}" May 8 00:40:01.626246 containerd[1464]: time="2025-05-08T00:40:01.625719076Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:01.626246 containerd[1464]: time="2025-05-08T00:40:01.625860432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:01.626246 containerd[1464]: time="2025-05-08T00:40:01.625882033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:01.626246 containerd[1464]: time="2025-05-08T00:40:01.626003863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:01.656120 kubelet[2571]: I0508 00:40:01.655922 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a677f72-c97f-4200-b025-a92d332a0441-lib-modules\") pod \"calico-node-6ln6s\" (UID: \"7a677f72-c97f-4200-b025-a92d332a0441\") " pod="calico-system/calico-node-6ln6s" May 8 00:40:01.656120 kubelet[2571]: I0508 00:40:01.655958 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a677f72-c97f-4200-b025-a92d332a0441-tigera-ca-bundle\") pod \"calico-node-6ln6s\" (UID: \"7a677f72-c97f-4200-b025-a92d332a0441\") " pod="calico-system/calico-node-6ln6s" May 8 00:40:01.656120 kubelet[2571]: I0508 00:40:01.655982 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a677f72-c97f-4200-b025-a92d332a0441-xtables-lock\") pod \"calico-node-6ln6s\" (UID: \"7a677f72-c97f-4200-b025-a92d332a0441\") " pod="calico-system/calico-node-6ln6s" May 8 00:40:01.656120 kubelet[2571]: I0508 00:40:01.655999 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7a677f72-c97f-4200-b025-a92d332a0441-cni-bin-dir\") pod \"calico-node-6ln6s\" (UID: \"7a677f72-c97f-4200-b025-a92d332a0441\") " pod="calico-system/calico-node-6ln6s" May 8 00:40:01.656120 kubelet[2571]: I0508 00:40:01.656014 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7a677f72-c97f-4200-b025-a92d332a0441-cni-log-dir\") pod \"calico-node-6ln6s\" (UID: \"7a677f72-c97f-4200-b025-a92d332a0441\") " pod="calico-system/calico-node-6ln6s" May 8 00:40:01.656414 kubelet[2571]: I0508 00:40:01.656029 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7a677f72-c97f-4200-b025-a92d332a0441-var-run-calico\") pod \"calico-node-6ln6s\" (UID: 
\"7a677f72-c97f-4200-b025-a92d332a0441\") " pod="calico-system/calico-node-6ln6s" May 8 00:40:01.656414 kubelet[2571]: I0508 00:40:01.656047 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7a677f72-c97f-4200-b025-a92d332a0441-cni-net-dir\") pod \"calico-node-6ln6s\" (UID: \"7a677f72-c97f-4200-b025-a92d332a0441\") " pod="calico-system/calico-node-6ln6s" May 8 00:40:01.656414 kubelet[2571]: I0508 00:40:01.656063 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7a677f72-c97f-4200-b025-a92d332a0441-var-lib-calico\") pod \"calico-node-6ln6s\" (UID: \"7a677f72-c97f-4200-b025-a92d332a0441\") " pod="calico-system/calico-node-6ln6s" May 8 00:40:01.656414 kubelet[2571]: I0508 00:40:01.656080 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf7gq\" (UniqueName: \"kubernetes.io/projected/7a677f72-c97f-4200-b025-a92d332a0441-kube-api-access-bf7gq\") pod \"calico-node-6ln6s\" (UID: \"7a677f72-c97f-4200-b025-a92d332a0441\") " pod="calico-system/calico-node-6ln6s" May 8 00:40:01.656414 kubelet[2571]: I0508 00:40:01.656101 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7a677f72-c97f-4200-b025-a92d332a0441-node-certs\") pod \"calico-node-6ln6s\" (UID: \"7a677f72-c97f-4200-b025-a92d332a0441\") " pod="calico-system/calico-node-6ln6s" May 8 00:40:01.656543 kubelet[2571]: I0508 00:40:01.656116 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7a677f72-c97f-4200-b025-a92d332a0441-flexvol-driver-host\") pod \"calico-node-6ln6s\" (UID: \"7a677f72-c97f-4200-b025-a92d332a0441\") " pod="calico-system/calico-node-6ln6s" May 8 00:40:01.656543 kubelet[2571]: I0508 00:40:01.656131 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7a677f72-c97f-4200-b025-a92d332a0441-policysync\") pod \"calico-node-6ln6s\" (UID: \"7a677f72-c97f-4200-b025-a92d332a0441\") " pod="calico-system/calico-node-6ln6s" May 8 00:40:01.659955 systemd[1]: Started cri-containerd-718164f515abaf7ec3caf055c0a6429d687b3ba799759211d9a3ae7e663e72b2.scope - libcontainer container 718164f515abaf7ec3caf055c0a6429d687b3ba799759211d9a3ae7e663e72b2. 
May 8 00:40:01.680538 kubelet[2571]: I0508 00:40:01.680099 2571 topology_manager.go:215] "Topology Admit Handler" podUID="94d037e0-3318-4a96-bf33-490f8e3dd35d" podNamespace="calico-system" podName="csi-node-driver-gzqv5" May 8 00:40:01.680538 kubelet[2571]: E0508 00:40:01.680467 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzqv5" podUID="94d037e0-3318-4a96-bf33-490f8e3dd35d" May 8 00:40:01.723219 containerd[1464]: time="2025-05-08T00:40:01.723174942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f5cf95bdf-fpwgm,Uid:4f5677bd-881d-4df1-822e-155dcab80722,Namespace:calico-system,Attempt:0,} returns sandbox id \"718164f515abaf7ec3caf055c0a6429d687b3ba799759211d9a3ae7e663e72b2\"" May 8 00:40:01.724066 kubelet[2571]: E0508 00:40:01.724028 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:01.725111 containerd[1464]: time="2025-05-08T00:40:01.725090724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 8 00:40:01.756929 kubelet[2571]: I0508 00:40:01.756868 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/94d037e0-3318-4a96-bf33-490f8e3dd35d-varrun\") pod \"csi-node-driver-gzqv5\" (UID: \"94d037e0-3318-4a96-bf33-490f8e3dd35d\") " pod="calico-system/csi-node-driver-gzqv5" May 8 00:40:01.756929 kubelet[2571]: I0508 00:40:01.756922 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/94d037e0-3318-4a96-bf33-490f8e3dd35d-registration-dir\") pod \"csi-node-driver-gzqv5\" (UID: \"94d037e0-3318-4a96-bf33-490f8e3dd35d\") " pod="calico-system/csi-node-driver-gzqv5" May 8 00:40:01.756929 kubelet[2571]: I0508 00:40:01.756959 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l82zj\" (UniqueName: \"kubernetes.io/projected/94d037e0-3318-4a96-bf33-490f8e3dd35d-kube-api-access-l82zj\") pod \"csi-node-driver-gzqv5\" (UID: \"94d037e0-3318-4a96-bf33-490f8e3dd35d\") " pod="calico-system/csi-node-driver-gzqv5" May 8 00:40:01.757262 kubelet[2571]: I0508 00:40:01.756981 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/94d037e0-3318-4a96-bf33-490f8e3dd35d-socket-dir\") pod \"csi-node-driver-gzqv5\" (UID: \"94d037e0-3318-4a96-bf33-490f8e3dd35d\") " pod="calico-system/csi-node-driver-gzqv5" May 8 00:40:01.757262 kubelet[2571]: I0508 00:40:01.757010 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94d037e0-3318-4a96-bf33-490f8e3dd35d-kubelet-dir\") pod \"csi-node-driver-gzqv5\" (UID: \"94d037e0-3318-4a96-bf33-490f8e3dd35d\") " pod="calico-system/csi-node-driver-gzqv5" May 8 00:40:01.780004 kubelet[2571]: E0508 00:40:01.779950 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:01.780004 kubelet[2571]: W0508 00:40:01.779985 2571 driver-call.go:149] FlexVolume: 
driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:01.780182 kubelet[2571]: E0508 00:40:01.780033 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
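This three-line failure is kubelet's dynamic FlexVolume prober walking /opt/libexec/kubernetes/kubelet-plugins/volume/exec/: the nodeagent~uds directory exists, but the uds binary has not yet been installed there (that directory is exactly the flexvol-driver-host host-path mounted into calico-node above, and the pod2daemon-flexvol image that populates it is pulled later in this log). Until the binary lands, every probe exits with "executable file not found", kubelet gets empty output, and parsing it as JSON fails; the triple repeats on each probe round, identical apart from timestamps. A sketch of the "init" handshake driver-call.go attempts, using the path from the log (illustrative only):

    # Sketch of the FlexVolume "init" probe: run the driver with "init" and
    # parse a JSON status from stdout. A missing binary yields empty output,
    # hence an "unexpected end of JSON input"-style error in kubelet.
    import json, subprocess

    DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

    def probe(driver: str) -> dict:
        try:
            out = subprocess.run([driver, "init"], capture_output=True,
                                 text=True, timeout=10).stdout
        except FileNotFoundError:
            out = ""  # what kubelet sees here: no executable, empty output
        return json.loads(out)  # raises json.JSONDecodeError on empty output

    # probe(DRIVER)  # raises while the binary is missing, as in the log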
May 8 00:40:01.879323 kubelet[2571]: E0508 00:40:01.879294 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:01.879886 containerd[1464]: time="2025-05-08T00:40:01.879845176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6ln6s,Uid:7a677f72-c97f-4200-b025-a92d332a0441,Namespace:calico-system,Attempt:0,}" May 8 00:40:01.948473 containerd[1464]: time="2025-05-08T00:40:01.948042300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:01.948473 containerd[1464]: time="2025-05-08T00:40:01.948100349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:01.948473 containerd[1464]: time="2025-05-08T00:40:01.948118524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:01.948473 containerd[1464]: time="2025-05-08T00:40:01.948268726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:01.969972 systemd[1]: Started cri-containerd-fa45d0fd9e42f3f52785d181a9410e0335fb737895289eee15bb26379cdbec8c.scope - libcontainer container fa45d0fd9e42f3f52785d181a9410e0335fb737895289eee15bb26379cdbec8c.
May 8 00:40:02.002612 containerd[1464]: time="2025-05-08T00:40:02.002488587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6ln6s,Uid:7a677f72-c97f-4200-b025-a92d332a0441,Namespace:calico-system,Attempt:0,} returns sandbox id \"fa45d0fd9e42f3f52785d181a9410e0335fb737895289eee15bb26379cdbec8c\"" May 8 00:40:02.003497 kubelet[2571]: E0508 00:40:02.003469 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:02.467848 systemd[1]: run-containerd-runc-k8s.io-718164f515abaf7ec3caf055c0a6429d687b3ba799759211d9a3ae7e663e72b2-runc.sb0U40.mount: Deactivated successfully. May 8 00:40:02.927782 kubelet[2571]: E0508 00:40:02.927618 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzqv5" podUID="94d037e0-3318-4a96-bf33-490f8e3dd35d" May 8 00:40:03.400627 systemd[1]: Started sshd@7-10.0.0.74:22-10.0.0.1:53490.service - OpenSSH per-connection server daemon (10.0.0.1:53490). May 8 00:40:03.442992 sshd[3113]: Accepted publickey for core from 10.0.0.1 port 53490 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:40:03.444911 sshd[3113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:03.450172 systemd-logind[1453]: New session 8 of user core. May 8 00:40:03.463826 systemd[1]: Started session-8.scope - Session 8 of User core. May 8 00:40:03.583734 sshd[3113]: pam_unix(sshd:session): session closed for user core May 8 00:40:03.588347 systemd[1]: sshd@7-10.0.0.74:22-10.0.0.1:53490.service: Deactivated successfully. May 8 00:40:03.590216 systemd[1]: session-8.scope: Deactivated successfully. May 8 00:40:03.590899 systemd-logind[1453]: Session 8 logged out. Waiting for processes to exit. May 8 00:40:03.591901 systemd-logind[1453]: Removed session 8. 
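The recurring pod_workers error for csi-node-driver-gzqv5 above is a consequence of the runtime reporting NetworkReady=false: no CNI network config has been written yet (containerd said earlier it would "wait for other system components to drop the config"), so any pod needing pod networking is held back until calico-node installs its config. A quick check one could run on such a node; /etc/cni/net.d is containerd's conventional default confdir, an assumption since the log does not name the path:

    # Does the CNI conf directory contain a network config yet?
    # /etc/cni/net.d is an assumed conventional default, not from the log.
    from pathlib import Path

    d = Path("/etc/cni/net.d")
    confs = sorted(p.name for p in d.glob("*.conf*")) if d.is_dir() else []
    print("CNI configs:", confs or "none yet -> NetworkReady=false")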
May 8 00:40:04.929024 kubelet[2571]: E0508 00:40:04.928880 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzqv5" podUID="94d037e0-3318-4a96-bf33-490f8e3dd35d" May 8 00:40:05.179170 containerd[1464]: time="2025-05-08T00:40:05.178993305Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:05.183979 containerd[1464]: time="2025-05-08T00:40:05.183896628Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 8 00:40:05.186435 containerd[1464]: time="2025-05-08T00:40:05.186399352Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:05.193241 containerd[1464]: time="2025-05-08T00:40:05.193194107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:05.193920 containerd[1464]: time="2025-05-08T00:40:05.193886341Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 3.468768456s" May 8 00:40:05.193920 containerd[1464]: time="2025-05-08T00:40:05.193921627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 8 00:40:05.195189 containerd[1464]: time="2025-05-08T00:40:05.195147335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 8 00:40:05.204807 containerd[1464]: time="2025-05-08T00:40:05.204758845Z" level=info msg="CreateContainer within sandbox \"718164f515abaf7ec3caf055c0a6429d687b3ba799759211d9a3ae7e663e72b2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 8 00:40:05.271917 containerd[1464]: time="2025-05-08T00:40:05.271858453Z" level=info msg="CreateContainer within sandbox \"718164f515abaf7ec3caf055c0a6429d687b3ba799759211d9a3ae7e663e72b2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b8ee72a8cb296c45254bfdfbce95bf21144efeb5b06554707984bbf29a3e9c8e\"" May 8 00:40:05.272605 containerd[1464]: time="2025-05-08T00:40:05.272574682Z" level=info msg="StartContainer for \"b8ee72a8cb296c45254bfdfbce95bf21144efeb5b06554707984bbf29a3e9c8e\"" May 8 00:40:05.310815 systemd[1]: Started cri-containerd-b8ee72a8cb296c45254bfdfbce95bf21144efeb5b06554707984bbf29a3e9c8e.scope - libcontainer container b8ee72a8cb296c45254bfdfbce95bf21144efeb5b06554707984bbf29a3e9c8e. 
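The typha pull entries above give both the byte count and the wall-clock time, so the effective transfer rate falls out directly; a one-liner reproducing it from the logged numbers (a rough figure, since "bytes read" reflects what was fetched over the registry connection):

    # Effective pull rate for ghcr.io/flatcar/calico/typha:v3.29.3,
    # from the two entries above.
    bytes_read = 30_426_870   # "active requests=0, bytes read=30426870"
    seconds = 3.468768456     # "in 3.468768456s"
    print(f"~{bytes_read / seconds / 1e6:.1f} MB/s")  # ~8.8 MB/s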
May 8 00:40:05.421532 containerd[1464]: time="2025-05-08T00:40:05.421462698Z" level=info msg="StartContainer for \"b8ee72a8cb296c45254bfdfbce95bf21144efeb5b06554707984bbf29a3e9c8e\" returns successfully" May 8 00:40:05.989341 kubelet[2571]: E0508 00:40:05.989289 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:06.000295 kubelet[2571]: I0508 00:40:05.999978 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6f5cf95bdf-fpwgm" podStartSLOduration=1.530167483 podStartE2EDuration="4.999954568s" podCreationTimestamp="2025-05-08 00:40:01 +0000 UTC" firstStartedPulling="2025-05-08 00:40:01.724880658 +0000 UTC m=+24.889143534" lastFinishedPulling="2025-05-08 00:40:05.194667743 +0000 UTC m=+28.358930619" observedRunningTime="2025-05-08 00:40:05.999396187 +0000 UTC m=+29.163659083" watchObservedRunningTime="2025-05-08 00:40:05.999954568 +0000 UTC m=+29.164217444" May 8 00:40:06.080526 kubelet[2571]: E0508 00:40:06.080459 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:06.080526 kubelet[2571]: W0508 00:40:06.080494 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:06.080526 kubelet[2571]: E0508 00:40:06.080522 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" May 8 00:40:06.094627 kubelet[2571]: E0508 00:40:06.094604 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:06.094627 kubelet[2571]: W0508 00:40:06.094617 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:06.094743 kubelet[2571]: E0508 00:40:06.094633 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:06.094897 kubelet[2571]: E0508 00:40:06.094873 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:06.094897 kubelet[2571]: W0508 00:40:06.094888 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:06.094973 kubelet[2571]: E0508 00:40:06.094904 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:06.095142 kubelet[2571]: E0508 00:40:06.095120 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:06.095142 kubelet[2571]: W0508 00:40:06.095132 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:06.095222 kubelet[2571]: E0508 00:40:06.095148 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:06.095401 kubelet[2571]: E0508 00:40:06.095378 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:06.095401 kubelet[2571]: W0508 00:40:06.095390 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:06.095482 kubelet[2571]: E0508 00:40:06.095407 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:06.095743 kubelet[2571]: E0508 00:40:06.095720 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:06.095743 kubelet[2571]: W0508 00:40:06.095736 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:06.095820 kubelet[2571]: E0508 00:40:06.095752 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:06.096006 kubelet[2571]: E0508 00:40:06.095987 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:06.096006 kubelet[2571]: W0508 00:40:06.096001 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:06.096072 kubelet[2571]: E0508 00:40:06.096016 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:06.096252 kubelet[2571]: E0508 00:40:06.096233 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:06.096252 kubelet[2571]: W0508 00:40:06.096246 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:06.096314 kubelet[2571]: E0508 00:40:06.096260 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:06.096503 kubelet[2571]: E0508 00:40:06.096484 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:06.096503 kubelet[2571]: W0508 00:40:06.096499 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:06.096566 kubelet[2571]: E0508 00:40:06.096514 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:06.096848 kubelet[2571]: E0508 00:40:06.096814 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:06.096848 kubelet[2571]: W0508 00:40:06.096829 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:06.096848 kubelet[2571]: E0508 00:40:06.096841 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:06.097089 kubelet[2571]: E0508 00:40:06.097064 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:06.097089 kubelet[2571]: W0508 00:40:06.097079 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:06.097138 kubelet[2571]: E0508 00:40:06.097090 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:06.696764 containerd[1464]: time="2025-05-08T00:40:06.696700724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:06.697520 containerd[1464]: time="2025-05-08T00:40:06.697475753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 8 00:40:06.698601 containerd[1464]: time="2025-05-08T00:40:06.698566436Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:06.700614 containerd[1464]: time="2025-05-08T00:40:06.700578725Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:06.701413 containerd[1464]: time="2025-05-08T00:40:06.701370426Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.506175471s" May 8 00:40:06.701450 containerd[1464]: time="2025-05-08T00:40:06.701414388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 8 00:40:06.703809 containerd[1464]: time="2025-05-08T00:40:06.703776396Z" level=info msg="CreateContainer within sandbox \"fa45d0fd9e42f3f52785d181a9410e0335fb737895289eee15bb26379cdbec8c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 8 00:40:06.721264 containerd[1464]: time="2025-05-08T00:40:06.721206150Z" level=info msg="CreateContainer within sandbox \"fa45d0fd9e42f3f52785d181a9410e0335fb737895289eee15bb26379cdbec8c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9e5318878c3e36cce71fe1900e7781dec47f1478628c8508db85b41bbde227f8\"" May 8 00:40:06.721809 containerd[1464]: time="2025-05-08T00:40:06.721769901Z" level=info msg="StartContainer for \"9e5318878c3e36cce71fe1900e7781dec47f1478628c8508db85b41bbde227f8\"" May 8 00:40:06.759819 systemd[1]: Started cri-containerd-9e5318878c3e36cce71fe1900e7781dec47f1478628c8508db85b41bbde227f8.scope - libcontainer container 9e5318878c3e36cce71fe1900e7781dec47f1478628c8508db85b41bbde227f8. May 8 00:40:06.792460 containerd[1464]: time="2025-05-08T00:40:06.792390116Z" level=info msg="StartContainer for \"9e5318878c3e36cce71fe1900e7781dec47f1478628c8508db85b41bbde227f8\" returns successfully" May 8 00:40:06.805587 systemd[1]: cri-containerd-9e5318878c3e36cce71fe1900e7781dec47f1478628c8508db85b41bbde227f8.scope: Deactivated successfully. 
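The driver-call.go failures above come from kubelet probing the FlexVolume plugin directory before Calico's pod2daemon-flexvol container, pulled and started in the preceding entries, has installed the uds binary. As a rough sketch of the contract involved — the struct and binary behavior here are illustrative assumptions, not Calico's actual implementation — a FlexVolume driver answers each kubelet invocation with a JSON status on stdout, and an empty reply is exactly what yields "unexpected end of JSON input":

```go
// A minimal, hypothetical sketch of the handshake behind the
// driver-call.go errors above: kubelet invokes the driver binary
// ("uds") with "init" and expects a JSON status on stdout. An empty
// reply is what produces "unexpected end of JSON input".
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the shape of a FlexVolume driver reply.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		reply, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(reply)) // e.g. {"status":"Success","capabilities":{"attach":false}}
		return
	}
	// Other calls (mount, unmount, ...) are out of scope for this sketch.
	reply, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(reply))
	os.Exit(1)
}
```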
May 8 00:40:06.928092 kubelet[2571]: E0508 00:40:06.928025 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzqv5" podUID="94d037e0-3318-4a96-bf33-490f8e3dd35d" May 8 00:40:06.991385 kubelet[2571]: I0508 00:40:06.991344 2571 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:40:06.991968 kubelet[2571]: E0508 00:40:06.991758 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:06.992177 kubelet[2571]: E0508 00:40:06.992129 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:07.090909 containerd[1464]: time="2025-05-08T00:40:07.090798653Z" level=info msg="shim disconnected" id=9e5318878c3e36cce71fe1900e7781dec47f1478628c8508db85b41bbde227f8 namespace=k8s.io May 8 00:40:07.090909 containerd[1464]: time="2025-05-08T00:40:07.090893622Z" level=warning msg="cleaning up after shim disconnected" id=9e5318878c3e36cce71fe1900e7781dec47f1478628c8508db85b41bbde227f8 namespace=k8s.io May 8 00:40:07.090909 containerd[1464]: time="2025-05-08T00:40:07.090907548Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:40:07.202764 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e5318878c3e36cce71fe1900e7781dec47f1478628c8508db85b41bbde227f8-rootfs.mount: Deactivated successfully. May 8 00:40:07.994731 kubelet[2571]: E0508 00:40:07.994689 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:07.995407 containerd[1464]: time="2025-05-08T00:40:07.995368870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 8 00:40:08.600817 systemd[1]: Started sshd@8-10.0.0.74:22-10.0.0.1:43960.service - OpenSSH per-connection server daemon (10.0.0.1:43960). May 8 00:40:08.636001 sshd[3307]: Accepted publickey for core from 10.0.0.1 port 43960 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:40:08.637541 sshd[3307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:08.641532 systemd-logind[1453]: New session 9 of user core. May 8 00:40:08.653820 systemd[1]: Started session-9.scope - Session 9 of User core. May 8 00:40:08.760311 sshd[3307]: pam_unix(sshd:session): session closed for user core May 8 00:40:08.764542 systemd[1]: sshd@8-10.0.0.74:22-10.0.0.1:43960.service: Deactivated successfully. May 8 00:40:08.766428 systemd[1]: session-9.scope: Deactivated successfully. May 8 00:40:08.767110 systemd-logind[1453]: Session 9 logged out. Waiting for processes to exit. May 8 00:40:08.767982 systemd-logind[1453]: Removed session 9. 
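The recurring dns.go "Nameserver limits exceeded" warnings reflect a resolv.conf carrying more resolvers than the three that can be applied, so the surplus entries are dropped and only "1.1.1.1 1.0.0.1 8.8.8.8" survives. A minimal sketch of that truncation, assuming a simple resolv.conf parser (the function name and limit constant are illustrative, not kubelet's source):

```go
// Hypothetical illustration of the cap behind kubelet's
// "Nameserver limits exceeded" warning: resolvers beyond the
// three-entry resolv.conf limit are omitted from the applied line.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // conventional resolv.conf limit

func applyNameserverLimit(resolvConf string) []string {
	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) == 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		servers = servers[:maxNameservers] // extras are omitted, as the log reports
	}
	return servers
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	fmt.Println(applyNameserverLimit(conf)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}
```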
May 8 00:40:08.928274 kubelet[2571]: E0508 00:40:08.928139 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzqv5" podUID="94d037e0-3318-4a96-bf33-490f8e3dd35d" May 8 00:40:10.927602 kubelet[2571]: E0508 00:40:10.927543 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzqv5" podUID="94d037e0-3318-4a96-bf33-490f8e3dd35d" May 8 00:40:12.928221 kubelet[2571]: E0508 00:40:12.928156 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzqv5" podUID="94d037e0-3318-4a96-bf33-490f8e3dd35d" May 8 00:40:13.262897 containerd[1464]: time="2025-05-08T00:40:13.262844580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:13.263609 containerd[1464]: time="2025-05-08T00:40:13.263555857Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 8 00:40:13.264716 containerd[1464]: time="2025-05-08T00:40:13.264685231Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:13.266693 containerd[1464]: time="2025-05-08T00:40:13.266652179Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:13.267367 containerd[1464]: time="2025-05-08T00:40:13.267329161Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 5.271916549s" May 8 00:40:13.267367 containerd[1464]: time="2025-05-08T00:40:13.267358647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 8 00:40:13.283852 containerd[1464]: time="2025-05-08T00:40:13.283795741Z" level=info msg="CreateContainer within sandbox \"fa45d0fd9e42f3f52785d181a9410e0335fb737895289eee15bb26379cdbec8c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 8 00:40:13.298044 containerd[1464]: time="2025-05-08T00:40:13.297988425Z" level=info msg="CreateContainer within sandbox \"fa45d0fd9e42f3f52785d181a9410e0335fb737895289eee15bb26379cdbec8c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b4bc381e7102ff22e97d624fa576b476c2dd4ce1f0e81f17275d5339edcad7a1\"" May 8 00:40:13.301601 containerd[1464]: time="2025-05-08T00:40:13.301524624Z" level=info msg="StartContainer for \"b4bc381e7102ff22e97d624fa576b476c2dd4ce1f0e81f17275d5339edcad7a1\"" May 8 
00:40:13.334844 systemd[1]: Started cri-containerd-b4bc381e7102ff22e97d624fa576b476c2dd4ce1f0e81f17275d5339edcad7a1.scope - libcontainer container b4bc381e7102ff22e97d624fa576b476c2dd4ce1f0e81f17275d5339edcad7a1. May 8 00:40:13.365465 containerd[1464]: time="2025-05-08T00:40:13.365382646Z" level=info msg="StartContainer for \"b4bc381e7102ff22e97d624fa576b476c2dd4ce1f0e81f17275d5339edcad7a1\" returns successfully" May 8 00:40:13.776358 systemd[1]: Started sshd@9-10.0.0.74:22-10.0.0.1:43976.service - OpenSSH per-connection server daemon (10.0.0.1:43976). May 8 00:40:14.089903 kubelet[2571]: E0508 00:40:14.089619 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:14.113332 sshd[3365]: Accepted publickey for core from 10.0.0.1 port 43976 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:40:14.115504 sshd[3365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:14.123046 systemd-logind[1453]: New session 10 of user core. May 8 00:40:14.128927 systemd[1]: Started session-10.scope - Session 10 of User core. May 8 00:40:14.281283 sshd[3365]: pam_unix(sshd:session): session closed for user core May 8 00:40:14.286819 systemd[1]: sshd@9-10.0.0.74:22-10.0.0.1:43976.service: Deactivated successfully. May 8 00:40:14.289787 systemd[1]: session-10.scope: Deactivated successfully. May 8 00:40:14.290580 systemd-logind[1453]: Session 10 logged out. Waiting for processes to exit. May 8 00:40:14.291636 systemd-logind[1453]: Removed session 10. May 8 00:40:14.926871 systemd[1]: cri-containerd-b4bc381e7102ff22e97d624fa576b476c2dd4ce1f0e81f17275d5339edcad7a1.scope: Deactivated successfully. May 8 00:40:14.928057 kubelet[2571]: E0508 00:40:14.927743 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzqv5" podUID="94d037e0-3318-4a96-bf33-490f8e3dd35d" May 8 00:40:14.949500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4bc381e7102ff22e97d624fa576b476c2dd4ce1f0e81f17275d5339edcad7a1-rootfs.mount: Deactivated successfully. 
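The csi-node-driver pod keeps failing to sync in these entries because the runtime still reports NetworkReady=false: no CNI configuration has been installed yet, which is the job of the install-cni container started just above. A hypothetical sketch of that readiness test, assuming the conventional /etc/cni/net.d config directory (the glob patterns and function are illustrative, not the CRI's actual check):

```go
// Hypothetical sketch of the readiness check behind the repeated
// "NetworkReady=false ... cni plugin not initialized" entries: the
// network is considered ready only once a CNI config file exists in
// the config directory that Calico's install-cni container populates.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func networkReady(confDir string) bool {
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, _ := filepath.Glob(filepath.Join(confDir, pat))
		if len(matches) > 0 {
			return true
		}
	}
	return false
}

func main() {
	if !networkReady("/etc/cni/net.d") {
		fmt.Fprintln(os.Stderr, "NetworkReady=false: cni plugin not initialized")
		os.Exit(1)
	}
	fmt.Println("NetworkReady=true")
}
```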
May 8 00:40:15.008246 kubelet[2571]: I0508 00:40:15.008203 2571 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 8 00:40:15.089459 kubelet[2571]: E0508 00:40:15.089411 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:15.102802 containerd[1464]: time="2025-05-08T00:40:15.102711683Z" level=info msg="shim disconnected" id=b4bc381e7102ff22e97d624fa576b476c2dd4ce1f0e81f17275d5339edcad7a1 namespace=k8s.io May 8 00:40:15.102802 containerd[1464]: time="2025-05-08T00:40:15.102796281Z" level=warning msg="cleaning up after shim disconnected" id=b4bc381e7102ff22e97d624fa576b476c2dd4ce1f0e81f17275d5339edcad7a1 namespace=k8s.io May 8 00:40:15.102802 containerd[1464]: time="2025-05-08T00:40:15.102808825Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:40:15.108431 kubelet[2571]: I0508 00:40:15.107904 2571 topology_manager.go:215] "Topology Admit Handler" podUID="5e801bca-4ca7-4f8e-baa8-230995c21235" podNamespace="kube-system" podName="coredns-7db6d8ff4d-pq6w8" May 8 00:40:15.108431 kubelet[2571]: I0508 00:40:15.108397 2571 topology_manager.go:215] "Topology Admit Handler" podUID="8512146f-a4f2-4ad6-9a28-559c237b8730" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9584k" May 8 00:40:15.112474 kubelet[2571]: I0508 00:40:15.111990 2571 topology_manager.go:215] "Topology Admit Handler" podUID="9d13fad7-9b52-4242-8c83-7d9a65d72e32" podNamespace="calico-apiserver" podName="calico-apiserver-7f5d787db9-gdzhq" May 8 00:40:15.112474 kubelet[2571]: I0508 00:40:15.112139 2571 topology_manager.go:215] "Topology Admit Handler" podUID="6f09368a-85bc-4ff8-a22a-5897ae61119a" podNamespace="calico-system" podName="calico-kube-controllers-7bb4ddbd59-kfxqq" May 8 00:40:15.112474 kubelet[2571]: I0508 00:40:15.112254 2571 topology_manager.go:215] "Topology Admit Handler" podUID="fbce2436-d8e1-4ed5-8f00-79e6a1ac4517" podNamespace="calico-apiserver" podName="calico-apiserver-7f5d787db9-8lqnx" May 8 00:40:15.126112 systemd[1]: Created slice kubepods-burstable-pod8512146f_a4f2_4ad6_9a28_559c237b8730.slice - libcontainer container kubepods-burstable-pod8512146f_a4f2_4ad6_9a28_559c237b8730.slice. May 8 00:40:15.134208 systemd[1]: Created slice kubepods-burstable-pod5e801bca_4ca7_4f8e_baa8_230995c21235.slice - libcontainer container kubepods-burstable-pod5e801bca_4ca7_4f8e_baa8_230995c21235.slice. May 8 00:40:15.144268 systemd[1]: Created slice kubepods-besteffort-podfbce2436_d8e1_4ed5_8f00_79e6a1ac4517.slice - libcontainer container kubepods-besteffort-podfbce2436_d8e1_4ed5_8f00_79e6a1ac4517.slice. May 8 00:40:15.152428 systemd[1]: Created slice kubepods-besteffort-pod9d13fad7_9b52_4242_8c83_7d9a65d72e32.slice - libcontainer container kubepods-besteffort-pod9d13fad7_9b52_4242_8c83_7d9a65d72e32.slice. May 8 00:40:15.164884 systemd[1]: Created slice kubepods-besteffort-pod6f09368a_85bc_4ff8_a22a_5897ae61119a.slice - libcontainer container kubepods-besteffort-pod6f09368a_85bc_4ff8_a22a_5897ae61119a.slice. 
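The kubepods-*.slice units created above follow a visible naming pattern: with the systemd cgroup driver, each pod's QoS class and UID (dashes mapped to underscores) are embedded in the slice name. A small illustrative sketch of that mapping (the helper name is an assumption):

```go
// Sketch of the slice naming visible in the systemd entries above:
// QoS class plus pod UID, with dashes replaced by underscores,
// becomes a kubepods-<qos>-pod<uid>.slice unit.
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("burstable", "8512146f-a4f2-4ad6-9a28-559c237b8730"))
	// kubepods-burstable-pod8512146f_a4f2_4ad6_9a28_559c237b8730.slice
}
```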
May 8 00:40:15.290534 kubelet[2571]: I0508 00:40:15.290463 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9d13fad7-9b52-4242-8c83-7d9a65d72e32-calico-apiserver-certs\") pod \"calico-apiserver-7f5d787db9-gdzhq\" (UID: \"9d13fad7-9b52-4242-8c83-7d9a65d72e32\") " pod="calico-apiserver/calico-apiserver-7f5d787db9-gdzhq" May 8 00:40:15.290534 kubelet[2571]: I0508 00:40:15.290530 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2n5n\" (UniqueName: \"kubernetes.io/projected/8512146f-a4f2-4ad6-9a28-559c237b8730-kube-api-access-m2n5n\") pod \"coredns-7db6d8ff4d-9584k\" (UID: \"8512146f-a4f2-4ad6-9a28-559c237b8730\") " pod="kube-system/coredns-7db6d8ff4d-9584k" May 8 00:40:15.290534 kubelet[2571]: I0508 00:40:15.290557 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gztl\" (UniqueName: \"kubernetes.io/projected/fbce2436-d8e1-4ed5-8f00-79e6a1ac4517-kube-api-access-2gztl\") pod \"calico-apiserver-7f5d787db9-8lqnx\" (UID: \"fbce2436-d8e1-4ed5-8f00-79e6a1ac4517\") " pod="calico-apiserver/calico-apiserver-7f5d787db9-8lqnx" May 8 00:40:15.290800 kubelet[2571]: I0508 00:40:15.290579 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8512146f-a4f2-4ad6-9a28-559c237b8730-config-volume\") pod \"coredns-7db6d8ff4d-9584k\" (UID: \"8512146f-a4f2-4ad6-9a28-559c237b8730\") " pod="kube-system/coredns-7db6d8ff4d-9584k" May 8 00:40:15.290800 kubelet[2571]: I0508 00:40:15.290607 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ksvn\" (UniqueName: \"kubernetes.io/projected/9d13fad7-9b52-4242-8c83-7d9a65d72e32-kube-api-access-2ksvn\") pod \"calico-apiserver-7f5d787db9-gdzhq\" (UID: \"9d13fad7-9b52-4242-8c83-7d9a65d72e32\") " pod="calico-apiserver/calico-apiserver-7f5d787db9-gdzhq" May 8 00:40:15.290800 kubelet[2571]: I0508 00:40:15.290626 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e801bca-4ca7-4f8e-baa8-230995c21235-config-volume\") pod \"coredns-7db6d8ff4d-pq6w8\" (UID: \"5e801bca-4ca7-4f8e-baa8-230995c21235\") " pod="kube-system/coredns-7db6d8ff4d-pq6w8" May 8 00:40:15.290800 kubelet[2571]: I0508 00:40:15.290640 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qj29\" (UniqueName: \"kubernetes.io/projected/5e801bca-4ca7-4f8e-baa8-230995c21235-kube-api-access-6qj29\") pod \"coredns-7db6d8ff4d-pq6w8\" (UID: \"5e801bca-4ca7-4f8e-baa8-230995c21235\") " pod="kube-system/coredns-7db6d8ff4d-pq6w8" May 8 00:40:15.290800 kubelet[2571]: I0508 00:40:15.290775 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f09368a-85bc-4ff8-a22a-5897ae61119a-tigera-ca-bundle\") pod \"calico-kube-controllers-7bb4ddbd59-kfxqq\" (UID: \"6f09368a-85bc-4ff8-a22a-5897ae61119a\") " pod="calico-system/calico-kube-controllers-7bb4ddbd59-kfxqq" May 8 00:40:15.290922 kubelet[2571]: I0508 00:40:15.290817 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-588xl\" (UniqueName: 
\"kubernetes.io/projected/6f09368a-85bc-4ff8-a22a-5897ae61119a-kube-api-access-588xl\") pod \"calico-kube-controllers-7bb4ddbd59-kfxqq\" (UID: \"6f09368a-85bc-4ff8-a22a-5897ae61119a\") " pod="calico-system/calico-kube-controllers-7bb4ddbd59-kfxqq" May 8 00:40:15.290922 kubelet[2571]: I0508 00:40:15.290846 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fbce2436-d8e1-4ed5-8f00-79e6a1ac4517-calico-apiserver-certs\") pod \"calico-apiserver-7f5d787db9-8lqnx\" (UID: \"fbce2436-d8e1-4ed5-8f00-79e6a1ac4517\") " pod="calico-apiserver/calico-apiserver-7f5d787db9-8lqnx" May 8 00:40:15.432442 kubelet[2571]: E0508 00:40:15.432386 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:15.433796 containerd[1464]: time="2025-05-08T00:40:15.433750736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9584k,Uid:8512146f-a4f2-4ad6-9a28-559c237b8730,Namespace:kube-system,Attempt:0,}" May 8 00:40:15.441155 kubelet[2571]: E0508 00:40:15.441056 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:15.441605 containerd[1464]: time="2025-05-08T00:40:15.441552560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pq6w8,Uid:5e801bca-4ca7-4f8e-baa8-230995c21235,Namespace:kube-system,Attempt:0,}" May 8 00:40:15.449612 containerd[1464]: time="2025-05-08T00:40:15.449571413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f5d787db9-8lqnx,Uid:fbce2436-d8e1-4ed5-8f00-79e6a1ac4517,Namespace:calico-apiserver,Attempt:0,}" May 8 00:40:15.458564 containerd[1464]: time="2025-05-08T00:40:15.458493252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f5d787db9-gdzhq,Uid:9d13fad7-9b52-4242-8c83-7d9a65d72e32,Namespace:calico-apiserver,Attempt:0,}" May 8 00:40:15.470070 containerd[1464]: time="2025-05-08T00:40:15.470028343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bb4ddbd59-kfxqq,Uid:6f09368a-85bc-4ff8-a22a-5897ae61119a,Namespace:calico-system,Attempt:0,}" May 8 00:40:15.955270 containerd[1464]: time="2025-05-08T00:40:15.950775019Z" level=error msg="Failed to destroy network for sandbox \"aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:15.955270 containerd[1464]: time="2025-05-08T00:40:15.951901656Z" level=error msg="encountered an error cleaning up failed sandbox \"aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:15.955270 containerd[1464]: time="2025-05-08T00:40:15.952045908Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f5d787db9-8lqnx,Uid:fbce2436-d8e1-4ed5-8f00-79e6a1ac4517,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:15.955270 containerd[1464]: time="2025-05-08T00:40:15.955216938Z" level=error msg="Failed to destroy network for sandbox \"eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:15.955612 kubelet[2571]: E0508 00:40:15.952408 2571 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:15.955612 kubelet[2571]: E0508 00:40:15.952483 2571 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f5d787db9-8lqnx" May 8 00:40:15.955612 kubelet[2571]: E0508 00:40:15.952509 2571 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f5d787db9-8lqnx" May 8 00:40:15.959335 kubelet[2571]: E0508 00:40:15.952558 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f5d787db9-8lqnx_calico-apiserver(fbce2436-d8e1-4ed5-8f00-79e6a1ac4517)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f5d787db9-8lqnx_calico-apiserver(fbce2436-d8e1-4ed5-8f00-79e6a1ac4517)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f5d787db9-8lqnx" podUID="fbce2436-d8e1-4ed5-8f00-79e6a1ac4517" May 8 00:40:15.959335 kubelet[2571]: E0508 00:40:15.956111 2571 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:15.959335 kubelet[2571]: E0508 00:40:15.956169 2571 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9584k" May 8 00:40:15.959499 containerd[1464]: time="2025-05-08T00:40:15.955720214Z" level=error msg="encountered an error cleaning up failed sandbox \"eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:15.959499 containerd[1464]: time="2025-05-08T00:40:15.955790856Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9584k,Uid:8512146f-a4f2-4ad6-9a28-559c237b8730,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:15.959586 kubelet[2571]: E0508 00:40:15.956191 2571 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9584k" May 8 00:40:15.959586 kubelet[2571]: E0508 00:40:15.956252 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-9584k_kube-system(8512146f-a4f2-4ad6-9a28-559c237b8730)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-9584k_kube-system(8512146f-a4f2-4ad6-9a28-559c237b8730)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9584k" podUID="8512146f-a4f2-4ad6-9a28-559c237b8730" May 8 00:40:15.962748 containerd[1464]: time="2025-05-08T00:40:15.962685185Z" level=error msg="Failed to destroy network for sandbox \"5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:15.962837 containerd[1464]: time="2025-05-08T00:40:15.962774153Z" level=error msg="Failed to destroy network for sandbox \"81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:15.964372 containerd[1464]: time="2025-05-08T00:40:15.964334766Z" level=error msg="encountered an error cleaning up failed sandbox 
\"81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:15.964417 containerd[1464]: time="2025-05-08T00:40:15.964395420Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pq6w8,Uid:5e801bca-4ca7-4f8e-baa8-230995c21235,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:15.965686 kubelet[2571]: E0508 00:40:15.964638 2571 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:15.965686 kubelet[2571]: E0508 00:40:15.964735 2571 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-pq6w8" May 8 00:40:15.965686 kubelet[2571]: E0508 00:40:15.964755 2571 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-pq6w8" May 8 00:40:15.965807 containerd[1464]: time="2025-05-08T00:40:15.965397734Z" level=error msg="Failed to destroy network for sandbox \"b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:15.966763 containerd[1464]: time="2025-05-08T00:40:15.965792125Z" level=error msg="encountered an error cleaning up failed sandbox \"b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:15.966763 containerd[1464]: time="2025-05-08T00:40:15.965890640Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bb4ddbd59-kfxqq,Uid:6f09368a-85bc-4ff8-a22a-5897ae61119a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:15.966867 kubelet[2571]: E0508 00:40:15.966165 2571 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:15.966867 kubelet[2571]: E0508 00:40:15.966190 2571 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bb4ddbd59-kfxqq" May 8 00:40:15.966867 kubelet[2571]: E0508 00:40:15.966205 2571 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bb4ddbd59-kfxqq" May 8 00:40:15.967545 kubelet[2571]: E0508 00:40:15.967396 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-pq6w8_kube-system(5e801bca-4ca7-4f8e-baa8-230995c21235)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-pq6w8_kube-system(5e801bca-4ca7-4f8e-baa8-230995c21235)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-pq6w8" podUID="5e801bca-4ca7-4f8e-baa8-230995c21235" May 8 00:40:15.967841 kubelet[2571]: E0508 00:40:15.967755 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7bb4ddbd59-kfxqq_calico-system(6f09368a-85bc-4ff8-a22a-5897ae61119a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7bb4ddbd59-kfxqq_calico-system(6f09368a-85bc-4ff8-a22a-5897ae61119a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bb4ddbd59-kfxqq" podUID="6f09368a-85bc-4ff8-a22a-5897ae61119a" May 8 00:40:15.968876 containerd[1464]: time="2025-05-08T00:40:15.968717343Z" level=error msg="encountered an error cleaning up failed sandbox \"5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:15.968876 containerd[1464]: time="2025-05-08T00:40:15.968771215Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f5d787db9-gdzhq,Uid:9d13fad7-9b52-4242-8c83-7d9a65d72e32,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:15.968995 kubelet[2571]: E0508 00:40:15.968966 2571 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:15.969040 kubelet[2571]: E0508 00:40:15.969015 2571 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f5d787db9-gdzhq" May 8 00:40:15.969040 kubelet[2571]: E0508 00:40:15.969036 2571 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f5d787db9-gdzhq" May 8 00:40:15.969109 kubelet[2571]: E0508 00:40:15.969078 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f5d787db9-gdzhq_calico-apiserver(9d13fad7-9b52-4242-8c83-7d9a65d72e32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f5d787db9-gdzhq_calico-apiserver(9d13fad7-9b52-4242-8c83-7d9a65d72e32)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f5d787db9-gdzhq" podUID="9d13fad7-9b52-4242-8c83-7d9a65d72e32" May 8 00:40:15.969832 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd-shm.mount: Deactivated successfully. May 8 00:40:15.969983 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5-shm.mount: Deactivated successfully. 
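Every sandbox failure above carries the same root cause: Calico's CNI plugin cannot read /var/lib/calico/nodename, a file the calico/node container writes once it is running, so each sandbox setup or teardown fails before any networking is attempted. A hypothetical sketch of that lookup (the function is illustrative, not Calico's source):

```go
// Hypothetical sketch of the check behind the repeated sandbox errors:
// the CNI plugin reads the node name from a file that calico/node
// writes once running; until then every add/delete fails with ENOENT.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

func detectNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := detectNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node:", name)
}
```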
May 8 00:40:15.970086 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b-shm.mount: Deactivated successfully. May 8 00:40:15.970188 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0-shm.mount: Deactivated successfully. May 8 00:40:15.970305 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83-shm.mount: Deactivated successfully. May 8 00:40:16.091812 kubelet[2571]: I0508 00:40:16.091774 2571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" May 8 00:40:16.092618 containerd[1464]: time="2025-05-08T00:40:16.092552582Z" level=info msg="StopPodSandbox for \"b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd\"" May 8 00:40:16.092838 containerd[1464]: time="2025-05-08T00:40:16.092803073Z" level=info msg="Ensure that sandbox b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd in task-service has been cleanup successfully" May 8 00:40:16.093571 kubelet[2571]: I0508 00:40:16.093534 2571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" May 8 00:40:16.094243 containerd[1464]: time="2025-05-08T00:40:16.094215758Z" level=info msg="StopPodSandbox for \"5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5\"" May 8 00:40:16.094623 containerd[1464]: time="2025-05-08T00:40:16.094588258Z" level=info msg="Ensure that sandbox 5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5 in task-service has been cleanup successfully" May 8 00:40:16.095432 kubelet[2571]: I0508 00:40:16.095410 2571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" May 8 00:40:16.095929 containerd[1464]: time="2025-05-08T00:40:16.095900405Z" level=info msg="StopPodSandbox for \"aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b\"" May 8 00:40:16.096278 containerd[1464]: time="2025-05-08T00:40:16.096221778Z" level=info msg="Ensure that sandbox aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b in task-service has been cleanup successfully" May 8 00:40:16.098410 kubelet[2571]: I0508 00:40:16.098383 2571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" May 8 00:40:16.099100 containerd[1464]: time="2025-05-08T00:40:16.099064360Z" level=info msg="StopPodSandbox for \"81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0\"" May 8 00:40:16.099594 containerd[1464]: time="2025-05-08T00:40:16.099371638Z" level=info msg="Ensure that sandbox 81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0 in task-service has been cleanup successfully" May 8 00:40:16.101085 kubelet[2571]: I0508 00:40:16.101058 2571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" May 8 00:40:16.102866 containerd[1464]: time="2025-05-08T00:40:16.102833384Z" level=info msg="StopPodSandbox for \"eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83\"" May 8 00:40:16.103179 containerd[1464]: time="2025-05-08T00:40:16.102992583Z" level=info msg="Ensure 
that sandbox eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83 in task-service has been cleanup successfully" May 8 00:40:16.106352 kubelet[2571]: E0508 00:40:16.106269 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:16.107636 containerd[1464]: time="2025-05-08T00:40:16.107573993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 8 00:40:16.135210 containerd[1464]: time="2025-05-08T00:40:16.135132816Z" level=error msg="StopPodSandbox for \"b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd\" failed" error="failed to destroy network for sandbox \"b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:16.135774 kubelet[2571]: E0508 00:40:16.135630 2571 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" May 8 00:40:16.136262 kubelet[2571]: E0508 00:40:16.135740 2571 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd"} May 8 00:40:16.136262 kubelet[2571]: E0508 00:40:16.136198 2571 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6f09368a-85bc-4ff8-a22a-5897ae61119a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:16.136262 kubelet[2571]: E0508 00:40:16.136223 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6f09368a-85bc-4ff8-a22a-5897ae61119a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bb4ddbd59-kfxqq" podUID="6f09368a-85bc-4ff8-a22a-5897ae61119a" May 8 00:40:16.137844 containerd[1464]: time="2025-05-08T00:40:16.137046924Z" level=error msg="StopPodSandbox for \"aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b\" failed" error="failed to destroy network for sandbox \"aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:16.138001 kubelet[2571]: E0508 00:40:16.137197 2571 remote_runtime.go:222] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" May 8 00:40:16.138001 kubelet[2571]: E0508 00:40:16.137226 2571 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b"} May 8 00:40:16.138001 kubelet[2571]: E0508 00:40:16.137248 2571 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fbce2436-d8e1-4ed5-8f00-79e6a1ac4517\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:16.138001 kubelet[2571]: E0508 00:40:16.137266 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fbce2436-d8e1-4ed5-8f00-79e6a1ac4517\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f5d787db9-8lqnx" podUID="fbce2436-d8e1-4ed5-8f00-79e6a1ac4517" May 8 00:40:16.142556 containerd[1464]: time="2025-05-08T00:40:16.142403842Z" level=error msg="StopPodSandbox for \"5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5\" failed" error="failed to destroy network for sandbox \"5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:16.142689 kubelet[2571]: E0508 00:40:16.142636 2571 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" May 8 00:40:16.142743 kubelet[2571]: E0508 00:40:16.142702 2571 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5"} May 8 00:40:16.142743 kubelet[2571]: E0508 00:40:16.142727 2571 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9d13fad7-9b52-4242-8c83-7d9a65d72e32\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:16.142826 kubelet[2571]: E0508 00:40:16.142752 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9d13fad7-9b52-4242-8c83-7d9a65d72e32\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f5d787db9-gdzhq" podUID="9d13fad7-9b52-4242-8c83-7d9a65d72e32" May 8 00:40:16.147824 containerd[1464]: time="2025-05-08T00:40:16.147782820Z" level=error msg="StopPodSandbox for \"eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83\" failed" error="failed to destroy network for sandbox \"eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:16.148041 kubelet[2571]: E0508 00:40:16.148003 2571 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" May 8 00:40:16.148077 kubelet[2571]: E0508 00:40:16.148048 2571 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83"} May 8 00:40:16.148077 kubelet[2571]: E0508 00:40:16.148072 2571 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8512146f-a4f2-4ad6-9a28-559c237b8730\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:16.148168 kubelet[2571]: E0508 00:40:16.148092 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8512146f-a4f2-4ad6-9a28-559c237b8730\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9584k" podUID="8512146f-a4f2-4ad6-9a28-559c237b8730" May 8 00:40:16.148785 containerd[1464]: time="2025-05-08T00:40:16.148751521Z" level=error msg="StopPodSandbox for \"81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0\" failed" error="failed to destroy network for sandbox 
\"81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:16.148914 kubelet[2571]: E0508 00:40:16.148883 2571 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" May 8 00:40:16.148948 kubelet[2571]: E0508 00:40:16.148911 2571 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0"} May 8 00:40:16.148948 kubelet[2571]: E0508 00:40:16.148931 2571 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5e801bca-4ca7-4f8e-baa8-230995c21235\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:16.149010 kubelet[2571]: E0508 00:40:16.148948 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5e801bca-4ca7-4f8e-baa8-230995c21235\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-pq6w8" podUID="5e801bca-4ca7-4f8e-baa8-230995c21235" May 8 00:40:16.933924 systemd[1]: Created slice kubepods-besteffort-pod94d037e0_3318_4a96_bf33_490f8e3dd35d.slice - libcontainer container kubepods-besteffort-pod94d037e0_3318_4a96_bf33_490f8e3dd35d.slice. 
May 8 00:40:16.936428 containerd[1464]: time="2025-05-08T00:40:16.936382476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gzqv5,Uid:94d037e0-3318-4a96-bf33-490f8e3dd35d,Namespace:calico-system,Attempt:0,}" May 8 00:40:17.082346 containerd[1464]: time="2025-05-08T00:40:17.082274532Z" level=error msg="Failed to destroy network for sandbox \"a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:17.082868 containerd[1464]: time="2025-05-08T00:40:17.082829154Z" level=error msg="encountered an error cleaning up failed sandbox \"a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:17.082948 containerd[1464]: time="2025-05-08T00:40:17.082903223Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gzqv5,Uid:94d037e0-3318-4a96-bf33-490f8e3dd35d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:17.083245 kubelet[2571]: E0508 00:40:17.083197 2571 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:17.083336 kubelet[2571]: E0508 00:40:17.083264 2571 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gzqv5" May 8 00:40:17.083336 kubelet[2571]: E0508 00:40:17.083293 2571 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gzqv5" May 8 00:40:17.083406 kubelet[2571]: E0508 00:40:17.083337 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gzqv5_calico-system(94d037e0-3318-4a96-bf33-490f8e3dd35d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gzqv5_calico-system(94d037e0-3318-4a96-bf33-490f8e3dd35d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gzqv5" podUID="94d037e0-3318-4a96-bf33-490f8e3dd35d" May 8 00:40:17.085014 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0-shm.mount: Deactivated successfully. May 8 00:40:17.107966 kubelet[2571]: I0508 00:40:17.107905 2571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" May 8 00:40:17.108480 containerd[1464]: time="2025-05-08T00:40:17.108436654Z" level=info msg="StopPodSandbox for \"a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0\"" May 8 00:40:17.109019 containerd[1464]: time="2025-05-08T00:40:17.108634356Z" level=info msg="Ensure that sandbox a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0 in task-service has been cleanup successfully" May 8 00:40:17.136398 containerd[1464]: time="2025-05-08T00:40:17.136332514Z" level=error msg="StopPodSandbox for \"a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0\" failed" error="failed to destroy network for sandbox \"a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:17.136687 kubelet[2571]: E0508 00:40:17.136633 2571 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" May 8 00:40:17.137033 kubelet[2571]: E0508 00:40:17.136705 2571 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0"} May 8 00:40:17.137033 kubelet[2571]: E0508 00:40:17.136740 2571 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"94d037e0-3318-4a96-bf33-490f8e3dd35d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:17.137033 kubelet[2571]: E0508 00:40:17.136766 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"94d037e0-3318-4a96-bf33-490f8e3dd35d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-gzqv5" podUID="94d037e0-3318-4a96-bf33-490f8e3dd35d" May 8 00:40:19.302102 systemd[1]: Started sshd@10-10.0.0.74:22-10.0.0.1:53344.service - OpenSSH per-connection server daemon (10.0.0.1:53344). May 8 00:40:19.703150 sshd[3766]: Accepted publickey for core from 10.0.0.1 port 53344 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:40:19.705519 sshd[3766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:19.711176 systemd-logind[1453]: New session 11 of user core. May 8 00:40:19.718894 systemd[1]: Started session-11.scope - Session 11 of User core. May 8 00:40:19.850013 sshd[3766]: pam_unix(sshd:session): session closed for user core May 8 00:40:19.862706 systemd[1]: sshd@10-10.0.0.74:22-10.0.0.1:53344.service: Deactivated successfully. May 8 00:40:19.865258 systemd[1]: session-11.scope: Deactivated successfully. May 8 00:40:19.867878 systemd-logind[1453]: Session 11 logged out. Waiting for processes to exit. May 8 00:40:19.874748 systemd[1]: Started sshd@11-10.0.0.74:22-10.0.0.1:53350.service - OpenSSH per-connection server daemon (10.0.0.1:53350). May 8 00:40:19.876550 systemd-logind[1453]: Removed session 11. May 8 00:40:19.905402 sshd[3787]: Accepted publickey for core from 10.0.0.1 port 53350 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:40:19.908035 sshd[3787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:19.915103 systemd-logind[1453]: New session 12 of user core. May 8 00:40:19.919913 systemd[1]: Started session-12.scope - Session 12 of User core. May 8 00:40:20.192850 sshd[3787]: pam_unix(sshd:session): session closed for user core May 8 00:40:20.202469 systemd[1]: sshd@11-10.0.0.74:22-10.0.0.1:53350.service: Deactivated successfully. May 8 00:40:20.205076 systemd[1]: session-12.scope: Deactivated successfully. May 8 00:40:20.207978 systemd-logind[1453]: Session 12 logged out. Waiting for processes to exit. May 8 00:40:20.217940 systemd[1]: Started sshd@12-10.0.0.74:22-10.0.0.1:53362.service - OpenSSH per-connection server daemon (10.0.0.1:53362). May 8 00:40:20.219412 systemd-logind[1453]: Removed session 12. May 8 00:40:20.258096 sshd[3799]: Accepted publickey for core from 10.0.0.1 port 53362 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:40:20.260341 sshd[3799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:20.266389 systemd-logind[1453]: New session 13 of user core. May 8 00:40:20.270852 systemd[1]: Started session-13.scope - Session 13 of User core. May 8 00:40:20.538390 sshd[3799]: pam_unix(sshd:session): session closed for user core May 8 00:40:20.543623 systemd-logind[1453]: Session 13 logged out. Waiting for processes to exit. May 8 00:40:20.544210 systemd[1]: sshd@12-10.0.0.74:22-10.0.0.1:53362.service: Deactivated successfully. May 8 00:40:20.546441 systemd[1]: session-13.scope: Deactivated successfully. May 8 00:40:20.548173 systemd-logind[1453]: Removed session 13. May 8 00:40:21.887937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3033966279.mount: Deactivated successfully. 
May 8 00:40:22.675568 kubelet[2571]: I0508 00:40:22.675491 2571 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:40:22.676315 kubelet[2571]: E0508 00:40:22.676293 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:23.428524 containerd[1464]: time="2025-05-08T00:40:23.428406075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:23.430064 containerd[1464]: time="2025-05-08T00:40:23.429965073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 8 00:40:23.432590 containerd[1464]: time="2025-05-08T00:40:23.432509441Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:23.437395 containerd[1464]: time="2025-05-08T00:40:23.437147351Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:23.441120 containerd[1464]: time="2025-05-08T00:40:23.440783570Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 7.33313708s" May 8 00:40:23.441120 containerd[1464]: time="2025-05-08T00:40:23.440827834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 8 00:40:23.462682 containerd[1464]: time="2025-05-08T00:40:23.462618727Z" level=info msg="CreateContainer within sandbox \"fa45d0fd9e42f3f52785d181a9410e0335fb737895289eee15bb26379cdbec8c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 8 00:40:23.761160 containerd[1464]: time="2025-05-08T00:40:23.761067115Z" level=info msg="CreateContainer within sandbox \"fa45d0fd9e42f3f52785d181a9410e0335fb737895289eee15bb26379cdbec8c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0babc74d57f6c4f322cbf0b94e7a3746c0b0f453d219770e8b6a9691caba9494\"" May 8 00:40:23.761987 containerd[1464]: time="2025-05-08T00:40:23.761939263Z" level=info msg="StartContainer for \"0babc74d57f6c4f322cbf0b94e7a3746c0b0f453d219770e8b6a9691caba9494\"" May 8 00:40:23.842993 systemd[1]: Started cri-containerd-0babc74d57f6c4f322cbf0b94e7a3746c0b0f453d219770e8b6a9691caba9494.scope - libcontainer container 0babc74d57f6c4f322cbf0b94e7a3746c0b0f453d219770e8b6a9691caba9494. May 8 00:40:23.925955 containerd[1464]: time="2025-05-08T00:40:23.925896431Z" level=info msg="StartContainer for \"0babc74d57f6c4f322cbf0b94e7a3746c0b0f453d219770e8b6a9691caba9494\" returns successfully" May 8 00:40:23.961517 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 8 00:40:23.961672 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
May 8 00:40:24.429155 kubelet[2571]: E0508 00:40:24.429092 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:24.429818 kubelet[2571]: E0508 00:40:24.429264 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:25.555816 systemd[1]: Started sshd@13-10.0.0.74:22-10.0.0.1:53376.service - OpenSSH per-connection server daemon (10.0.0.1:53376). May 8 00:40:25.594889 sshd[3887]: Accepted publickey for core from 10.0.0.1 port 53376 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:40:25.597097 sshd[3887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:25.602538 systemd-logind[1453]: New session 14 of user core. May 8 00:40:25.612812 systemd[1]: Started session-14.scope - Session 14 of User core. May 8 00:40:25.733425 sshd[3887]: pam_unix(sshd:session): session closed for user core May 8 00:40:25.737518 systemd[1]: sshd@13-10.0.0.74:22-10.0.0.1:53376.service: Deactivated successfully. May 8 00:40:25.739479 systemd[1]: session-14.scope: Deactivated successfully. May 8 00:40:25.740292 systemd-logind[1453]: Session 14 logged out. Waiting for processes to exit. May 8 00:40:25.741291 systemd-logind[1453]: Removed session 14. May 8 00:40:26.261695 kernel: bpftool[4032]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 8 00:40:26.491073 systemd-networkd[1405]: vxlan.calico: Link UP May 8 00:40:26.491083 systemd-networkd[1405]: vxlan.calico: Gained carrier May 8 00:40:26.929226 containerd[1464]: time="2025-05-08T00:40:26.929097750Z" level=info msg="StopPodSandbox for \"aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b\"" May 8 00:40:27.212483 kubelet[2571]: I0508 00:40:27.212135 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6ln6s" podStartSLOduration=4.774453783 podStartE2EDuration="26.212109555s" podCreationTimestamp="2025-05-08 00:40:01 +0000 UTC" firstStartedPulling="2025-05-08 00:40:02.004098983 +0000 UTC m=+25.168361859" lastFinishedPulling="2025-05-08 00:40:23.441754755 +0000 UTC m=+46.606017631" observedRunningTime="2025-05-08 00:40:24.794406982 +0000 UTC m=+47.958669858" watchObservedRunningTime="2025-05-08 00:40:27.212109555 +0000 UTC m=+50.376372431" May 8 00:40:27.304721 containerd[1464]: 2025-05-08 00:40:27.213 [INFO][4119] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" May 8 00:40:27.304721 containerd[1464]: 2025-05-08 00:40:27.213 [INFO][4119] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" iface="eth0" netns="/var/run/netns/cni-49f90c38-b778-61f9-d268-86e1449b7e06" May 8 00:40:27.304721 containerd[1464]: 2025-05-08 00:40:27.213 [INFO][4119] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" iface="eth0" netns="/var/run/netns/cni-49f90c38-b778-61f9-d268-86e1449b7e06" May 8 00:40:27.304721 containerd[1464]: 2025-05-08 00:40:27.214 [INFO][4119] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" iface="eth0" netns="/var/run/netns/cni-49f90c38-b778-61f9-d268-86e1449b7e06" May 8 00:40:27.304721 containerd[1464]: 2025-05-08 00:40:27.214 [INFO][4119] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" May 8 00:40:27.304721 containerd[1464]: 2025-05-08 00:40:27.214 [INFO][4119] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" May 8 00:40:27.304721 containerd[1464]: 2025-05-08 00:40:27.275 [INFO][4129] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" HandleID="k8s-pod-network.aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" Workload="localhost-k8s-calico--apiserver--7f5d787db9--8lqnx-eth0" May 8 00:40:27.304721 containerd[1464]: 2025-05-08 00:40:27.276 [INFO][4129] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:27.304721 containerd[1464]: 2025-05-08 00:40:27.276 [INFO][4129] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:27.304721 containerd[1464]: 2025-05-08 00:40:27.296 [WARNING][4129] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" HandleID="k8s-pod-network.aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" Workload="localhost-k8s-calico--apiserver--7f5d787db9--8lqnx-eth0" May 8 00:40:27.304721 containerd[1464]: 2025-05-08 00:40:27.296 [INFO][4129] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" HandleID="k8s-pod-network.aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" Workload="localhost-k8s-calico--apiserver--7f5d787db9--8lqnx-eth0" May 8 00:40:27.304721 containerd[1464]: 2025-05-08 00:40:27.298 [INFO][4129] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:27.304721 containerd[1464]: 2025-05-08 00:40:27.301 [INFO][4119] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" May 8 00:40:27.305120 containerd[1464]: time="2025-05-08T00:40:27.304941007Z" level=info msg="TearDown network for sandbox \"aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b\" successfully" May 8 00:40:27.305120 containerd[1464]: time="2025-05-08T00:40:27.304970682Z" level=info msg="StopPodSandbox for \"aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b\" returns successfully" May 8 00:40:27.305776 containerd[1464]: time="2025-05-08T00:40:27.305753192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f5d787db9-8lqnx,Uid:fbce2436-d8e1-4ed5-8f00-79e6a1ac4517,Namespace:calico-apiserver,Attempt:1,}" May 8 00:40:27.307768 systemd[1]: run-netns-cni\x2d49f90c38\x2db778\x2d61f9\x2dd268\x2d86e1449b7e06.mount: Deactivated successfully. 
May 8 00:40:27.807859 systemd-networkd[1405]: vxlan.calico: Gained IPv6LL May 8 00:40:28.239980 systemd-networkd[1405]: cali85acbba915c: Link UP May 8 00:40:28.240450 systemd-networkd[1405]: cali85acbba915c: Gained carrier May 8 00:40:28.284448 containerd[1464]: 2025-05-08 00:40:28.106 [INFO][4136] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7f5d787db9--8lqnx-eth0 calico-apiserver-7f5d787db9- calico-apiserver fbce2436-d8e1-4ed5-8f00-79e6a1ac4517 936 0 2025-05-08 00:40:01 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f5d787db9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7f5d787db9-8lqnx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali85acbba915c [] []}} ContainerID="7b42086e45417877d612d2e1e05085eaa7255e8818e0ee602200b5d87487e6f9" Namespace="calico-apiserver" Pod="calico-apiserver-7f5d787db9-8lqnx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5d787db9--8lqnx-" May 8 00:40:28.284448 containerd[1464]: 2025-05-08 00:40:28.107 [INFO][4136] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7b42086e45417877d612d2e1e05085eaa7255e8818e0ee602200b5d87487e6f9" Namespace="calico-apiserver" Pod="calico-apiserver-7f5d787db9-8lqnx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5d787db9--8lqnx-eth0" May 8 00:40:28.284448 containerd[1464]: 2025-05-08 00:40:28.142 [INFO][4150] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b42086e45417877d612d2e1e05085eaa7255e8818e0ee602200b5d87487e6f9" HandleID="k8s-pod-network.7b42086e45417877d612d2e1e05085eaa7255e8818e0ee602200b5d87487e6f9" Workload="localhost-k8s-calico--apiserver--7f5d787db9--8lqnx-eth0" May 8 00:40:28.284448 containerd[1464]: 2025-05-08 00:40:28.149 [INFO][4150] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7b42086e45417877d612d2e1e05085eaa7255e8818e0ee602200b5d87487e6f9" HandleID="k8s-pod-network.7b42086e45417877d612d2e1e05085eaa7255e8818e0ee602200b5d87487e6f9" Workload="localhost-k8s-calico--apiserver--7f5d787db9--8lqnx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ad340), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7f5d787db9-8lqnx", "timestamp":"2025-05-08 00:40:28.142045453 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:40:28.284448 containerd[1464]: 2025-05-08 00:40:28.150 [INFO][4150] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:28.284448 containerd[1464]: 2025-05-08 00:40:28.150 [INFO][4150] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:40:28.284448 containerd[1464]: 2025-05-08 00:40:28.150 [INFO][4150] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:40:28.284448 containerd[1464]: 2025-05-08 00:40:28.167 [INFO][4150] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7b42086e45417877d612d2e1e05085eaa7255e8818e0ee602200b5d87487e6f9" host="localhost" May 8 00:40:28.284448 containerd[1464]: 2025-05-08 00:40:28.172 [INFO][4150] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:40:28.284448 containerd[1464]: 2025-05-08 00:40:28.177 [INFO][4150] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:40:28.284448 containerd[1464]: 2025-05-08 00:40:28.178 [INFO][4150] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:40:28.284448 containerd[1464]: 2025-05-08 00:40:28.180 [INFO][4150] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:40:28.284448 containerd[1464]: 2025-05-08 00:40:28.180 [INFO][4150] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7b42086e45417877d612d2e1e05085eaa7255e8818e0ee602200b5d87487e6f9" host="localhost" May 8 00:40:28.284448 containerd[1464]: 2025-05-08 00:40:28.181 [INFO][4150] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7b42086e45417877d612d2e1e05085eaa7255e8818e0ee602200b5d87487e6f9 May 8 00:40:28.284448 containerd[1464]: 2025-05-08 00:40:28.213 [INFO][4150] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7b42086e45417877d612d2e1e05085eaa7255e8818e0ee602200b5d87487e6f9" host="localhost" May 8 00:40:28.284448 containerd[1464]: 2025-05-08 00:40:28.233 [INFO][4150] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.7b42086e45417877d612d2e1e05085eaa7255e8818e0ee602200b5d87487e6f9" host="localhost" May 8 00:40:28.284448 containerd[1464]: 2025-05-08 00:40:28.233 [INFO][4150] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.7b42086e45417877d612d2e1e05085eaa7255e8818e0ee602200b5d87487e6f9" host="localhost" May 8 00:40:28.284448 containerd[1464]: 2025-05-08 00:40:28.233 [INFO][4150] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:40:28.284448 containerd[1464]: 2025-05-08 00:40:28.233 [INFO][4150] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="7b42086e45417877d612d2e1e05085eaa7255e8818e0ee602200b5d87487e6f9" HandleID="k8s-pod-network.7b42086e45417877d612d2e1e05085eaa7255e8818e0ee602200b5d87487e6f9" Workload="localhost-k8s-calico--apiserver--7f5d787db9--8lqnx-eth0" May 8 00:40:28.286470 containerd[1464]: 2025-05-08 00:40:28.236 [INFO][4136] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7b42086e45417877d612d2e1e05085eaa7255e8818e0ee602200b5d87487e6f9" Namespace="calico-apiserver" Pod="calico-apiserver-7f5d787db9-8lqnx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5d787db9--8lqnx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f5d787db9--8lqnx-eth0", GenerateName:"calico-apiserver-7f5d787db9-", Namespace:"calico-apiserver", SelfLink:"", UID:"fbce2436-d8e1-4ed5-8f00-79e6a1ac4517", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f5d787db9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7f5d787db9-8lqnx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali85acbba915c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:28.286470 containerd[1464]: 2025-05-08 00:40:28.237 [INFO][4136] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="7b42086e45417877d612d2e1e05085eaa7255e8818e0ee602200b5d87487e6f9" Namespace="calico-apiserver" Pod="calico-apiserver-7f5d787db9-8lqnx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5d787db9--8lqnx-eth0" May 8 00:40:28.286470 containerd[1464]: 2025-05-08 00:40:28.237 [INFO][4136] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85acbba915c ContainerID="7b42086e45417877d612d2e1e05085eaa7255e8818e0ee602200b5d87487e6f9" Namespace="calico-apiserver" Pod="calico-apiserver-7f5d787db9-8lqnx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5d787db9--8lqnx-eth0" May 8 00:40:28.286470 containerd[1464]: 2025-05-08 00:40:28.240 [INFO][4136] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b42086e45417877d612d2e1e05085eaa7255e8818e0ee602200b5d87487e6f9" Namespace="calico-apiserver" Pod="calico-apiserver-7f5d787db9-8lqnx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5d787db9--8lqnx-eth0" May 8 00:40:28.286470 containerd[1464]: 2025-05-08 00:40:28.240 [INFO][4136] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7b42086e45417877d612d2e1e05085eaa7255e8818e0ee602200b5d87487e6f9"
Namespace="calico-apiserver" Pod="calico-apiserver-7f5d787db9-8lqnx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5d787db9--8lqnx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f5d787db9--8lqnx-eth0", GenerateName:"calico-apiserver-7f5d787db9-", Namespace:"calico-apiserver", SelfLink:"", UID:"fbce2436-d8e1-4ed5-8f00-79e6a1ac4517", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f5d787db9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7b42086e45417877d612d2e1e05085eaa7255e8818e0ee602200b5d87487e6f9", Pod:"calico-apiserver-7f5d787db9-8lqnx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali85acbba915c", MAC:"6a:e4:5b:09:0e:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:28.286470 containerd[1464]: 2025-05-08 00:40:28.281 [INFO][4136] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7b42086e45417877d612d2e1e05085eaa7255e8818e0ee602200b5d87487e6f9" Namespace="calico-apiserver" Pod="calico-apiserver-7f5d787db9-8lqnx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5d787db9--8lqnx-eth0" May 8 00:40:28.341430 containerd[1464]: time="2025-05-08T00:40:28.341288681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:28.341430 containerd[1464]: time="2025-05-08T00:40:28.341391144Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:28.341430 containerd[1464]: time="2025-05-08T00:40:28.341405711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:28.341610 containerd[1464]: time="2025-05-08T00:40:28.341492784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:28.362813 systemd[1]: Started cri-containerd-7b42086e45417877d612d2e1e05085eaa7255e8818e0ee602200b5d87487e6f9.scope - libcontainer container 7b42086e45417877d612d2e1e05085eaa7255e8818e0ee602200b5d87487e6f9. 
May 8 00:40:28.376419 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:40:28.402272 containerd[1464]: time="2025-05-08T00:40:28.402212330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f5d787db9-8lqnx,Uid:fbce2436-d8e1-4ed5-8f00-79e6a1ac4517,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7b42086e45417877d612d2e1e05085eaa7255e8818e0ee602200b5d87487e6f9\"" May 8 00:40:28.403955 containerd[1464]: time="2025-05-08T00:40:28.403921768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 00:40:28.928991 containerd[1464]: time="2025-05-08T00:40:28.928930990Z" level=info msg="StopPodSandbox for \"a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0\"" May 8 00:40:29.006648 containerd[1464]: 2025-05-08 00:40:28.972 [INFO][4232] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" May 8 00:40:29.006648 containerd[1464]: 2025-05-08 00:40:28.972 [INFO][4232] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" iface="eth0" netns="/var/run/netns/cni-b78b736e-763a-f290-2204-40131b194a30" May 8 00:40:29.006648 containerd[1464]: 2025-05-08 00:40:28.972 [INFO][4232] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" iface="eth0" netns="/var/run/netns/cni-b78b736e-763a-f290-2204-40131b194a30" May 8 00:40:29.006648 containerd[1464]: 2025-05-08 00:40:28.972 [INFO][4232] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" iface="eth0" netns="/var/run/netns/cni-b78b736e-763a-f290-2204-40131b194a30" May 8 00:40:29.006648 containerd[1464]: 2025-05-08 00:40:28.973 [INFO][4232] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" May 8 00:40:29.006648 containerd[1464]: 2025-05-08 00:40:28.973 [INFO][4232] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" May 8 00:40:29.006648 containerd[1464]: 2025-05-08 00:40:28.996 [INFO][4241] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" HandleID="k8s-pod-network.a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" Workload="localhost-k8s-csi--node--driver--gzqv5-eth0" May 8 00:40:29.006648 containerd[1464]: 2025-05-08 00:40:28.996 [INFO][4241] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:29.006648 containerd[1464]: 2025-05-08 00:40:28.996 [INFO][4241] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:29.006648 containerd[1464]: 2025-05-08 00:40:29.000 [WARNING][4241] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" HandleID="k8s-pod-network.a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" Workload="localhost-k8s-csi--node--driver--gzqv5-eth0" May 8 00:40:29.006648 containerd[1464]: 2025-05-08 00:40:29.001 [INFO][4241] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" HandleID="k8s-pod-network.a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" Workload="localhost-k8s-csi--node--driver--gzqv5-eth0" May 8 00:40:29.006648 containerd[1464]: 2025-05-08 00:40:29.002 [INFO][4241] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:29.006648 containerd[1464]: 2025-05-08 00:40:29.004 [INFO][4232] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" May 8 00:40:29.007257 containerd[1464]: time="2025-05-08T00:40:29.006928742Z" level=info msg="TearDown network for sandbox \"a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0\" successfully" May 8 00:40:29.007257 containerd[1464]: time="2025-05-08T00:40:29.006963777Z" level=info msg="StopPodSandbox for \"a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0\" returns successfully" May 8 00:40:29.008078 containerd[1464]: time="2025-05-08T00:40:29.008013738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gzqv5,Uid:94d037e0-3318-4a96-bf33-490f8e3dd35d,Namespace:calico-system,Attempt:1,}" May 8 00:40:29.010295 systemd[1]: run-netns-cni\x2db78b736e\x2d763a\x2df290\x2d2204\x2d40131b194a30.mount: Deactivated successfully. May 8 00:40:29.117218 systemd-networkd[1405]: calif8fd6e90ce7: Link UP May 8 00:40:29.118421 systemd-networkd[1405]: calif8fd6e90ce7: Gained carrier May 8 00:40:29.132126 containerd[1464]: 2025-05-08 00:40:29.051 [INFO][4248] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--gzqv5-eth0 csi-node-driver- calico-system 94d037e0-3318-4a96-bf33-490f8e3dd35d 944 0 2025-05-08 00:40:01 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-gzqv5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif8fd6e90ce7 [] []}} ContainerID="b9d7b3cf44d128d3124b661d035b500aad429763e8a96cf6c6c08e95c9ee5b77" Namespace="calico-system" Pod="csi-node-driver-gzqv5" WorkloadEndpoint="localhost-k8s-csi--node--driver--gzqv5-" May 8 00:40:29.132126 containerd[1464]: 2025-05-08 00:40:29.051 [INFO][4248] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b9d7b3cf44d128d3124b661d035b500aad429763e8a96cf6c6c08e95c9ee5b77" Namespace="calico-system" Pod="csi-node-driver-gzqv5" WorkloadEndpoint="localhost-k8s-csi--node--driver--gzqv5-eth0" May 8 00:40:29.132126 containerd[1464]: 2025-05-08 00:40:29.078 [INFO][4263] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b9d7b3cf44d128d3124b661d035b500aad429763e8a96cf6c6c08e95c9ee5b77" HandleID="k8s-pod-network.b9d7b3cf44d128d3124b661d035b500aad429763e8a96cf6c6c08e95c9ee5b77" Workload="localhost-k8s-csi--node--driver--gzqv5-eth0" May 8 00:40:29.132126
containerd[1464]: 2025-05-08 00:40:29.085 [INFO][4263] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b9d7b3cf44d128d3124b661d035b500aad429763e8a96cf6c6c08e95c9ee5b77" HandleID="k8s-pod-network.b9d7b3cf44d128d3124b661d035b500aad429763e8a96cf6c6c08e95c9ee5b77" Workload="localhost-k8s-csi--node--driver--gzqv5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030b2c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-gzqv5", "timestamp":"2025-05-08 00:40:29.078236967 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:40:29.132126 containerd[1464]: 2025-05-08 00:40:29.085 [INFO][4263] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:29.132126 containerd[1464]: 2025-05-08 00:40:29.085 [INFO][4263] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:29.132126 containerd[1464]: 2025-05-08 00:40:29.085 [INFO][4263] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:40:29.132126 containerd[1464]: 2025-05-08 00:40:29.087 [INFO][4263] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b9d7b3cf44d128d3124b661d035b500aad429763e8a96cf6c6c08e95c9ee5b77" host="localhost" May 8 00:40:29.132126 containerd[1464]: 2025-05-08 00:40:29.091 [INFO][4263] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:40:29.132126 containerd[1464]: 2025-05-08 00:40:29.094 [INFO][4263] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:40:29.132126 containerd[1464]: 2025-05-08 00:40:29.096 [INFO][4263] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:40:29.132126 containerd[1464]: 2025-05-08 00:40:29.097 [INFO][4263] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:40:29.132126 containerd[1464]: 2025-05-08 00:40:29.097 [INFO][4263] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b9d7b3cf44d128d3124b661d035b500aad429763e8a96cf6c6c08e95c9ee5b77" host="localhost" May 8 00:40:29.132126 containerd[1464]: 2025-05-08 00:40:29.099 [INFO][4263] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b9d7b3cf44d128d3124b661d035b500aad429763e8a96cf6c6c08e95c9ee5b77 May 8 00:40:29.132126 containerd[1464]: 2025-05-08 00:40:29.105 [INFO][4263] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b9d7b3cf44d128d3124b661d035b500aad429763e8a96cf6c6c08e95c9ee5b77" host="localhost" May 8 00:40:29.132126 containerd[1464]: 2025-05-08 00:40:29.112 [INFO][4263] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.b9d7b3cf44d128d3124b661d035b500aad429763e8a96cf6c6c08e95c9ee5b77" host="localhost" May 8 00:40:29.132126 containerd[1464]: 2025-05-08 00:40:29.112 [INFO][4263] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b9d7b3cf44d128d3124b661d035b500aad429763e8a96cf6c6c08e95c9ee5b77" host="localhost" May 8 00:40:29.132126 containerd[1464]: 2025-05-08 00:40:29.112 [INFO][4263] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:40:29.132126 containerd[1464]: 2025-05-08 00:40:29.112 [INFO][4263] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b9d7b3cf44d128d3124b661d035b500aad429763e8a96cf6c6c08e95c9ee5b77" HandleID="k8s-pod-network.b9d7b3cf44d128d3124b661d035b500aad429763e8a96cf6c6c08e95c9ee5b77" Workload="localhost-k8s-csi--node--driver--gzqv5-eth0" May 8 00:40:29.132874 containerd[1464]: 2025-05-08 00:40:29.115 [INFO][4248] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b9d7b3cf44d128d3124b661d035b500aad429763e8a96cf6c6c08e95c9ee5b77" Namespace="calico-system" Pod="csi-node-driver-gzqv5" WorkloadEndpoint="localhost-k8s-csi--node--driver--gzqv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gzqv5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"94d037e0-3318-4a96-bf33-490f8e3dd35d", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-gzqv5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif8fd6e90ce7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:29.132874 containerd[1464]: 2025-05-08 00:40:29.115 [INFO][4248] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b9d7b3cf44d128d3124b661d035b500aad429763e8a96cf6c6c08e95c9ee5b77" Namespace="calico-system" Pod="csi-node-driver-gzqv5" WorkloadEndpoint="localhost-k8s-csi--node--driver--gzqv5-eth0" May 8 00:40:29.132874 containerd[1464]: 2025-05-08 00:40:29.115 [INFO][4248] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif8fd6e90ce7 ContainerID="b9d7b3cf44d128d3124b661d035b500aad429763e8a96cf6c6c08e95c9ee5b77" Namespace="calico-system" Pod="csi-node-driver-gzqv5" WorkloadEndpoint="localhost-k8s-csi--node--driver--gzqv5-eth0" May 8 00:40:29.132874 containerd[1464]: 2025-05-08 00:40:29.117 [INFO][4248] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b9d7b3cf44d128d3124b661d035b500aad429763e8a96cf6c6c08e95c9ee5b77" Namespace="calico-system" Pod="csi-node-driver-gzqv5" WorkloadEndpoint="localhost-k8s-csi--node--driver--gzqv5-eth0" May 8 00:40:29.132874 containerd[1464]: 2025-05-08 00:40:29.118 [INFO][4248] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b9d7b3cf44d128d3124b661d035b500aad429763e8a96cf6c6c08e95c9ee5b77" Namespace="calico-system" Pod="csi-node-driver-gzqv5" WorkloadEndpoint="localhost-k8s-csi--node--driver--gzqv5-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gzqv5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"94d037e0-3318-4a96-bf33-490f8e3dd35d", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b9d7b3cf44d128d3124b661d035b500aad429763e8a96cf6c6c08e95c9ee5b77", Pod:"csi-node-driver-gzqv5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif8fd6e90ce7", MAC:"b2:e2:88:b3:20:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:29.132874 containerd[1464]: 2025-05-08 00:40:29.129 [INFO][4248] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b9d7b3cf44d128d3124b661d035b500aad429763e8a96cf6c6c08e95c9ee5b77" Namespace="calico-system" Pod="csi-node-driver-gzqv5" WorkloadEndpoint="localhost-k8s-csi--node--driver--gzqv5-eth0" May 8 00:40:29.154903 containerd[1464]: time="2025-05-08T00:40:29.154777677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:29.154903 containerd[1464]: time="2025-05-08T00:40:29.154856484Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:29.154903 containerd[1464]: time="2025-05-08T00:40:29.154869068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:29.155151 containerd[1464]: time="2025-05-08T00:40:29.154970680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:29.181873 systemd[1]: Started cri-containerd-b9d7b3cf44d128d3124b661d035b500aad429763e8a96cf6c6c08e95c9ee5b77.scope - libcontainer container b9d7b3cf44d128d3124b661d035b500aad429763e8a96cf6c6c08e95c9ee5b77. 
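Between the two endpoint dumps above, the CNI plugin fills in the endpoint: it picks the host-side veth name calif8fd6e90ce7, disables IPv4 forwarding on it, and only the second dump carries the ContainerID and the MAC (b2:e2:88:b3:20:ef). The dumps are Go struct literals, so key fields can be recovered mechanically; a small sketch, assuming only the field syntax visible above (the sample string is abridged from the log):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Field syntax exactly as Calico prints it in the endpoint dumps above.
        dump := `InterfaceName:"calif8fd6e90ce7", MAC:"b2:e2:88:b3:20:ef", Ports:[]v3.WorkloadEndpointPort(nil)`

        re := regexp.MustCompile(`InterfaceName:"([^"]+)", MAC:"([^"]*)"`)
        if m := re.FindStringSubmatch(dump); m != nil {
            fmt.Printf("host-side veth=%s mac=%s\n", m[1], m[2])
        }
    }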
May 8 00:40:29.193802 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:40:29.204619 containerd[1464]: time="2025-05-08T00:40:29.204525007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gzqv5,Uid:94d037e0-3318-4a96-bf33-490f8e3dd35d,Namespace:calico-system,Attempt:1,} returns sandbox id \"b9d7b3cf44d128d3124b661d035b500aad429763e8a96cf6c6c08e95c9ee5b77\"" May 8 00:40:29.345514 systemd-networkd[1405]: cali85acbba915c: Gained IPv6LL May 8 00:40:29.929075 containerd[1464]: time="2025-05-08T00:40:29.929022361Z" level=info msg="StopPodSandbox for \"eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83\"" May 8 00:40:30.007308 containerd[1464]: 2025-05-08 00:40:29.972 [INFO][4341] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" May 8 00:40:30.007308 containerd[1464]: 2025-05-08 00:40:29.972 [INFO][4341] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" iface="eth0" netns="/var/run/netns/cni-b46cde2d-4ef4-a54b-6a05-6bf8c4fcf6b6" May 8 00:40:30.007308 containerd[1464]: 2025-05-08 00:40:29.973 [INFO][4341] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" iface="eth0" netns="/var/run/netns/cni-b46cde2d-4ef4-a54b-6a05-6bf8c4fcf6b6" May 8 00:40:30.007308 containerd[1464]: 2025-05-08 00:40:29.973 [INFO][4341] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" iface="eth0" netns="/var/run/netns/cni-b46cde2d-4ef4-a54b-6a05-6bf8c4fcf6b6" May 8 00:40:30.007308 containerd[1464]: 2025-05-08 00:40:29.973 [INFO][4341] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" May 8 00:40:30.007308 containerd[1464]: 2025-05-08 00:40:29.973 [INFO][4341] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" May 8 00:40:30.007308 containerd[1464]: 2025-05-08 00:40:29.993 [INFO][4349] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" HandleID="k8s-pod-network.eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" Workload="localhost-k8s-coredns--7db6d8ff4d--9584k-eth0" May 8 00:40:30.007308 containerd[1464]: 2025-05-08 00:40:29.993 [INFO][4349] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:30.007308 containerd[1464]: 2025-05-08 00:40:29.994 [INFO][4349] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:30.007308 containerd[1464]: 2025-05-08 00:40:30.000 [WARNING][4349] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" HandleID="k8s-pod-network.eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" Workload="localhost-k8s-coredns--7db6d8ff4d--9584k-eth0" May 8 00:40:30.007308 containerd[1464]: 2025-05-08 00:40:30.000 [INFO][4349] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" HandleID="k8s-pod-network.eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" Workload="localhost-k8s-coredns--7db6d8ff4d--9584k-eth0" May 8 00:40:30.007308 containerd[1464]: 2025-05-08 00:40:30.002 [INFO][4349] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:30.007308 containerd[1464]: 2025-05-08 00:40:30.004 [INFO][4341] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" May 8 00:40:30.007877 containerd[1464]: time="2025-05-08T00:40:30.007496278Z" level=info msg="TearDown network for sandbox \"eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83\" successfully" May 8 00:40:30.007877 containerd[1464]: time="2025-05-08T00:40:30.007521656Z" level=info msg="StopPodSandbox for \"eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83\" returns successfully" May 8 00:40:30.008019 kubelet[2571]: E0508 00:40:30.007964 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:30.009452 containerd[1464]: time="2025-05-08T00:40:30.009098416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9584k,Uid:8512146f-a4f2-4ad6-9a28-559c237b8730,Namespace:kube-system,Attempt:1,}" May 8 00:40:30.010277 systemd[1]: run-netns-cni\x2db46cde2d\x2d4ef4\x2da54b\x2d6a05\x2d6bf8c4fcf6b6.mount: Deactivated successfully. 
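The kubelet warning above ("Nameserver limits exceeded") reflects the glibc resolver's cap of three nameserver entries in resolv.conf: kubelet keeps the first three and logs the applied line. A sketch of that truncation; the first three addresses are from the log, the fourth is hypothetical so the cap actually fires:

    package main

    import "fmt"

    // glibc's resolver reads at most three nameserver entries from resolv.conf,
    // which is the limit behind kubelet's "Nameserver limits exceeded" warning.
    const maxNameservers = 3

    func capNameservers(ns []string) (applied []string, truncated bool) {
        if len(ns) <= maxNameservers {
            return ns, false
        }
        return ns[:maxNameservers], true
    }

    func main() {
        // First three copied from the log; 192.0.2.53 is hypothetical (TEST-NET).
        applied, truncated := capNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.53"})
        fmt.Println("applied:", applied, "truncated:", truncated)
    }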
May 8 00:40:30.133111 systemd-networkd[1405]: cali204ab6c5f6b: Link UP May 8 00:40:30.134316 systemd-networkd[1405]: cali204ab6c5f6b: Gained carrier May 8 00:40:30.147912 containerd[1464]: 2025-05-08 00:40:30.069 [INFO][4359] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--9584k-eth0 coredns-7db6d8ff4d- kube-system 8512146f-a4f2-4ad6-9a28-559c237b8730 952 0 2025-05-08 00:39:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-9584k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali204ab6c5f6b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2c8e311842d896edc7784d5a9f83b0ca83ee35b9c0bbb03d3020d4abd9e4f340" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9584k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9584k-" May 8 00:40:30.147912 containerd[1464]: 2025-05-08 00:40:30.069 [INFO][4359] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2c8e311842d896edc7784d5a9f83b0ca83ee35b9c0bbb03d3020d4abd9e4f340" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9584k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9584k-eth0" May 8 00:40:30.147912 containerd[1464]: 2025-05-08 00:40:30.095 [INFO][4372] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2c8e311842d896edc7784d5a9f83b0ca83ee35b9c0bbb03d3020d4abd9e4f340" HandleID="k8s-pod-network.2c8e311842d896edc7784d5a9f83b0ca83ee35b9c0bbb03d3020d4abd9e4f340" Workload="localhost-k8s-coredns--7db6d8ff4d--9584k-eth0" May 8 00:40:30.147912 containerd[1464]: 2025-05-08 00:40:30.102 [INFO][4372] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2c8e311842d896edc7784d5a9f83b0ca83ee35b9c0bbb03d3020d4abd9e4f340" HandleID="k8s-pod-network.2c8e311842d896edc7784d5a9f83b0ca83ee35b9c0bbb03d3020d4abd9e4f340" Workload="localhost-k8s-coredns--7db6d8ff4d--9584k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000583b40), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-9584k", "timestamp":"2025-05-08 00:40:30.095630394 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:40:30.147912 containerd[1464]: 2025-05-08 00:40:30.102 [INFO][4372] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:30.147912 containerd[1464]: 2025-05-08 00:40:30.102 [INFO][4372] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:40:30.147912 containerd[1464]: 2025-05-08 00:40:30.102 [INFO][4372] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:40:30.147912 containerd[1464]: 2025-05-08 00:40:30.103 [INFO][4372] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2c8e311842d896edc7784d5a9f83b0ca83ee35b9c0bbb03d3020d4abd9e4f340" host="localhost" May 8 00:40:30.147912 containerd[1464]: 2025-05-08 00:40:30.107 [INFO][4372] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:40:30.147912 containerd[1464]: 2025-05-08 00:40:30.112 [INFO][4372] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:40:30.147912 containerd[1464]: 2025-05-08 00:40:30.113 [INFO][4372] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:40:30.147912 containerd[1464]: 2025-05-08 00:40:30.116 [INFO][4372] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:40:30.147912 containerd[1464]: 2025-05-08 00:40:30.116 [INFO][4372] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2c8e311842d896edc7784d5a9f83b0ca83ee35b9c0bbb03d3020d4abd9e4f340" host="localhost" May 8 00:40:30.147912 containerd[1464]: 2025-05-08 00:40:30.117 [INFO][4372] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2c8e311842d896edc7784d5a9f83b0ca83ee35b9c0bbb03d3020d4abd9e4f340 May 8 00:40:30.147912 containerd[1464]: 2025-05-08 00:40:30.120 [INFO][4372] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2c8e311842d896edc7784d5a9f83b0ca83ee35b9c0bbb03d3020d4abd9e4f340" host="localhost" May 8 00:40:30.147912 containerd[1464]: 2025-05-08 00:40:30.127 [INFO][4372] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.2c8e311842d896edc7784d5a9f83b0ca83ee35b9c0bbb03d3020d4abd9e4f340" host="localhost" May 8 00:40:30.147912 containerd[1464]: 2025-05-08 00:40:30.127 [INFO][4372] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2c8e311842d896edc7784d5a9f83b0ca83ee35b9c0bbb03d3020d4abd9e4f340" host="localhost" May 8 00:40:30.147912 containerd[1464]: 2025-05-08 00:40:30.127 [INFO][4372] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
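This second transaction claims the next free address in the same block, 192.168.88.131, for coredns-7db6d8ff4d-9584k. The sequential claims fall out of a "first unused ordinal in the block" search; the deliberately naive allocator below illustrates the idea. Calico's real ipam.go (cited in the traces) also tracks handles, attributes, and reservations, and who holds .128/.129 is outside this excerpt, so pre-marking them is an assumption made only so the output lines up with this log:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // block is a naive next-free allocator over one /26, for illustration only.
    type block struct {
        prefix netip.Prefix
        used   map[netip.Addr]bool
    }

    func (b *block) assign() (netip.Addr, bool) {
        for a := b.prefix.Addr(); b.prefix.Contains(a); a = a.Next() {
            if !b.used[a] {
                b.used[a] = true
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        b := &block{prefix: netip.MustParsePrefix("192.168.88.128/26"), used: map[netip.Addr]bool{}}
        // Assumption: .128 and .129 were taken before this excerpt begins.
        b.used[netip.MustParseAddr("192.168.88.128")] = true
        b.used[netip.MustParseAddr("192.168.88.129")] = true
        for i := 0; i < 2; i++ {
            a, _ := b.assign()
            fmt.Println("assigned", a) // 192.168.88.130, then 192.168.88.131
        }
    }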
May 8 00:40:30.147912 containerd[1464]: 2025-05-08 00:40:30.127 [INFO][4372] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2c8e311842d896edc7784d5a9f83b0ca83ee35b9c0bbb03d3020d4abd9e4f340" HandleID="k8s-pod-network.2c8e311842d896edc7784d5a9f83b0ca83ee35b9c0bbb03d3020d4abd9e4f340" Workload="localhost-k8s-coredns--7db6d8ff4d--9584k-eth0" May 8 00:40:30.148551 containerd[1464]: 2025-05-08 00:40:30.130 [INFO][4359] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2c8e311842d896edc7784d5a9f83b0ca83ee35b9c0bbb03d3020d4abd9e4f340" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9584k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9584k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--9584k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8512146f-a4f2-4ad6-9a28-559c237b8730", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-9584k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali204ab6c5f6b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:30.148551 containerd[1464]: 2025-05-08 00:40:30.130 [INFO][4359] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2c8e311842d896edc7784d5a9f83b0ca83ee35b9c0bbb03d3020d4abd9e4f340" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9584k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9584k-eth0" May 8 00:40:30.148551 containerd[1464]: 2025-05-08 00:40:30.130 [INFO][4359] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali204ab6c5f6b ContainerID="2c8e311842d896edc7784d5a9f83b0ca83ee35b9c0bbb03d3020d4abd9e4f340" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9584k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9584k-eth0" May 8 00:40:30.148551 containerd[1464]: 2025-05-08 00:40:30.134 [INFO][4359] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2c8e311842d896edc7784d5a9f83b0ca83ee35b9c0bbb03d3020d4abd9e4f340" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9584k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9584k-eth0" May 8 00:40:30.148551 containerd[1464]: 2025-05-08 00:40:30.134 [INFO][4359] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2c8e311842d896edc7784d5a9f83b0ca83ee35b9c0bbb03d3020d4abd9e4f340" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9584k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9584k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--9584k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8512146f-a4f2-4ad6-9a28-559c237b8730", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2c8e311842d896edc7784d5a9f83b0ca83ee35b9c0bbb03d3020d4abd9e4f340", Pod:"coredns-7db6d8ff4d-9584k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali204ab6c5f6b", MAC:"72:9f:b1:e3:11:52", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:30.148551 containerd[1464]: 2025-05-08 00:40:30.145 [INFO][4359] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2c8e311842d896edc7784d5a9f83b0ca83ee35b9c0bbb03d3020d4abd9e4f340" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9584k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9584k-eth0" May 8 00:40:30.168609 containerd[1464]: time="2025-05-08T00:40:30.168499853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:30.168609 containerd[1464]: time="2025-05-08T00:40:30.168561299Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:30.168609 containerd[1464]: time="2025-05-08T00:40:30.168591005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:30.168868 containerd[1464]: time="2025-05-08T00:40:30.168744823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:30.175829 systemd-networkd[1405]: calif8fd6e90ce7: Gained IPv6LL May 8 00:40:30.194820 systemd[1]: Started cri-containerd-2c8e311842d896edc7784d5a9f83b0ca83ee35b9c0bbb03d3020d4abd9e4f340.scope - libcontainer container 2c8e311842d896edc7784d5a9f83b0ca83ee35b9c0bbb03d3020d4abd9e4f340. 
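As with the first sandbox, containerd runs the new pod sandbox under a systemd transient scope named cri-containerd-<64-hex container ID>.scope, so the "Started cri-containerd-...scope" lines above can be mapped back to sandbox IDs mechanically. A sketch assuming only that naming convention:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Unit name format as in the "Started cri-containerd-<id>.scope" lines above.
        line := `systemd[1]: Started cri-containerd-2c8e311842d896edc7784d5a9f83b0ca83ee35b9c0bbb03d3020d4abd9e4f340.scope`
        re := regexp.MustCompile(`cri-containerd-([0-9a-f]{64})\.scope`)
        if m := re.FindStringSubmatch(line); m != nil {
            fmt.Println("container ID:", m[1])
        }
    }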
May 8 00:40:30.207430 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:40:30.231039 containerd[1464]: time="2025-05-08T00:40:30.230988336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9584k,Uid:8512146f-a4f2-4ad6-9a28-559c237b8730,Namespace:kube-system,Attempt:1,} returns sandbox id \"2c8e311842d896edc7784d5a9f83b0ca83ee35b9c0bbb03d3020d4abd9e4f340\"" May 8 00:40:30.232415 kubelet[2571]: E0508 00:40:30.232356 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:30.238340 containerd[1464]: time="2025-05-08T00:40:30.238301995Z" level=info msg="CreateContainer within sandbox \"2c8e311842d896edc7784d5a9f83b0ca83ee35b9c0bbb03d3020d4abd9e4f340\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:40:30.260396 containerd[1464]: time="2025-05-08T00:40:30.260334886Z" level=info msg="CreateContainer within sandbox \"2c8e311842d896edc7784d5a9f83b0ca83ee35b9c0bbb03d3020d4abd9e4f340\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3f9542ee009539d3db68318123a0860be844768bb49f90062cf99037190823b7\"" May 8 00:40:30.261019 containerd[1464]: time="2025-05-08T00:40:30.260983253Z" level=info msg="StartContainer for \"3f9542ee009539d3db68318123a0860be844768bb49f90062cf99037190823b7\"" May 8 00:40:30.300959 systemd[1]: Started cri-containerd-3f9542ee009539d3db68318123a0860be844768bb49f90062cf99037190823b7.scope - libcontainer container 3f9542ee009539d3db68318123a0860be844768bb49f90062cf99037190823b7. May 8 00:40:30.342445 containerd[1464]: time="2025-05-08T00:40:30.342390169Z" level=info msg="StartContainer for \"3f9542ee009539d3db68318123a0860be844768bb49f90062cf99037190823b7\" returns successfully" May 8 00:40:30.449414 kubelet[2571]: E0508 00:40:30.449181 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:30.464140 kubelet[2571]: I0508 00:40:30.464054 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9584k" podStartSLOduration=38.464030922 podStartE2EDuration="38.464030922s" podCreationTimestamp="2025-05-08 00:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:40:30.463669183 +0000 UTC m=+53.627932089" watchObservedRunningTime="2025-05-08 00:40:30.464030922 +0000 UTC m=+53.628293798" May 8 00:40:30.749049 systemd[1]: Started sshd@14-10.0.0.74:22-10.0.0.1:54556.service - OpenSSH per-connection server daemon (10.0.0.1:54556). May 8 00:40:30.812325 sshd[4478]: Accepted publickey for core from 10.0.0.1 port 54556 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:40:30.814532 sshd[4478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:30.820383 systemd-logind[1453]: New session 15 of user core. May 8 00:40:30.826874 systemd[1]: Started session-15.scope - Session 15 of User core. 
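The sshd and systemd-logind lines above form one complete login: publickey accepted for core from 10.0.0.1 port 54556, PAM opens the session, and session-15.scope starts. A small parser for auditing such accept lines, assuming only the format shown in this log:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Accept line copied from the sshd entry above for session 15.
        line := `Accepted publickey for core from 10.0.0.1 port 54556 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs`
        re := regexp.MustCompile(`Accepted publickey for (\S+) from (\S+) port (\d+) ssh2: (\S+) (\S+)`)
        if m := re.FindStringSubmatch(line); m != nil {
            fmt.Printf("user=%s from=%s:%s keytype=%s fingerprint=%s\n", m[1], m[2], m[3], m[4], m[5])
        }
    }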
May 8 00:40:30.929233 containerd[1464]: time="2025-05-08T00:40:30.929163906Z" level=info msg="StopPodSandbox for \"81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0\"" May 8 00:40:30.930572 containerd[1464]: time="2025-05-08T00:40:30.930544017Z" level=info msg="StopPodSandbox for \"5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5\"" May 8 00:40:30.931345 containerd[1464]: time="2025-05-08T00:40:30.931221879Z" level=info msg="StopPodSandbox for \"b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd\"" May 8 00:40:31.005130 sshd[4478]: pam_unix(sshd:session): session closed for user core May 8 00:40:31.013736 systemd[1]: sshd@14-10.0.0.74:22-10.0.0.1:54556.service: Deactivated successfully. May 8 00:40:31.020134 systemd[1]: session-15.scope: Deactivated successfully. May 8 00:40:31.021925 systemd-logind[1453]: Session 15 logged out. Waiting for processes to exit. May 8 00:40:31.023942 systemd-logind[1453]: Removed session 15. May 8 00:40:31.074265 containerd[1464]: 2025-05-08 00:40:31.004 [INFO][4529] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" May 8 00:40:31.074265 containerd[1464]: 2025-05-08 00:40:31.004 [INFO][4529] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" iface="eth0" netns="/var/run/netns/cni-ff5eff59-f55b-42dd-b620-2c2290ae73ea" May 8 00:40:31.074265 containerd[1464]: 2025-05-08 00:40:31.005 [INFO][4529] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" iface="eth0" netns="/var/run/netns/cni-ff5eff59-f55b-42dd-b620-2c2290ae73ea" May 8 00:40:31.074265 containerd[1464]: 2025-05-08 00:40:31.005 [INFO][4529] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" iface="eth0" netns="/var/run/netns/cni-ff5eff59-f55b-42dd-b620-2c2290ae73ea" May 8 00:40:31.074265 containerd[1464]: 2025-05-08 00:40:31.005 [INFO][4529] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" May 8 00:40:31.074265 containerd[1464]: 2025-05-08 00:40:31.005 [INFO][4529] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" May 8 00:40:31.074265 containerd[1464]: 2025-05-08 00:40:31.050 [INFO][4562] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" HandleID="k8s-pod-network.b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" Workload="localhost-k8s-calico--kube--controllers--7bb4ddbd59--kfxqq-eth0" May 8 00:40:31.074265 containerd[1464]: 2025-05-08 00:40:31.050 [INFO][4562] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:31.074265 containerd[1464]: 2025-05-08 00:40:31.053 [INFO][4562] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:31.074265 containerd[1464]: 2025-05-08 00:40:31.061 [WARNING][4562] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" HandleID="k8s-pod-network.b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" Workload="localhost-k8s-calico--kube--controllers--7bb4ddbd59--kfxqq-eth0" May 8 00:40:31.074265 containerd[1464]: 2025-05-08 00:40:31.063 [INFO][4562] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" HandleID="k8s-pod-network.b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" Workload="localhost-k8s-calico--kube--controllers--7bb4ddbd59--kfxqq-eth0" May 8 00:40:31.074265 containerd[1464]: 2025-05-08 00:40:31.065 [INFO][4562] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:31.074265 containerd[1464]: 2025-05-08 00:40:31.071 [INFO][4529] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" May 8 00:40:31.077681 containerd[1464]: time="2025-05-08T00:40:31.075976525Z" level=info msg="TearDown network for sandbox \"b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd\" successfully" May 8 00:40:31.077681 containerd[1464]: time="2025-05-08T00:40:31.076028312Z" level=info msg="StopPodSandbox for \"b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd\" returns successfully" May 8 00:40:31.079113 containerd[1464]: time="2025-05-08T00:40:31.078733230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bb4ddbd59-kfxqq,Uid:6f09368a-85bc-4ff8-a22a-5897ae61119a,Namespace:calico-system,Attempt:1,}" May 8 00:40:31.080855 systemd[1]: run-netns-cni\x2dff5eff59\x2df55b\x2d42dd\x2db620\x2d2c2290ae73ea.mount: Deactivated successfully. May 8 00:40:31.083534 containerd[1464]: 2025-05-08 00:40:31.031 [INFO][4544] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" May 8 00:40:31.083534 containerd[1464]: 2025-05-08 00:40:31.031 [INFO][4544] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" iface="eth0" netns="/var/run/netns/cni-3efdc56d-020c-2c65-f4d6-89e7bfc275f8" May 8 00:40:31.083534 containerd[1464]: 2025-05-08 00:40:31.031 [INFO][4544] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" iface="eth0" netns="/var/run/netns/cni-3efdc56d-020c-2c65-f4d6-89e7bfc275f8" May 8 00:40:31.083534 containerd[1464]: 2025-05-08 00:40:31.032 [INFO][4544] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" iface="eth0" netns="/var/run/netns/cni-3efdc56d-020c-2c65-f4d6-89e7bfc275f8" May 8 00:40:31.083534 containerd[1464]: 2025-05-08 00:40:31.032 [INFO][4544] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" May 8 00:40:31.083534 containerd[1464]: 2025-05-08 00:40:31.032 [INFO][4544] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" May 8 00:40:31.083534 containerd[1464]: 2025-05-08 00:40:31.065 [INFO][4571] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" HandleID="k8s-pod-network.5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" Workload="localhost-k8s-calico--apiserver--7f5d787db9--gdzhq-eth0" May 8 00:40:31.083534 containerd[1464]: 2025-05-08 00:40:31.065 [INFO][4571] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:31.083534 containerd[1464]: 2025-05-08 00:40:31.065 [INFO][4571] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:31.083534 containerd[1464]: 2025-05-08 00:40:31.072 [WARNING][4571] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" HandleID="k8s-pod-network.5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" Workload="localhost-k8s-calico--apiserver--7f5d787db9--gdzhq-eth0" May 8 00:40:31.083534 containerd[1464]: 2025-05-08 00:40:31.072 [INFO][4571] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" HandleID="k8s-pod-network.5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" Workload="localhost-k8s-calico--apiserver--7f5d787db9--gdzhq-eth0" May 8 00:40:31.083534 containerd[1464]: 2025-05-08 00:40:31.074 [INFO][4571] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:31.083534 containerd[1464]: 2025-05-08 00:40:31.077 [INFO][4544] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" May 8 00:40:31.085305 containerd[1464]: time="2025-05-08T00:40:31.083797746Z" level=info msg="TearDown network for sandbox \"5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5\" successfully" May 8 00:40:31.085305 containerd[1464]: time="2025-05-08T00:40:31.083831690Z" level=info msg="StopPodSandbox for \"5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5\" returns successfully" May 8 00:40:31.088412 containerd[1464]: time="2025-05-08T00:40:31.086952738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f5d787db9-gdzhq,Uid:9d13fad7-9b52-4242-8c83-7d9a65d72e32,Namespace:calico-apiserver,Attempt:1,}" May 8 00:40:31.087633 systemd[1]: run-netns-cni\x2d3efdc56d\x2d020c\x2d2c65\x2df4d6\x2d89e7bfc275f8.mount: Deactivated successfully. May 8 00:40:31.093564 containerd[1464]: 2025-05-08 00:40:31.037 [INFO][4545] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" May 8 00:40:31.093564 containerd[1464]: 2025-05-08 00:40:31.038 [INFO][4545] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" iface="eth0" netns="/var/run/netns/cni-346db46f-aed3-5200-de16-849a857bfc4e" May 8 00:40:31.093564 containerd[1464]: 2025-05-08 00:40:31.039 [INFO][4545] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" iface="eth0" netns="/var/run/netns/cni-346db46f-aed3-5200-de16-849a857bfc4e" May 8 00:40:31.093564 containerd[1464]: 2025-05-08 00:40:31.039 [INFO][4545] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" iface="eth0" netns="/var/run/netns/cni-346db46f-aed3-5200-de16-849a857bfc4e" May 8 00:40:31.093564 containerd[1464]: 2025-05-08 00:40:31.039 [INFO][4545] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" May 8 00:40:31.093564 containerd[1464]: 2025-05-08 00:40:31.039 [INFO][4545] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" May 8 00:40:31.093564 containerd[1464]: 2025-05-08 00:40:31.078 [INFO][4577] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" HandleID="k8s-pod-network.81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" Workload="localhost-k8s-coredns--7db6d8ff4d--pq6w8-eth0" May 8 00:40:31.093564 containerd[1464]: 2025-05-08 00:40:31.079 [INFO][4577] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:31.093564 containerd[1464]: 2025-05-08 00:40:31.079 [INFO][4577] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:31.093564 containerd[1464]: 2025-05-08 00:40:31.084 [WARNING][4577] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" HandleID="k8s-pod-network.81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" Workload="localhost-k8s-coredns--7db6d8ff4d--pq6w8-eth0" May 8 00:40:31.093564 containerd[1464]: 2025-05-08 00:40:31.084 [INFO][4577] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" HandleID="k8s-pod-network.81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" Workload="localhost-k8s-coredns--7db6d8ff4d--pq6w8-eth0" May 8 00:40:31.093564 containerd[1464]: 2025-05-08 00:40:31.088 [INFO][4577] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:31.093564 containerd[1464]: 2025-05-08 00:40:31.091 [INFO][4545] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" May 8 00:40:31.096418 containerd[1464]: time="2025-05-08T00:40:31.096376628Z" level=info msg="TearDown network for sandbox \"81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0\" successfully" May 8 00:40:31.096418 containerd[1464]: time="2025-05-08T00:40:31.096414399Z" level=info msg="StopPodSandbox for \"81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0\" returns successfully" May 8 00:40:31.096932 kubelet[2571]: E0508 00:40:31.096905 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:31.097385 containerd[1464]: time="2025-05-08T00:40:31.097290543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pq6w8,Uid:5e801bca-4ca7-4f8e-baa8-230995c21235,Namespace:kube-system,Attempt:1,}" May 8 00:40:31.097548 systemd[1]: run-netns-cni\x2d346db46f\x2daed3\x2d5200\x2dde16\x2d849a857bfc4e.mount: Deactivated successfully. May 8 00:40:31.199923 systemd-networkd[1405]: cali204ab6c5f6b: Gained IPv6LL May 8 00:40:31.451279 kubelet[2571]: E0508 00:40:31.451124 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:31.861825 systemd-networkd[1405]: cali353f4bde467: Link UP May 8 00:40:31.864477 systemd-networkd[1405]: cali353f4bde467: Gained carrier May 8 00:40:31.888632 containerd[1464]: 2025-05-08 00:40:31.730 [INFO][4600] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7bb4ddbd59--kfxqq-eth0 calico-kube-controllers-7bb4ddbd59- calico-system 6f09368a-85bc-4ff8-a22a-5897ae61119a 970 0 2025-05-08 00:40:01 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7bb4ddbd59 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7bb4ddbd59-kfxqq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali353f4bde467 [] []}} ContainerID="63de5e79a23b3965b060b9d4c4bbcb22d26147450f4a667b3fa836e381acbd9d" Namespace="calico-system" Pod="calico-kube-controllers-7bb4ddbd59-kfxqq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bb4ddbd59--kfxqq-" May 8 00:40:31.888632 containerd[1464]: 2025-05-08 00:40:31.731 [INFO][4600] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="63de5e79a23b3965b060b9d4c4bbcb22d26147450f4a667b3fa836e381acbd9d" Namespace="calico-system" Pod="calico-kube-controllers-7bb4ddbd59-kfxqq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bb4ddbd59--kfxqq-eth0" May 8 00:40:31.888632 containerd[1464]: 2025-05-08 00:40:31.771 [INFO][4643] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63de5e79a23b3965b060b9d4c4bbcb22d26147450f4a667b3fa836e381acbd9d" HandleID="k8s-pod-network.63de5e79a23b3965b060b9d4c4bbcb22d26147450f4a667b3fa836e381acbd9d" Workload="localhost-k8s-calico--kube--controllers--7bb4ddbd59--kfxqq-eth0" May 8 00:40:31.888632 containerd[1464]: 2025-05-08 00:40:31.781 [INFO][4643] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="63de5e79a23b3965b060b9d4c4bbcb22d26147450f4a667b3fa836e381acbd9d" HandleID="k8s-pod-network.63de5e79a23b3965b060b9d4c4bbcb22d26147450f4a667b3fa836e381acbd9d" Workload="localhost-k8s-calico--kube--controllers--7bb4ddbd59--kfxqq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000374b00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7bb4ddbd59-kfxqq", "timestamp":"2025-05-08 00:40:31.771465718 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:40:31.888632 containerd[1464]: 2025-05-08 00:40:31.781 [INFO][4643] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:31.888632 containerd[1464]: 2025-05-08 00:40:31.781 [INFO][4643] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:31.888632 containerd[1464]: 2025-05-08 00:40:31.781 [INFO][4643] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:40:31.888632 containerd[1464]: 2025-05-08 00:40:31.787 [INFO][4643] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.63de5e79a23b3965b060b9d4c4bbcb22d26147450f4a667b3fa836e381acbd9d" host="localhost" May 8 00:40:31.888632 containerd[1464]: 2025-05-08 00:40:31.798 [INFO][4643] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:40:31.888632 containerd[1464]: 2025-05-08 00:40:31.819 [INFO][4643] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:40:31.888632 containerd[1464]: 2025-05-08 00:40:31.824 [INFO][4643] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:40:31.888632 containerd[1464]: 2025-05-08 00:40:31.831 [INFO][4643] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:40:31.888632 containerd[1464]: 2025-05-08 00:40:31.831 [INFO][4643] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.63de5e79a23b3965b060b9d4c4bbcb22d26147450f4a667b3fa836e381acbd9d" host="localhost" May 8 00:40:31.888632 containerd[1464]: 2025-05-08 00:40:31.833 [INFO][4643] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.63de5e79a23b3965b060b9d4c4bbcb22d26147450f4a667b3fa836e381acbd9d May 8 00:40:31.888632 containerd[1464]: 2025-05-08 00:40:31.838 [INFO][4643] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.63de5e79a23b3965b060b9d4c4bbcb22d26147450f4a667b3fa836e381acbd9d" host="localhost" May 8 00:40:31.888632 containerd[1464]: 2025-05-08 00:40:31.847 [INFO][4643] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.63de5e79a23b3965b060b9d4c4bbcb22d26147450f4a667b3fa836e381acbd9d" host="localhost" May 8 00:40:31.888632 containerd[1464]: 2025-05-08 00:40:31.847 [INFO][4643] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.63de5e79a23b3965b060b9d4c4bbcb22d26147450f4a667b3fa836e381acbd9d" host="localhost" May 8 00:40:31.888632 containerd[1464]: 2025-05-08 00:40:31.847 [INFO][4643] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
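Every IPAM trace in this excerpt brackets its block reads and writes with "About to acquire / Acquired / Released host-wide IPAM lock": one per-host lock serializes concurrent CNI ADDs so no two pods can claim the same ordinal. A toy version of that bracket (pod names taken from this log; the counter is illustrative, not Calico's real bookkeeping):

    package main

    import (
        "fmt"
        "sync"
    )

    var hostWideIPAMLock sync.Mutex

    var nextOrdinal = 130

    func autoAssign(pod string) {
        hostWideIPAMLock.Lock()         // "Acquired host-wide IPAM lock."
        defer hostWideIPAMLock.Unlock() // "Released host-wide IPAM lock."
        fmt.Printf("%s -> 192.168.88.%d\n", pod, nextOrdinal)
        nextOrdinal++
    }

    func main() {
        var wg sync.WaitGroup
        for _, pod := range []string{"csi-node-driver-gzqv5", "coredns-7db6d8ff4d-9584k", "calico-kube-controllers-7bb4ddbd59-kfxqq"} {
            wg.Add(1)
            go func(p string) {
                defer wg.Done()
                autoAssign(p)
            }(pod)
        }
        wg.Wait() // ordering is scheduler-dependent; the lock only guarantees uniqueness
    }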
May 8 00:40:31.888632 containerd[1464]: 2025-05-08 00:40:31.847 [INFO][4643] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="63de5e79a23b3965b060b9d4c4bbcb22d26147450f4a667b3fa836e381acbd9d" HandleID="k8s-pod-network.63de5e79a23b3965b060b9d4c4bbcb22d26147450f4a667b3fa836e381acbd9d" Workload="localhost-k8s-calico--kube--controllers--7bb4ddbd59--kfxqq-eth0" May 8 00:40:31.889530 containerd[1464]: 2025-05-08 00:40:31.851 [INFO][4600] cni-plugin/k8s.go 386: Populated endpoint ContainerID="63de5e79a23b3965b060b9d4c4bbcb22d26147450f4a667b3fa836e381acbd9d" Namespace="calico-system" Pod="calico-kube-controllers-7bb4ddbd59-kfxqq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bb4ddbd59--kfxqq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7bb4ddbd59--kfxqq-eth0", GenerateName:"calico-kube-controllers-7bb4ddbd59-", Namespace:"calico-system", SelfLink:"", UID:"6f09368a-85bc-4ff8-a22a-5897ae61119a", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bb4ddbd59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7bb4ddbd59-kfxqq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali353f4bde467", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:31.889530 containerd[1464]: 2025-05-08 00:40:31.854 [INFO][4600] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="63de5e79a23b3965b060b9d4c4bbcb22d26147450f4a667b3fa836e381acbd9d" Namespace="calico-system" Pod="calico-kube-controllers-7bb4ddbd59-kfxqq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bb4ddbd59--kfxqq-eth0" May 8 00:40:31.889530 containerd[1464]: 2025-05-08 00:40:31.855 [INFO][4600] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali353f4bde467 ContainerID="63de5e79a23b3965b060b9d4c4bbcb22d26147450f4a667b3fa836e381acbd9d" Namespace="calico-system" Pod="calico-kube-controllers-7bb4ddbd59-kfxqq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bb4ddbd59--kfxqq-eth0" May 8 00:40:31.889530 containerd[1464]: 2025-05-08 00:40:31.867 [INFO][4600] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63de5e79a23b3965b060b9d4c4bbcb22d26147450f4a667b3fa836e381acbd9d" Namespace="calico-system" Pod="calico-kube-controllers-7bb4ddbd59-kfxqq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bb4ddbd59--kfxqq-eth0" May 8 00:40:31.889530 containerd[1464]: 2025-05-08 00:40:31.867 [INFO][4600] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="63de5e79a23b3965b060b9d4c4bbcb22d26147450f4a667b3fa836e381acbd9d" Namespace="calico-system" Pod="calico-kube-controllers-7bb4ddbd59-kfxqq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bb4ddbd59--kfxqq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7bb4ddbd59--kfxqq-eth0", GenerateName:"calico-kube-controllers-7bb4ddbd59-", Namespace:"calico-system", SelfLink:"", UID:"6f09368a-85bc-4ff8-a22a-5897ae61119a", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bb4ddbd59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"63de5e79a23b3965b060b9d4c4bbcb22d26147450f4a667b3fa836e381acbd9d", Pod:"calico-kube-controllers-7bb4ddbd59-kfxqq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali353f4bde467", MAC:"de:eb:a3:c3:3d:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:31.889530 containerd[1464]: 2025-05-08 00:40:31.882 [INFO][4600] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="63de5e79a23b3965b060b9d4c4bbcb22d26147450f4a667b3fa836e381acbd9d" Namespace="calico-system" Pod="calico-kube-controllers-7bb4ddbd59-kfxqq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bb4ddbd59--kfxqq-eth0" May 8 00:40:31.910883 systemd-networkd[1405]: calif1e5a4f5f82: Link UP May 8 00:40:31.911491 systemd-networkd[1405]: calif1e5a4f5f82: Gained carrier May 8 00:40:31.929701 containerd[1464]: 2025-05-08 00:40:31.739 [INFO][4607] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--pq6w8-eth0 coredns-7db6d8ff4d- kube-system 5e801bca-4ca7-4f8e-baa8-230995c21235 972 0 2025-05-08 00:39:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-pq6w8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif1e5a4f5f82 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a5b8cff7ad138f7af16c96b35d3f345417713668487dd81fb6ecb879d927861b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pq6w8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pq6w8-" May 8 00:40:31.929701 containerd[1464]: 2025-05-08 00:40:31.739 [INFO][4607] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a5b8cff7ad138f7af16c96b35d3f345417713668487dd81fb6ecb879d927861b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pq6w8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pq6w8-eth0" May 8 
00:40:31.929701 containerd[1464]: 2025-05-08 00:40:31.799 [INFO][4649] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a5b8cff7ad138f7af16c96b35d3f345417713668487dd81fb6ecb879d927861b" HandleID="k8s-pod-network.a5b8cff7ad138f7af16c96b35d3f345417713668487dd81fb6ecb879d927861b" Workload="localhost-k8s-coredns--7db6d8ff4d--pq6w8-eth0" May 8 00:40:31.929701 containerd[1464]: 2025-05-08 00:40:31.833 [INFO][4649] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a5b8cff7ad138f7af16c96b35d3f345417713668487dd81fb6ecb879d927861b" HandleID="k8s-pod-network.a5b8cff7ad138f7af16c96b35d3f345417713668487dd81fb6ecb879d927861b" Workload="localhost-k8s-coredns--7db6d8ff4d--pq6w8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027d9b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-pq6w8", "timestamp":"2025-05-08 00:40:31.799764047 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:40:31.929701 containerd[1464]: 2025-05-08 00:40:31.833 [INFO][4649] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:31.929701 containerd[1464]: 2025-05-08 00:40:31.848 [INFO][4649] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:31.929701 containerd[1464]: 2025-05-08 00:40:31.848 [INFO][4649] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:40:31.929701 containerd[1464]: 2025-05-08 00:40:31.853 [INFO][4649] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a5b8cff7ad138f7af16c96b35d3f345417713668487dd81fb6ecb879d927861b" host="localhost" May 8 00:40:31.929701 containerd[1464]: 2025-05-08 00:40:31.858 [INFO][4649] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:40:31.929701 containerd[1464]: 2025-05-08 00:40:31.864 [INFO][4649] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:40:31.929701 containerd[1464]: 2025-05-08 00:40:31.866 [INFO][4649] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:40:31.929701 containerd[1464]: 2025-05-08 00:40:31.871 [INFO][4649] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:40:31.929701 containerd[1464]: 2025-05-08 00:40:31.871 [INFO][4649] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a5b8cff7ad138f7af16c96b35d3f345417713668487dd81fb6ecb879d927861b" host="localhost" May 8 00:40:31.929701 containerd[1464]: 2025-05-08 00:40:31.883 [INFO][4649] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a5b8cff7ad138f7af16c96b35d3f345417713668487dd81fb6ecb879d927861b May 8 00:40:31.929701 containerd[1464]: 2025-05-08 00:40:31.888 [INFO][4649] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a5b8cff7ad138f7af16c96b35d3f345417713668487dd81fb6ecb879d927861b" host="localhost" May 8 00:40:31.929701 containerd[1464]: 2025-05-08 00:40:31.894 [INFO][4649] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.a5b8cff7ad138f7af16c96b35d3f345417713668487dd81fb6ecb879d927861b" host="localhost" May 8 00:40:31.929701 containerd[1464]: 2025-05-08 
00:40:31.895 [INFO][4649] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a5b8cff7ad138f7af16c96b35d3f345417713668487dd81fb6ecb879d927861b" host="localhost" May 8 00:40:31.929701 containerd[1464]: 2025-05-08 00:40:31.895 [INFO][4649] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:31.929701 containerd[1464]: 2025-05-08 00:40:31.895 [INFO][4649] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a5b8cff7ad138f7af16c96b35d3f345417713668487dd81fb6ecb879d927861b" HandleID="k8s-pod-network.a5b8cff7ad138f7af16c96b35d3f345417713668487dd81fb6ecb879d927861b" Workload="localhost-k8s-coredns--7db6d8ff4d--pq6w8-eth0" May 8 00:40:31.931494 containerd[1464]: 2025-05-08 00:40:31.905 [INFO][4607] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a5b8cff7ad138f7af16c96b35d3f345417713668487dd81fb6ecb879d927861b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pq6w8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pq6w8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--pq6w8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5e801bca-4ca7-4f8e-baa8-230995c21235", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-pq6w8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif1e5a4f5f82", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:31.931494 containerd[1464]: 2025-05-08 00:40:31.905 [INFO][4607] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a5b8cff7ad138f7af16c96b35d3f345417713668487dd81fb6ecb879d927861b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pq6w8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pq6w8-eth0" May 8 00:40:31.931494 containerd[1464]: 2025-05-08 00:40:31.905 [INFO][4607] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif1e5a4f5f82 ContainerID="a5b8cff7ad138f7af16c96b35d3f345417713668487dd81fb6ecb879d927861b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pq6w8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pq6w8-eth0" May 8 00:40:31.931494 containerd[1464]: 2025-05-08 00:40:31.911 [INFO][4607] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a5b8cff7ad138f7af16c96b35d3f345417713668487dd81fb6ecb879d927861b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pq6w8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pq6w8-eth0" May 8 00:40:31.931494 containerd[1464]: 2025-05-08 00:40:31.911 [INFO][4607] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a5b8cff7ad138f7af16c96b35d3f345417713668487dd81fb6ecb879d927861b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pq6w8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pq6w8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--pq6w8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5e801bca-4ca7-4f8e-baa8-230995c21235", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a5b8cff7ad138f7af16c96b35d3f345417713668487dd81fb6ecb879d927861b", Pod:"coredns-7db6d8ff4d-pq6w8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif1e5a4f5f82", MAC:"26:20:a4:55:fb:91", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:31.931494 containerd[1464]: 2025-05-08 00:40:31.923 [INFO][4607] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a5b8cff7ad138f7af16c96b35d3f345417713668487dd81fb6ecb879d927861b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pq6w8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pq6w8-eth0" May 8 00:40:31.952877 containerd[1464]: time="2025-05-08T00:40:31.952511789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:31.953449 containerd[1464]: time="2025-05-08T00:40:31.952599936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:31.953449 containerd[1464]: time="2025-05-08T00:40:31.952649198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:31.954788 containerd[1464]: time="2025-05-08T00:40:31.954465948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:31.981499 systemd-networkd[1405]: calia9fae3c1e84: Link UP May 8 00:40:31.983600 containerd[1464]: time="2025-05-08T00:40:31.982965906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:31.983600 containerd[1464]: time="2025-05-08T00:40:31.983030196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:31.983600 containerd[1464]: time="2025-05-08T00:40:31.983062707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:31.983600 containerd[1464]: time="2025-05-08T00:40:31.983223328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:31.986861 systemd-networkd[1405]: calia9fae3c1e84: Gained carrier May 8 00:40:32.007394 containerd[1464]: 2025-05-08 00:40:31.773 [INFO][4618] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7f5d787db9--gdzhq-eth0 calico-apiserver-7f5d787db9- calico-apiserver 9d13fad7-9b52-4242-8c83-7d9a65d72e32 971 0 2025-05-08 00:40:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f5d787db9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7f5d787db9-gdzhq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia9fae3c1e84 [] []}} ContainerID="d2c1911686a9357af7c7686212bd6baa413e35979eed15ab7c27c295c3bfcb21" Namespace="calico-apiserver" Pod="calico-apiserver-7f5d787db9-gdzhq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5d787db9--gdzhq-" May 8 00:40:32.007394 containerd[1464]: 2025-05-08 00:40:31.775 [INFO][4618] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d2c1911686a9357af7c7686212bd6baa413e35979eed15ab7c27c295c3bfcb21" Namespace="calico-apiserver" Pod="calico-apiserver-7f5d787db9-gdzhq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5d787db9--gdzhq-eth0" May 8 00:40:32.007394 containerd[1464]: 2025-05-08 00:40:31.826 [INFO][4660] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d2c1911686a9357af7c7686212bd6baa413e35979eed15ab7c27c295c3bfcb21" HandleID="k8s-pod-network.d2c1911686a9357af7c7686212bd6baa413e35979eed15ab7c27c295c3bfcb21" Workload="localhost-k8s-calico--apiserver--7f5d787db9--gdzhq-eth0" May 8 00:40:32.007394 containerd[1464]: 2025-05-08 00:40:31.837 [INFO][4660] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d2c1911686a9357af7c7686212bd6baa413e35979eed15ab7c27c295c3bfcb21" HandleID="k8s-pod-network.d2c1911686a9357af7c7686212bd6baa413e35979eed15ab7c27c295c3bfcb21" Workload="localhost-k8s-calico--apiserver--7f5d787db9--gdzhq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042d480), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7f5d787db9-gdzhq", "timestamp":"2025-05-08 00:40:31.826778556 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:40:32.007394 containerd[1464]: 2025-05-08 00:40:31.837 [INFO][4660] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:32.007394 containerd[1464]: 2025-05-08 00:40:31.895 [INFO][4660] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:32.007394 containerd[1464]: 2025-05-08 00:40:31.895 [INFO][4660] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:40:32.007394 containerd[1464]: 2025-05-08 00:40:31.902 [INFO][4660] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d2c1911686a9357af7c7686212bd6baa413e35979eed15ab7c27c295c3bfcb21" host="localhost" May 8 00:40:32.007394 containerd[1464]: 2025-05-08 00:40:31.908 [INFO][4660] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:40:32.007394 containerd[1464]: 2025-05-08 00:40:31.920 [INFO][4660] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:40:32.007394 containerd[1464]: 2025-05-08 00:40:31.923 [INFO][4660] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:40:32.007394 containerd[1464]: 2025-05-08 00:40:31.930 [INFO][4660] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:40:32.007394 containerd[1464]: 2025-05-08 00:40:31.930 [INFO][4660] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d2c1911686a9357af7c7686212bd6baa413e35979eed15ab7c27c295c3bfcb21" host="localhost" May 8 00:40:32.007394 containerd[1464]: 2025-05-08 00:40:31.933 [INFO][4660] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d2c1911686a9357af7c7686212bd6baa413e35979eed15ab7c27c295c3bfcb21 May 8 00:40:32.007394 containerd[1464]: 2025-05-08 00:40:31.940 [INFO][4660] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d2c1911686a9357af7c7686212bd6baa413e35979eed15ab7c27c295c3bfcb21" host="localhost" May 8 00:40:32.007394 containerd[1464]: 2025-05-08 00:40:31.950 [INFO][4660] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.d2c1911686a9357af7c7686212bd6baa413e35979eed15ab7c27c295c3bfcb21" host="localhost" May 8 00:40:32.007394 containerd[1464]: 2025-05-08 00:40:31.950 [INFO][4660] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d2c1911686a9357af7c7686212bd6baa413e35979eed15ab7c27c295c3bfcb21" host="localhost" May 8 00:40:32.007394 containerd[1464]: 2025-05-08 00:40:31.950 [INFO][4660] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
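The two IPAM cycles above follow the same shape: acquire the host-wide lock, confirm the host's affinity to the 192.168.88.128/26 block, claim the next free ordinal (.133 for the coredns pod, then .134 for the apiserver pod), write the block back, and release the lock. Below is a minimal Go sketch of that block-affinity pattern; the types, the in-memory bitmap, and the handle strings are simplified illustrative stand-ins, not Calico's actual datastore API.

package main

import (
	"fmt"
	"net"
	"sync"
)

// block models one /26 IPAM block with a simple allocation bitmap.
// An illustrative stand-in, not Calico's block schema.
type block struct {
	cidr      *net.IPNet
	allocated [64]bool // 2^(32-26) = 64 addresses in a /26
	handles   [64]string
}

var hostIPAMLock sync.Mutex // stands in for the host-wide IPAM lock in the log

// autoAssign claims the next free address in the block for a handle,
// mirroring the lock/claim/write sequence the log records.
func autoAssign(b *block, handle string) (net.IP, error) {
	hostIPAMLock.Lock()         // "Acquired host-wide IPAM lock."
	defer hostIPAMLock.Unlock() // "Released host-wide IPAM lock."

	base := b.cidr.IP.To4()
	for i := 0; i < len(b.allocated); i++ {
		if b.allocated[i] {
			continue
		}
		b.allocated[i] = true // claim the ordinal...
		b.handles[i] = handle // ...and record the handle for later release
		ip := net.IPv4(base[0], base[1], base[2], base[3]+byte(i))
		return ip, nil // Calico then writes the block back to claim the IP
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	b := &block{cidr: cidr}
	// Pretend .128-.132 are already taken, as the log implies (.133 is next).
	for i := 0; i < 5; i++ {
		b.allocated[i] = true
	}
	for _, h := range []string{"k8s-pod-network.a5b8cf...", "k8s-pod-network.d2c191..."} {
		ip, _ := autoAssign(b, h)
		fmt.Println(h, "->", ip) // prints 192.168.88.133, then .134
	}
}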
May 8 00:40:32.007394 containerd[1464]: 2025-05-08 00:40:31.950 [INFO][4660] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d2c1911686a9357af7c7686212bd6baa413e35979eed15ab7c27c295c3bfcb21" HandleID="k8s-pod-network.d2c1911686a9357af7c7686212bd6baa413e35979eed15ab7c27c295c3bfcb21" Workload="localhost-k8s-calico--apiserver--7f5d787db9--gdzhq-eth0" May 8 00:40:32.008396 containerd[1464]: 2025-05-08 00:40:31.971 [INFO][4618] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d2c1911686a9357af7c7686212bd6baa413e35979eed15ab7c27c295c3bfcb21" Namespace="calico-apiserver" Pod="calico-apiserver-7f5d787db9-gdzhq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5d787db9--gdzhq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f5d787db9--gdzhq-eth0", GenerateName:"calico-apiserver-7f5d787db9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9d13fad7-9b52-4242-8c83-7d9a65d72e32", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f5d787db9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7f5d787db9-gdzhq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia9fae3c1e84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:32.008396 containerd[1464]: 2025-05-08 00:40:31.971 [INFO][4618] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d2c1911686a9357af7c7686212bd6baa413e35979eed15ab7c27c295c3bfcb21" Namespace="calico-apiserver" Pod="calico-apiserver-7f5d787db9-gdzhq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5d787db9--gdzhq-eth0" May 8 00:40:32.008396 containerd[1464]: 2025-05-08 00:40:31.971 [INFO][4618] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia9fae3c1e84 ContainerID="d2c1911686a9357af7c7686212bd6baa413e35979eed15ab7c27c295c3bfcb21" Namespace="calico-apiserver" Pod="calico-apiserver-7f5d787db9-gdzhq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5d787db9--gdzhq-eth0" May 8 00:40:32.008396 containerd[1464]: 2025-05-08 00:40:31.987 [INFO][4618] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d2c1911686a9357af7c7686212bd6baa413e35979eed15ab7c27c295c3bfcb21" Namespace="calico-apiserver" Pod="calico-apiserver-7f5d787db9-gdzhq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5d787db9--gdzhq-eth0" May 8 00:40:32.008396 containerd[1464]: 2025-05-08 00:40:31.988 [INFO][4618] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d2c1911686a9357af7c7686212bd6baa413e35979eed15ab7c27c295c3bfcb21" 
Namespace="calico-apiserver" Pod="calico-apiserver-7f5d787db9-gdzhq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5d787db9--gdzhq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f5d787db9--gdzhq-eth0", GenerateName:"calico-apiserver-7f5d787db9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9d13fad7-9b52-4242-8c83-7d9a65d72e32", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f5d787db9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d2c1911686a9357af7c7686212bd6baa413e35979eed15ab7c27c295c3bfcb21", Pod:"calico-apiserver-7f5d787db9-gdzhq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia9fae3c1e84", MAC:"6e:34:86:1e:e1:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:32.008396 containerd[1464]: 2025-05-08 00:40:32.001 [INFO][4618] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d2c1911686a9357af7c7686212bd6baa413e35979eed15ab7c27c295c3bfcb21" Namespace="calico-apiserver" Pod="calico-apiserver-7f5d787db9-gdzhq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5d787db9--gdzhq-eth0" May 8 00:40:32.012111 systemd[1]: Started cri-containerd-63de5e79a23b3965b060b9d4c4bbcb22d26147450f4a667b3fa836e381acbd9d.scope - libcontainer container 63de5e79a23b3965b060b9d4c4bbcb22d26147450f4a667b3fa836e381acbd9d. May 8 00:40:32.018482 systemd[1]: Started cri-containerd-a5b8cff7ad138f7af16c96b35d3f345417713668487dd81fb6ecb879d927861b.scope - libcontainer container a5b8cff7ad138f7af16c96b35d3f345417713668487dd81fb6ecb879d927861b. May 8 00:40:32.036583 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:40:32.040321 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:40:32.048014 containerd[1464]: time="2025-05-08T00:40:32.047811333Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:32.048014 containerd[1464]: time="2025-05-08T00:40:32.047866356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:32.048014 containerd[1464]: time="2025-05-08T00:40:32.047879731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:32.048014 containerd[1464]: time="2025-05-08T00:40:32.047952037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:32.074326 containerd[1464]: time="2025-05-08T00:40:32.074190597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bb4ddbd59-kfxqq,Uid:6f09368a-85bc-4ff8-a22a-5897ae61119a,Namespace:calico-system,Attempt:1,} returns sandbox id \"63de5e79a23b3965b060b9d4c4bbcb22d26147450f4a667b3fa836e381acbd9d\"" May 8 00:40:32.077213 systemd[1]: Started cri-containerd-d2c1911686a9357af7c7686212bd6baa413e35979eed15ab7c27c295c3bfcb21.scope - libcontainer container d2c1911686a9357af7c7686212bd6baa413e35979eed15ab7c27c295c3bfcb21. May 8 00:40:32.080575 containerd[1464]: time="2025-05-08T00:40:32.080391436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pq6w8,Uid:5e801bca-4ca7-4f8e-baa8-230995c21235,Namespace:kube-system,Attempt:1,} returns sandbox id \"a5b8cff7ad138f7af16c96b35d3f345417713668487dd81fb6ecb879d927861b\"" May 8 00:40:32.082302 kubelet[2571]: E0508 00:40:32.082276 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:32.090446 containerd[1464]: time="2025-05-08T00:40:32.090321845Z" level=info msg="CreateContainer within sandbox \"a5b8cff7ad138f7af16c96b35d3f345417713668487dd81fb6ecb879d927861b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:40:32.097551 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:40:32.117734 containerd[1464]: time="2025-05-08T00:40:32.114412113Z" level=info msg="CreateContainer within sandbox \"a5b8cff7ad138f7af16c96b35d3f345417713668487dd81fb6ecb879d927861b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5692e737fd704378e93b7d522bb46d5ccff08c5ad4caf90efffa053cc3534c5a\"" May 8 00:40:32.117734 containerd[1464]: time="2025-05-08T00:40:32.115934981Z" level=info msg="StartContainer for \"5692e737fd704378e93b7d522bb46d5ccff08c5ad4caf90efffa053cc3534c5a\"" May 8 00:40:32.130153 containerd[1464]: time="2025-05-08T00:40:32.130048553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f5d787db9-gdzhq,Uid:9d13fad7-9b52-4242-8c83-7d9a65d72e32,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d2c1911686a9357af7c7686212bd6baa413e35979eed15ab7c27c295c3bfcb21\"" May 8 00:40:32.158035 systemd[1]: Started cri-containerd-5692e737fd704378e93b7d522bb46d5ccff08c5ad4caf90efffa053cc3534c5a.scope - libcontainer container 5692e737fd704378e93b7d522bb46d5ccff08c5ad4caf90efffa053cc3534c5a. 
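The recurring kubelet error above ("Nameserver limits exceeded") fires because glibc-style resolvers honor at most three nameserver entries, so kubelet truncates the resolv.conf it hands to pods to the first three and logs the rest as omitted; the applied line in this log is exactly 1.1.1.1 1.0.0.1 8.8.8.8. A rough Go sketch of that truncation follows, assuming a plain resolv.conf parser; the constant name and the fourth nameserver in the example are made up for illustration, not taken from this host.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// maxNameservers mirrors the classic resolver limit kubelet enforces;
// the name is illustrative, not kubelet's identifier.
const maxNameservers = 3

// applyNameserverLimit keeps the first three nameservers from a
// resolv.conf body and reports which entries were dropped.
func applyNameserverLimit(resolvConf string) (kept, omitted []string) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) == 2 && fields[0] == "nameserver" {
			if len(kept) < maxNameservers {
				kept = append(kept, fields[1])
			} else {
				omitted = append(omitted, fields[1])
			}
		}
	}
	return kept, omitted
}

func main() {
	// Hypothetical resolv.conf with one nameserver over the limit.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	kept, omitted := applyNameserverLimit(conf)
	if len(omitted) > 0 {
		// Same situation the kubelet entries report.
		fmt.Printf("Nameserver limits exceeded; applied: %s (omitted %d)\n",
			strings.Join(kept, " "), len(omitted))
	}
}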
May 8 00:40:32.191910 containerd[1464]: time="2025-05-08T00:40:32.191852960Z" level=info msg="StartContainer for \"5692e737fd704378e93b7d522bb46d5ccff08c5ad4caf90efffa053cc3534c5a\" returns successfully" May 8 00:40:32.456159 kubelet[2571]: E0508 00:40:32.456028 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:32.460995 kubelet[2571]: E0508 00:40:32.460912 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:32.471385 kubelet[2571]: I0508 00:40:32.470828 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-pq6w8" podStartSLOduration=40.470802482 podStartE2EDuration="40.470802482s" podCreationTimestamp="2025-05-08 00:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:40:32.468329571 +0000 UTC m=+55.632592447" watchObservedRunningTime="2025-05-08 00:40:32.470802482 +0000 UTC m=+55.635065358" May 8 00:40:32.487278 containerd[1464]: time="2025-05-08T00:40:32.487099823Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:32.489349 containerd[1464]: time="2025-05-08T00:40:32.488392679Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 8 00:40:32.490797 containerd[1464]: time="2025-05-08T00:40:32.490713856Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:32.494428 containerd[1464]: time="2025-05-08T00:40:32.494370610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:32.495289 containerd[1464]: time="2025-05-08T00:40:32.495247656Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 4.091292695s" May 8 00:40:32.495289 containerd[1464]: time="2025-05-08T00:40:32.495285537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 8 00:40:32.496942 containerd[1464]: time="2025-05-08T00:40:32.496827462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 8 00:40:32.497894 containerd[1464]: time="2025-05-08T00:40:32.497861333Z" level=info msg="CreateContainer within sandbox \"7b42086e45417877d612d2e1e05085eaa7255e8818e0ee602200b5d87487e6f9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:40:32.800394 containerd[1464]: time="2025-05-08T00:40:32.800329199Z" level=info msg="CreateContainer within sandbox \"7b42086e45417877d612d2e1e05085eaa7255e8818e0ee602200b5d87487e6f9\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d345dc62d7428215c1befbb71a0340dcc1d4558f36bcda6831003437b5221349\"" May 8 00:40:32.801012 containerd[1464]: time="2025-05-08T00:40:32.800964541Z" level=info msg="StartContainer for \"d345dc62d7428215c1befbb71a0340dcc1d4558f36bcda6831003437b5221349\"" May 8 00:40:32.841005 systemd[1]: Started cri-containerd-d345dc62d7428215c1befbb71a0340dcc1d4558f36bcda6831003437b5221349.scope - libcontainer container d345dc62d7428215c1befbb71a0340dcc1d4558f36bcda6831003437b5221349. May 8 00:40:32.884560 containerd[1464]: time="2025-05-08T00:40:32.884486734Z" level=info msg="StartContainer for \"d345dc62d7428215c1befbb71a0340dcc1d4558f36bcda6831003437b5221349\" returns successfully" May 8 00:40:33.247852 systemd-networkd[1405]: calia9fae3c1e84: Gained IPv6LL May 8 00:40:33.311817 systemd-networkd[1405]: calif1e5a4f5f82: Gained IPv6LL May 8 00:40:33.466403 kubelet[2571]: E0508 00:40:33.466362 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:33.759848 systemd-networkd[1405]: cali353f4bde467: Gained IPv6LL May 8 00:40:34.371389 containerd[1464]: time="2025-05-08T00:40:34.371321414Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:34.372417 containerd[1464]: time="2025-05-08T00:40:34.372372236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 8 00:40:34.373738 containerd[1464]: time="2025-05-08T00:40:34.373708113Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:34.382280 containerd[1464]: time="2025-05-08T00:40:34.382244936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:34.382856 containerd[1464]: time="2025-05-08T00:40:34.382827058Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.885965052s" May 8 00:40:34.382903 containerd[1464]: time="2025-05-08T00:40:34.382858728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 8 00:40:34.384355 containerd[1464]: time="2025-05-08T00:40:34.384298520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 8 00:40:34.384921 containerd[1464]: time="2025-05-08T00:40:34.384891383Z" level=info msg="CreateContainer within sandbox \"b9d7b3cf44d128d3124b661d035b500aad429763e8a96cf6c6c08e95c9ee5b77\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 8 00:40:34.407780 containerd[1464]: time="2025-05-08T00:40:34.407733484Z" level=info msg="CreateContainer within sandbox \"b9d7b3cf44d128d3124b661d035b500aad429763e8a96cf6c6c08e95c9ee5b77\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"c725a9859d744e1af4d273ea273eafd1ac2d0d2b957715cfb2897301b88724b1\"" May 8 00:40:34.408566 containerd[1464]: time="2025-05-08T00:40:34.408530249Z" level=info msg="StartContainer for \"c725a9859d744e1af4d273ea273eafd1ac2d0d2b957715cfb2897301b88724b1\"" May 8 00:40:34.443867 systemd[1]: Started cri-containerd-c725a9859d744e1af4d273ea273eafd1ac2d0d2b957715cfb2897301b88724b1.scope - libcontainer container c725a9859d744e1af4d273ea273eafd1ac2d0d2b957715cfb2897301b88724b1. May 8 00:40:34.471088 kubelet[2571]: E0508 00:40:34.470535 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:34.481296 containerd[1464]: time="2025-05-08T00:40:34.481226620Z" level=info msg="StartContainer for \"c725a9859d744e1af4d273ea273eafd1ac2d0d2b957715cfb2897301b88724b1\" returns successfully" May 8 00:40:34.491861 kubelet[2571]: I0508 00:40:34.491316 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f5d787db9-8lqnx" podStartSLOduration=29.398581174 podStartE2EDuration="33.49129138s" podCreationTimestamp="2025-05-08 00:40:01 +0000 UTC" firstStartedPulling="2025-05-08 00:40:28.403422572 +0000 UTC m=+51.567685448" lastFinishedPulling="2025-05-08 00:40:32.496132778 +0000 UTC m=+55.660395654" observedRunningTime="2025-05-08 00:40:33.508072292 +0000 UTC m=+56.672335198" watchObservedRunningTime="2025-05-08 00:40:34.49129138 +0000 UTC m=+57.655554256" May 8 00:40:36.017188 systemd[1]: Started sshd@15-10.0.0.74:22-10.0.0.1:54572.service - OpenSSH per-connection server daemon (10.0.0.1:54572). May 8 00:40:36.059352 sshd[4964]: Accepted publickey for core from 10.0.0.1 port 54572 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:40:36.061324 sshd[4964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:36.066356 systemd-logind[1453]: New session 16 of user core. May 8 00:40:36.074078 systemd[1]: Started session-16.scope - Session 16 of User core. May 8 00:40:36.225566 sshd[4964]: pam_unix(sshd:session): session closed for user core May 8 00:40:36.230879 systemd[1]: sshd@15-10.0.0.74:22-10.0.0.1:54572.service: Deactivated successfully. May 8 00:40:36.233595 systemd[1]: session-16.scope: Deactivated successfully. May 8 00:40:36.235004 systemd-logind[1453]: Session 16 logged out. Waiting for processes to exit. May 8 00:40:36.236647 systemd-logind[1453]: Removed session 16. May 8 00:40:36.914396 containerd[1464]: time="2025-05-08T00:40:36.914335709Z" level=info msg="StopPodSandbox for \"aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b\"" May 8 00:40:37.001359 containerd[1464]: 2025-05-08 00:40:36.956 [WARNING][5003] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f5d787db9--8lqnx-eth0", GenerateName:"calico-apiserver-7f5d787db9-", Namespace:"calico-apiserver", SelfLink:"", UID:"fbce2436-d8e1-4ed5-8f00-79e6a1ac4517", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f5d787db9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7b42086e45417877d612d2e1e05085eaa7255e8818e0ee602200b5d87487e6f9", Pod:"calico-apiserver-7f5d787db9-8lqnx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali85acbba915c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:37.001359 containerd[1464]: 2025-05-08 00:40:36.956 [INFO][5003] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" May 8 00:40:37.001359 containerd[1464]: 2025-05-08 00:40:36.956 [INFO][5003] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" iface="eth0" netns="" May 8 00:40:37.001359 containerd[1464]: 2025-05-08 00:40:36.956 [INFO][5003] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" May 8 00:40:37.001359 containerd[1464]: 2025-05-08 00:40:36.956 [INFO][5003] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" May 8 00:40:37.001359 containerd[1464]: 2025-05-08 00:40:36.988 [INFO][5013] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" HandleID="k8s-pod-network.aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" Workload="localhost-k8s-calico--apiserver--7f5d787db9--8lqnx-eth0" May 8 00:40:37.001359 containerd[1464]: 2025-05-08 00:40:36.989 [INFO][5013] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:37.001359 containerd[1464]: 2025-05-08 00:40:36.989 [INFO][5013] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:37.001359 containerd[1464]: 2025-05-08 00:40:36.994 [WARNING][5013] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" HandleID="k8s-pod-network.aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" Workload="localhost-k8s-calico--apiserver--7f5d787db9--8lqnx-eth0" May 8 00:40:37.001359 containerd[1464]: 2025-05-08 00:40:36.994 [INFO][5013] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" HandleID="k8s-pod-network.aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" Workload="localhost-k8s-calico--apiserver--7f5d787db9--8lqnx-eth0" May 8 00:40:37.001359 containerd[1464]: 2025-05-08 00:40:36.995 [INFO][5013] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:37.001359 containerd[1464]: 2025-05-08 00:40:36.998 [INFO][5003] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" May 8 00:40:37.001984 containerd[1464]: time="2025-05-08T00:40:37.001411095Z" level=info msg="TearDown network for sandbox \"aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b\" successfully" May 8 00:40:37.001984 containerd[1464]: time="2025-05-08T00:40:37.001442554Z" level=info msg="StopPodSandbox for \"aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b\" returns successfully" May 8 00:40:37.008650 containerd[1464]: time="2025-05-08T00:40:37.008594637Z" level=info msg="RemovePodSandbox for \"aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b\"" May 8 00:40:37.011785 containerd[1464]: time="2025-05-08T00:40:37.011752383Z" level=info msg="Forcibly stopping sandbox \"aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b\"" May 8 00:40:37.088801 containerd[1464]: 2025-05-08 00:40:37.052 [WARNING][5035] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f5d787db9--8lqnx-eth0", GenerateName:"calico-apiserver-7f5d787db9-", Namespace:"calico-apiserver", SelfLink:"", UID:"fbce2436-d8e1-4ed5-8f00-79e6a1ac4517", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f5d787db9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7b42086e45417877d612d2e1e05085eaa7255e8818e0ee602200b5d87487e6f9", Pod:"calico-apiserver-7f5d787db9-8lqnx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali85acbba915c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:37.088801 containerd[1464]: 2025-05-08 00:40:37.052 [INFO][5035] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" May 8 00:40:37.088801 containerd[1464]: 2025-05-08 00:40:37.052 [INFO][5035] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" iface="eth0" netns="" May 8 00:40:37.088801 containerd[1464]: 2025-05-08 00:40:37.052 [INFO][5035] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" May 8 00:40:37.088801 containerd[1464]: 2025-05-08 00:40:37.052 [INFO][5035] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" May 8 00:40:37.088801 containerd[1464]: 2025-05-08 00:40:37.075 [INFO][5045] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" HandleID="k8s-pod-network.aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" Workload="localhost-k8s-calico--apiserver--7f5d787db9--8lqnx-eth0" May 8 00:40:37.088801 containerd[1464]: 2025-05-08 00:40:37.075 [INFO][5045] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:37.088801 containerd[1464]: 2025-05-08 00:40:37.075 [INFO][5045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:37.088801 containerd[1464]: 2025-05-08 00:40:37.082 [WARNING][5045] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" HandleID="k8s-pod-network.aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" Workload="localhost-k8s-calico--apiserver--7f5d787db9--8lqnx-eth0" May 8 00:40:37.088801 containerd[1464]: 2025-05-08 00:40:37.082 [INFO][5045] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" HandleID="k8s-pod-network.aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" Workload="localhost-k8s-calico--apiserver--7f5d787db9--8lqnx-eth0" May 8 00:40:37.088801 containerd[1464]: 2025-05-08 00:40:37.083 [INFO][5045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:37.088801 containerd[1464]: 2025-05-08 00:40:37.085 [INFO][5035] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b" May 8 00:40:37.089378 containerd[1464]: time="2025-05-08T00:40:37.088818979Z" level=info msg="TearDown network for sandbox \"aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b\" successfully" May 8 00:40:37.376631 containerd[1464]: time="2025-05-08T00:40:37.376121768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:37.379621 containerd[1464]: time="2025-05-08T00:40:37.379551394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 8 00:40:37.381161 containerd[1464]: time="2025-05-08T00:40:37.380872243Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
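The teardown sequence above is deliberately idempotent: the plugin releases by handle ID, falls back to the workload ID, and treats a missing allocation as success ("Asked to release address but it doesn't exist. Ignoring"), so a forced sandbox removal cannot wedge on state that is already gone. A minimal Go sketch of that release-side behavior, with a toy in-memory allocation map standing in for the datastore:

package main

import (
	"fmt"
	"sync"
)

var hostIPAMLock sync.Mutex // host-wide IPAM lock, as in the assignment sketch

// allocations maps an IPAM handle to its block ordinal; a toy stand-in
// for the datastore Calico consults on release.
var allocations = map[string]int{}

// releaseByHandle frees whatever a handle owns. The key property the
// log shows: releasing a handle that no longer exists is not an error;
// it is logged and ignored, which keeps forced teardowns idempotent.
func releaseByHandle(handle string) {
	hostIPAMLock.Lock()
	defer hostIPAMLock.Unlock()

	if _, ok := allocations[handle]; !ok {
		fmt.Printf("WARNING: handle %q has no allocation; ignoring\n", handle)
		return // already released: succeed anyway
	}
	delete(allocations, handle)
	fmt.Printf("released allocation for %q\n", handle)
}

func main() {
	allocations["k8s-pod-network.aac2c5..."] = 1
	releaseByHandle("k8s-pod-network.aac2c5...") // first teardown: releases
	releaseByHandle("k8s-pod-network.aac2c5...") // forced re-stop: ignored, no error
}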
May 8 00:40:37.381161 containerd[1464]: time="2025-05-08T00:40:37.380979034Z" level=info msg="RemovePodSandbox \"aac2c555a223b9f1937f1db0dcdfcec94da31705b3d0cceddd3b9b25369d5c1b\" returns successfully" May 8 00:40:37.382048 containerd[1464]: time="2025-05-08T00:40:37.381750811Z" level=info msg="StopPodSandbox for \"eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83\"" May 8 00:40:37.382048 containerd[1464]: time="2025-05-08T00:40:37.381987647Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:37.385797 containerd[1464]: time="2025-05-08T00:40:37.385730270Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:37.386457 containerd[1464]: time="2025-05-08T00:40:37.386418261Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 3.002069146s" May 8 00:40:37.386527 containerd[1464]: time="2025-05-08T00:40:37.386463496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 8 00:40:37.387907 containerd[1464]: time="2025-05-08T00:40:37.387883140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 00:40:37.401719 containerd[1464]: time="2025-05-08T00:40:37.398229346Z" level=info msg="CreateContainer within sandbox \"63de5e79a23b3965b060b9d4c4bbcb22d26147450f4a667b3fa836e381acbd9d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 8 00:40:37.427355 containerd[1464]: time="2025-05-08T00:40:37.427301414Z" level=info msg="CreateContainer within sandbox \"63de5e79a23b3965b060b9d4c4bbcb22d26147450f4a667b3fa836e381acbd9d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9900309315f202cf2cc87073d4f155e1de8ce7b19c64f69076757a4ef4eb97df\"" May 8 00:40:37.428185 containerd[1464]: time="2025-05-08T00:40:37.428143244Z" level=info msg="StartContainer for \"9900309315f202cf2cc87073d4f155e1de8ce7b19c64f69076757a4ef4eb97df\"" May 8 00:40:37.464066 systemd[1]: Started cri-containerd-9900309315f202cf2cc87073d4f155e1de8ce7b19c64f69076757a4ef4eb97df.scope - libcontainer container 9900309315f202cf2cc87073d4f155e1de8ce7b19c64f69076757a4ef4eb97df. May 8 00:40:37.468806 containerd[1464]: 2025-05-08 00:40:37.426 [WARNING][5073] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--9584k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8512146f-a4f2-4ad6-9a28-559c237b8730", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2c8e311842d896edc7784d5a9f83b0ca83ee35b9c0bbb03d3020d4abd9e4f340", Pod:"coredns-7db6d8ff4d-9584k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali204ab6c5f6b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:37.468806 containerd[1464]: 2025-05-08 00:40:37.426 [INFO][5073] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" May 8 00:40:37.468806 containerd[1464]: 2025-05-08 00:40:37.426 [INFO][5073] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" iface="eth0" netns="" May 8 00:40:37.468806 containerd[1464]: 2025-05-08 00:40:37.426 [INFO][5073] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" May 8 00:40:37.468806 containerd[1464]: 2025-05-08 00:40:37.426 [INFO][5073] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" May 8 00:40:37.468806 containerd[1464]: 2025-05-08 00:40:37.453 [INFO][5082] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" HandleID="k8s-pod-network.eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" Workload="localhost-k8s-coredns--7db6d8ff4d--9584k-eth0" May 8 00:40:37.468806 containerd[1464]: 2025-05-08 00:40:37.453 [INFO][5082] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:37.468806 containerd[1464]: 2025-05-08 00:40:37.453 [INFO][5082] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:37.468806 containerd[1464]: 2025-05-08 00:40:37.460 [WARNING][5082] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" HandleID="k8s-pod-network.eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" Workload="localhost-k8s-coredns--7db6d8ff4d--9584k-eth0" May 8 00:40:37.468806 containerd[1464]: 2025-05-08 00:40:37.460 [INFO][5082] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" HandleID="k8s-pod-network.eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" Workload="localhost-k8s-coredns--7db6d8ff4d--9584k-eth0" May 8 00:40:37.468806 containerd[1464]: 2025-05-08 00:40:37.463 [INFO][5082] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:37.468806 containerd[1464]: 2025-05-08 00:40:37.465 [INFO][5073] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" May 8 00:40:37.471602 containerd[1464]: time="2025-05-08T00:40:37.471556606Z" level=info msg="TearDown network for sandbox \"eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83\" successfully" May 8 00:40:37.471680 containerd[1464]: time="2025-05-08T00:40:37.471602903Z" level=info msg="StopPodSandbox for \"eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83\" returns successfully" May 8 00:40:37.472776 containerd[1464]: time="2025-05-08T00:40:37.472739957Z" level=info msg="RemovePodSandbox for \"eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83\"" May 8 00:40:37.472838 containerd[1464]: time="2025-05-08T00:40:37.472787536Z" level=info msg="Forcibly stopping sandbox \"eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83\"" May 8 00:40:37.537594 containerd[1464]: time="2025-05-08T00:40:37.537419528Z" level=info msg="StartContainer for \"9900309315f202cf2cc87073d4f155e1de8ce7b19c64f69076757a4ef4eb97df\" returns successfully" May 8 00:40:37.610337 containerd[1464]: 2025-05-08 00:40:37.535 [WARNING][5129] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--9584k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8512146f-a4f2-4ad6-9a28-559c237b8730", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2c8e311842d896edc7784d5a9f83b0ca83ee35b9c0bbb03d3020d4abd9e4f340", Pod:"coredns-7db6d8ff4d-9584k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali204ab6c5f6b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:37.610337 containerd[1464]: 2025-05-08 00:40:37.535 [INFO][5129] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" May 8 00:40:37.610337 containerd[1464]: 2025-05-08 00:40:37.535 [INFO][5129] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" iface="eth0" netns="" May 8 00:40:37.610337 containerd[1464]: 2025-05-08 00:40:37.535 [INFO][5129] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" May 8 00:40:37.610337 containerd[1464]: 2025-05-08 00:40:37.535 [INFO][5129] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" May 8 00:40:37.610337 containerd[1464]: 2025-05-08 00:40:37.565 [INFO][5149] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" HandleID="k8s-pod-network.eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" Workload="localhost-k8s-coredns--7db6d8ff4d--9584k-eth0" May 8 00:40:37.610337 containerd[1464]: 2025-05-08 00:40:37.566 [INFO][5149] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:37.610337 containerd[1464]: 2025-05-08 00:40:37.566 [INFO][5149] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:37.610337 containerd[1464]: 2025-05-08 00:40:37.601 [WARNING][5149] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" HandleID="k8s-pod-network.eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" Workload="localhost-k8s-coredns--7db6d8ff4d--9584k-eth0" May 8 00:40:37.610337 containerd[1464]: 2025-05-08 00:40:37.601 [INFO][5149] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" HandleID="k8s-pod-network.eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" Workload="localhost-k8s-coredns--7db6d8ff4d--9584k-eth0" May 8 00:40:37.610337 containerd[1464]: 2025-05-08 00:40:37.603 [INFO][5149] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:37.610337 containerd[1464]: 2025-05-08 00:40:37.607 [INFO][5129] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83" May 8 00:40:37.610947 containerd[1464]: time="2025-05-08T00:40:37.610391391Z" level=info msg="TearDown network for sandbox \"eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83\" successfully" May 8 00:40:37.658414 containerd[1464]: time="2025-05-08T00:40:37.658224808Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:40:37.658414 containerd[1464]: time="2025-05-08T00:40:37.658330065Z" level=info msg="RemovePodSandbox \"eb5e315efac58933703b627c2487a8a3490d3a4b4ad8c571bb5812a99aaf5c83\" returns successfully" May 8 00:40:37.658962 containerd[1464]: time="2025-05-08T00:40:37.658902440Z" level=info msg="StopPodSandbox for \"a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0\"" May 8 00:40:37.774766 containerd[1464]: 2025-05-08 00:40:37.739 [WARNING][5172] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gzqv5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"94d037e0-3318-4a96-bf33-490f8e3dd35d", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b9d7b3cf44d128d3124b661d035b500aad429763e8a96cf6c6c08e95c9ee5b77", Pod:"csi-node-driver-gzqv5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif8fd6e90ce7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:37.774766 containerd[1464]: 2025-05-08 00:40:37.740 [INFO][5172] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" May 8 00:40:37.774766 containerd[1464]: 2025-05-08 00:40:37.740 [INFO][5172] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" iface="eth0" netns="" May 8 00:40:37.774766 containerd[1464]: 2025-05-08 00:40:37.740 [INFO][5172] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" May 8 00:40:37.774766 containerd[1464]: 2025-05-08 00:40:37.740 [INFO][5172] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" May 8 00:40:37.774766 containerd[1464]: 2025-05-08 00:40:37.762 [INFO][5182] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" HandleID="k8s-pod-network.a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" Workload="localhost-k8s-csi--node--driver--gzqv5-eth0" May 8 00:40:37.774766 containerd[1464]: 2025-05-08 00:40:37.762 [INFO][5182] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:37.774766 containerd[1464]: 2025-05-08 00:40:37.762 [INFO][5182] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:37.774766 containerd[1464]: 2025-05-08 00:40:37.767 [WARNING][5182] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" HandleID="k8s-pod-network.a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" Workload="localhost-k8s-csi--node--driver--gzqv5-eth0" May 8 00:40:37.774766 containerd[1464]: 2025-05-08 00:40:37.767 [INFO][5182] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" HandleID="k8s-pod-network.a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" Workload="localhost-k8s-csi--node--driver--gzqv5-eth0" May 8 00:40:37.774766 containerd[1464]: 2025-05-08 00:40:37.769 [INFO][5182] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:37.774766 containerd[1464]: 2025-05-08 00:40:37.771 [INFO][5172] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" May 8 00:40:37.775220 containerd[1464]: time="2025-05-08T00:40:37.774822051Z" level=info msg="TearDown network for sandbox \"a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0\" successfully" May 8 00:40:37.775220 containerd[1464]: time="2025-05-08T00:40:37.774857718Z" level=info msg="StopPodSandbox for \"a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0\" returns successfully" May 8 00:40:37.775604 containerd[1464]: time="2025-05-08T00:40:37.775544267Z" level=info msg="RemovePodSandbox for \"a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0\"" May 8 00:40:37.775604 containerd[1464]: time="2025-05-08T00:40:37.775593810Z" level=info msg="Forcibly stopping sandbox \"a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0\"" May 8 00:40:37.848850 containerd[1464]: 2025-05-08 00:40:37.814 [WARNING][5205] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gzqv5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"94d037e0-3318-4a96-bf33-490f8e3dd35d", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b9d7b3cf44d128d3124b661d035b500aad429763e8a96cf6c6c08e95c9ee5b77", Pod:"csi-node-driver-gzqv5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif8fd6e90ce7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:37.848850 containerd[1464]: 2025-05-08 00:40:37.814 [INFO][5205] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" May 8 00:40:37.848850 containerd[1464]: 2025-05-08 00:40:37.814 [INFO][5205] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" iface="eth0" netns="" May 8 00:40:37.848850 containerd[1464]: 2025-05-08 00:40:37.814 [INFO][5205] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" May 8 00:40:37.848850 containerd[1464]: 2025-05-08 00:40:37.814 [INFO][5205] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" May 8 00:40:37.848850 containerd[1464]: 2025-05-08 00:40:37.836 [INFO][5213] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" HandleID="k8s-pod-network.a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" Workload="localhost-k8s-csi--node--driver--gzqv5-eth0" May 8 00:40:37.848850 containerd[1464]: 2025-05-08 00:40:37.837 [INFO][5213] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:37.848850 containerd[1464]: 2025-05-08 00:40:37.837 [INFO][5213] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:37.848850 containerd[1464]: 2025-05-08 00:40:37.842 [WARNING][5213] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" HandleID="k8s-pod-network.a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" Workload="localhost-k8s-csi--node--driver--gzqv5-eth0" May 8 00:40:37.848850 containerd[1464]: 2025-05-08 00:40:37.842 [INFO][5213] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" HandleID="k8s-pod-network.a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" Workload="localhost-k8s-csi--node--driver--gzqv5-eth0" May 8 00:40:37.848850 containerd[1464]: 2025-05-08 00:40:37.844 [INFO][5213] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:37.848850 containerd[1464]: 2025-05-08 00:40:37.846 [INFO][5205] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0" May 8 00:40:37.849392 containerd[1464]: time="2025-05-08T00:40:37.848890973Z" level=info msg="TearDown network for sandbox \"a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0\" successfully" May 8 00:40:37.963446 containerd[1464]: time="2025-05-08T00:40:37.963349653Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:40:37.963446 containerd[1464]: time="2025-05-08T00:40:37.963447206Z" level=info msg="RemovePodSandbox \"a8418cc8aa10247bb9931bc8764bb719eac84b654db67c4000dfbc4f401231a0\" returns successfully" May 8 00:40:37.964188 containerd[1464]: time="2025-05-08T00:40:37.963943476Z" level=info msg="StopPodSandbox for \"81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0\"" May 8 00:40:38.033094 containerd[1464]: 2025-05-08 00:40:37.999 [WARNING][5235] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--pq6w8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5e801bca-4ca7-4f8e-baa8-230995c21235", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a5b8cff7ad138f7af16c96b35d3f345417713668487dd81fb6ecb879d927861b", Pod:"coredns-7db6d8ff4d-pq6w8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif1e5a4f5f82", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:38.033094 containerd[1464]: 2025-05-08 00:40:38.000 [INFO][5235] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" May 8 00:40:38.033094 containerd[1464]: 2025-05-08 00:40:38.000 [INFO][5235] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" iface="eth0" netns="" May 8 00:40:38.033094 containerd[1464]: 2025-05-08 00:40:38.000 [INFO][5235] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" May 8 00:40:38.033094 containerd[1464]: 2025-05-08 00:40:38.000 [INFO][5235] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" May 8 00:40:38.033094 containerd[1464]: 2025-05-08 00:40:38.021 [INFO][5244] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" HandleID="k8s-pod-network.81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" Workload="localhost-k8s-coredns--7db6d8ff4d--pq6w8-eth0" May 8 00:40:38.033094 containerd[1464]: 2025-05-08 00:40:38.021 [INFO][5244] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:38.033094 containerd[1464]: 2025-05-08 00:40:38.021 [INFO][5244] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:38.033094 containerd[1464]: 2025-05-08 00:40:38.026 [WARNING][5244] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" HandleID="k8s-pod-network.81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" Workload="localhost-k8s-coredns--7db6d8ff4d--pq6w8-eth0" May 8 00:40:38.033094 containerd[1464]: 2025-05-08 00:40:38.026 [INFO][5244] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" HandleID="k8s-pod-network.81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" Workload="localhost-k8s-coredns--7db6d8ff4d--pq6w8-eth0" May 8 00:40:38.033094 containerd[1464]: 2025-05-08 00:40:38.028 [INFO][5244] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:38.033094 containerd[1464]: 2025-05-08 00:40:38.030 [INFO][5235] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" May 8 00:40:38.033775 containerd[1464]: time="2025-05-08T00:40:38.033134513Z" level=info msg="TearDown network for sandbox \"81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0\" successfully" May 8 00:40:38.033775 containerd[1464]: time="2025-05-08T00:40:38.033163106Z" level=info msg="StopPodSandbox for \"81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0\" returns successfully" May 8 00:40:38.033775 containerd[1464]: time="2025-05-08T00:40:38.033753484Z" level=info msg="RemovePodSandbox for \"81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0\"" May 8 00:40:38.033869 containerd[1464]: time="2025-05-08T00:40:38.033779533Z" level=info msg="Forcibly stopping sandbox \"81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0\"" May 8 00:40:38.098526 containerd[1464]: time="2025-05-08T00:40:38.098447269Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:38.098866 containerd[1464]: 2025-05-08 00:40:38.068 [WARNING][5266] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--pq6w8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5e801bca-4ca7-4f8e-baa8-230995c21235", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 39, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a5b8cff7ad138f7af16c96b35d3f345417713668487dd81fb6ecb879d927861b", Pod:"coredns-7db6d8ff4d-pq6w8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif1e5a4f5f82", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:38.098866 containerd[1464]: 2025-05-08 00:40:38.068 [INFO][5266] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" May 8 00:40:38.098866 containerd[1464]: 2025-05-08 00:40:38.068 [INFO][5266] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" iface="eth0" netns="" May 8 00:40:38.098866 containerd[1464]: 2025-05-08 00:40:38.068 [INFO][5266] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" May 8 00:40:38.098866 containerd[1464]: 2025-05-08 00:40:38.068 [INFO][5266] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" May 8 00:40:38.098866 containerd[1464]: 2025-05-08 00:40:38.088 [INFO][5274] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" HandleID="k8s-pod-network.81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" Workload="localhost-k8s-coredns--7db6d8ff4d--pq6w8-eth0" May 8 00:40:38.098866 containerd[1464]: 2025-05-08 00:40:38.088 [INFO][5274] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:38.098866 containerd[1464]: 2025-05-08 00:40:38.088 [INFO][5274] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:38.098866 containerd[1464]: 2025-05-08 00:40:38.093 [WARNING][5274] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" HandleID="k8s-pod-network.81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" Workload="localhost-k8s-coredns--7db6d8ff4d--pq6w8-eth0" May 8 00:40:38.098866 containerd[1464]: 2025-05-08 00:40:38.093 [INFO][5274] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" HandleID="k8s-pod-network.81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" Workload="localhost-k8s-coredns--7db6d8ff4d--pq6w8-eth0" May 8 00:40:38.098866 containerd[1464]: 2025-05-08 00:40:38.094 [INFO][5274] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:38.098866 containerd[1464]: 2025-05-08 00:40:38.096 [INFO][5266] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0" May 8 00:40:38.099329 containerd[1464]: time="2025-05-08T00:40:38.098926568Z" level=info msg="TearDown network for sandbox \"81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0\" successfully" May 8 00:40:38.113288 containerd[1464]: time="2025-05-08T00:40:38.113242323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 8 00:40:38.150547 containerd[1464]: time="2025-05-08T00:40:38.150441792Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:40:38.150547 containerd[1464]: time="2025-05-08T00:40:38.150557489Z" level=info msg="RemovePodSandbox \"81df2011b52eca572b06af1c05e7d093bed87d454983d21b84d084f3cb46f3d0\" returns successfully" May 8 00:40:38.151459 containerd[1464]: time="2025-05-08T00:40:38.151177762Z" level=info msg="StopPodSandbox for \"b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd\"" May 8 00:40:38.152988 containerd[1464]: time="2025-05-08T00:40:38.152919010Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 764.999161ms" May 8 00:40:38.152988 containerd[1464]: time="2025-05-08T00:40:38.152977550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 8 00:40:38.154064 containerd[1464]: time="2025-05-08T00:40:38.154027841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 8 00:40:38.155547 containerd[1464]: time="2025-05-08T00:40:38.155499743Z" level=info msg="CreateContainer within sandbox \"d2c1911686a9357af7c7686212bd6baa413e35979eed15ab7c27c295c3bfcb21\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:40:38.234634 containerd[1464]: 2025-05-08 00:40:38.201 [WARNING][5297] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7bb4ddbd59--kfxqq-eth0", GenerateName:"calico-kube-controllers-7bb4ddbd59-", Namespace:"calico-system", SelfLink:"", UID:"6f09368a-85bc-4ff8-a22a-5897ae61119a", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bb4ddbd59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"63de5e79a23b3965b060b9d4c4bbcb22d26147450f4a667b3fa836e381acbd9d", Pod:"calico-kube-controllers-7bb4ddbd59-kfxqq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali353f4bde467", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:38.234634 containerd[1464]: 2025-05-08 00:40:38.201 [INFO][5297] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" May 8 00:40:38.234634 containerd[1464]: 2025-05-08 00:40:38.202 [INFO][5297] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" iface="eth0" netns="" May 8 00:40:38.234634 containerd[1464]: 2025-05-08 00:40:38.202 [INFO][5297] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" May 8 00:40:38.234634 containerd[1464]: 2025-05-08 00:40:38.202 [INFO][5297] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" May 8 00:40:38.234634 containerd[1464]: 2025-05-08 00:40:38.222 [INFO][5306] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" HandleID="k8s-pod-network.b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" Workload="localhost-k8s-calico--kube--controllers--7bb4ddbd59--kfxqq-eth0" May 8 00:40:38.234634 containerd[1464]: 2025-05-08 00:40:38.222 [INFO][5306] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:38.234634 containerd[1464]: 2025-05-08 00:40:38.222 [INFO][5306] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:38.234634 containerd[1464]: 2025-05-08 00:40:38.228 [WARNING][5306] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" HandleID="k8s-pod-network.b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" Workload="localhost-k8s-calico--kube--controllers--7bb4ddbd59--kfxqq-eth0" May 8 00:40:38.234634 containerd[1464]: 2025-05-08 00:40:38.228 [INFO][5306] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" HandleID="k8s-pod-network.b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" Workload="localhost-k8s-calico--kube--controllers--7bb4ddbd59--kfxqq-eth0" May 8 00:40:38.234634 containerd[1464]: 2025-05-08 00:40:38.229 [INFO][5306] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:38.234634 containerd[1464]: 2025-05-08 00:40:38.232 [INFO][5297] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" May 8 00:40:38.234634 containerd[1464]: time="2025-05-08T00:40:38.234602737Z" level=info msg="TearDown network for sandbox \"b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd\" successfully" May 8 00:40:38.234634 containerd[1464]: time="2025-05-08T00:40:38.234636571Z" level=info msg="StopPodSandbox for \"b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd\" returns successfully" May 8 00:40:38.235334 containerd[1464]: time="2025-05-08T00:40:38.235296540Z" level=info msg="RemovePodSandbox for \"b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd\"" May 8 00:40:38.235381 containerd[1464]: time="2025-05-08T00:40:38.235338388Z" level=info msg="Forcibly stopping sandbox \"b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd\"" May 8 00:40:38.413216 containerd[1464]: 2025-05-08 00:40:38.376 [WARNING][5328] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7bb4ddbd59--kfxqq-eth0", GenerateName:"calico-kube-controllers-7bb4ddbd59-", Namespace:"calico-system", SelfLink:"", UID:"6f09368a-85bc-4ff8-a22a-5897ae61119a", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bb4ddbd59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"63de5e79a23b3965b060b9d4c4bbcb22d26147450f4a667b3fa836e381acbd9d", Pod:"calico-kube-controllers-7bb4ddbd59-kfxqq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali353f4bde467", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:38.413216 containerd[1464]: 2025-05-08 00:40:38.376 [INFO][5328] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" May 8 00:40:38.413216 containerd[1464]: 2025-05-08 00:40:38.376 [INFO][5328] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" iface="eth0" netns="" May 8 00:40:38.413216 containerd[1464]: 2025-05-08 00:40:38.376 [INFO][5328] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" May 8 00:40:38.413216 containerd[1464]: 2025-05-08 00:40:38.376 [INFO][5328] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" May 8 00:40:38.413216 containerd[1464]: 2025-05-08 00:40:38.399 [INFO][5338] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" HandleID="k8s-pod-network.b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" Workload="localhost-k8s-calico--kube--controllers--7bb4ddbd59--kfxqq-eth0" May 8 00:40:38.413216 containerd[1464]: 2025-05-08 00:40:38.400 [INFO][5338] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:38.413216 containerd[1464]: 2025-05-08 00:40:38.400 [INFO][5338] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:38.413216 containerd[1464]: 2025-05-08 00:40:38.406 [WARNING][5338] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" HandleID="k8s-pod-network.b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" Workload="localhost-k8s-calico--kube--controllers--7bb4ddbd59--kfxqq-eth0" May 8 00:40:38.413216 containerd[1464]: 2025-05-08 00:40:38.406 [INFO][5338] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" HandleID="k8s-pod-network.b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" Workload="localhost-k8s-calico--kube--controllers--7bb4ddbd59--kfxqq-eth0" May 8 00:40:38.413216 containerd[1464]: 2025-05-08 00:40:38.408 [INFO][5338] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:38.413216 containerd[1464]: 2025-05-08 00:40:38.410 [INFO][5328] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd" May 8 00:40:38.413676 containerd[1464]: time="2025-05-08T00:40:38.413264313Z" level=info msg="TearDown network for sandbox \"b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd\" successfully" May 8 00:40:38.517182 containerd[1464]: time="2025-05-08T00:40:38.517014911Z" level=info msg="CreateContainer within sandbox \"d2c1911686a9357af7c7686212bd6baa413e35979eed15ab7c27c295c3bfcb21\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"49bc6213029ce30cb22a9d59ec1a4fac5ffeee726a24dcbb5f6c6d23367d6fcc\"" May 8 00:40:38.517978 containerd[1464]: time="2025-05-08T00:40:38.517932724Z" level=info msg="StartContainer for \"49bc6213029ce30cb22a9d59ec1a4fac5ffeee726a24dcbb5f6c6d23367d6fcc\"" May 8 00:40:38.601091 systemd[1]: run-containerd-runc-k8s.io-49bc6213029ce30cb22a9d59ec1a4fac5ffeee726a24dcbb5f6c6d23367d6fcc-runc.ahrBVs.mount: Deactivated successfully. May 8 00:40:38.608082 containerd[1464]: time="2025-05-08T00:40:38.606054343Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:40:38.608082 containerd[1464]: time="2025-05-08T00:40:38.606139873Z" level=info msg="RemovePodSandbox \"b7c5cb5724f9706e30bd83bad9f5ddb0cac9697f25a6c11ae5dd9a40da752fbd\" returns successfully" May 8 00:40:38.608082 containerd[1464]: time="2025-05-08T00:40:38.606905389Z" level=info msg="StopPodSandbox for \"5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5\"" May 8 00:40:38.609470 systemd[1]: Started cri-containerd-49bc6213029ce30cb22a9d59ec1a4fac5ffeee726a24dcbb5f6c6d23367d6fcc.scope - libcontainer container 49bc6213029ce30cb22a9d59ec1a4fac5ffeee726a24dcbb5f6c6d23367d6fcc. 
May 8 00:40:38.860453 containerd[1464]: time="2025-05-08T00:40:38.860289920Z" level=info msg="StartContainer for \"49bc6213029ce30cb22a9d59ec1a4fac5ffeee726a24dcbb5f6c6d23367d6fcc\" returns successfully" May 8 00:40:38.876480 kubelet[2571]: I0508 00:40:38.876376 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7bb4ddbd59-kfxqq" podStartSLOduration=32.564936722 podStartE2EDuration="37.876352603s" podCreationTimestamp="2025-05-08 00:40:01 +0000 UTC" firstStartedPulling="2025-05-08 00:40:32.076075895 +0000 UTC m=+55.240338771" lastFinishedPulling="2025-05-08 00:40:37.387491776 +0000 UTC m=+60.551754652" observedRunningTime="2025-05-08 00:40:38.681898131 +0000 UTC m=+61.846161007" watchObservedRunningTime="2025-05-08 00:40:38.876352603 +0000 UTC m=+62.040615479" May 8 00:40:38.938340 containerd[1464]: 2025-05-08 00:40:38.875 [WARNING][5396] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f5d787db9--gdzhq-eth0", GenerateName:"calico-apiserver-7f5d787db9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9d13fad7-9b52-4242-8c83-7d9a65d72e32", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f5d787db9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d2c1911686a9357af7c7686212bd6baa413e35979eed15ab7c27c295c3bfcb21", Pod:"calico-apiserver-7f5d787db9-gdzhq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia9fae3c1e84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:38.938340 containerd[1464]: 2025-05-08 00:40:38.875 [INFO][5396] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" May 8 00:40:38.938340 containerd[1464]: 2025-05-08 00:40:38.875 [INFO][5396] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" iface="eth0" netns="" May 8 00:40:38.938340 containerd[1464]: 2025-05-08 00:40:38.875 [INFO][5396] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" May 8 00:40:38.938340 containerd[1464]: 2025-05-08 00:40:38.875 [INFO][5396] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" May 8 00:40:38.938340 containerd[1464]: 2025-05-08 00:40:38.899 [INFO][5431] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" HandleID="k8s-pod-network.5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" Workload="localhost-k8s-calico--apiserver--7f5d787db9--gdzhq-eth0" May 8 00:40:38.938340 containerd[1464]: 2025-05-08 00:40:38.899 [INFO][5431] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:38.938340 containerd[1464]: 2025-05-08 00:40:38.899 [INFO][5431] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:38.938340 containerd[1464]: 2025-05-08 00:40:38.930 [WARNING][5431] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" HandleID="k8s-pod-network.5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" Workload="localhost-k8s-calico--apiserver--7f5d787db9--gdzhq-eth0" May 8 00:40:38.938340 containerd[1464]: 2025-05-08 00:40:38.930 [INFO][5431] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" HandleID="k8s-pod-network.5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" Workload="localhost-k8s-calico--apiserver--7f5d787db9--gdzhq-eth0" May 8 00:40:38.938340 containerd[1464]: 2025-05-08 00:40:38.931 [INFO][5431] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:38.938340 containerd[1464]: 2025-05-08 00:40:38.935 [INFO][5396] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" May 8 00:40:38.938986 containerd[1464]: time="2025-05-08T00:40:38.938383159Z" level=info msg="TearDown network for sandbox \"5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5\" successfully" May 8 00:40:38.938986 containerd[1464]: time="2025-05-08T00:40:38.938414277Z" level=info msg="StopPodSandbox for \"5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5\" returns successfully" May 8 00:40:38.939703 containerd[1464]: time="2025-05-08T00:40:38.939237883Z" level=info msg="RemovePodSandbox for \"5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5\"" May 8 00:40:38.939703 containerd[1464]: time="2025-05-08T00:40:38.939297124Z" level=info msg="Forcibly stopping sandbox \"5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5\"" May 8 00:40:39.127916 containerd[1464]: 2025-05-08 00:40:39.015 [WARNING][5454] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f5d787db9--gdzhq-eth0", GenerateName:"calico-apiserver-7f5d787db9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9d13fad7-9b52-4242-8c83-7d9a65d72e32", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f5d787db9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d2c1911686a9357af7c7686212bd6baa413e35979eed15ab7c27c295c3bfcb21", Pod:"calico-apiserver-7f5d787db9-gdzhq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia9fae3c1e84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:39.127916 containerd[1464]: 2025-05-08 00:40:39.015 [INFO][5454] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" May 8 00:40:39.127916 containerd[1464]: 2025-05-08 00:40:39.015 [INFO][5454] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" iface="eth0" netns="" May 8 00:40:39.127916 containerd[1464]: 2025-05-08 00:40:39.015 [INFO][5454] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" May 8 00:40:39.127916 containerd[1464]: 2025-05-08 00:40:39.015 [INFO][5454] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" May 8 00:40:39.127916 containerd[1464]: 2025-05-08 00:40:39.040 [INFO][5463] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" HandleID="k8s-pod-network.5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" Workload="localhost-k8s-calico--apiserver--7f5d787db9--gdzhq-eth0" May 8 00:40:39.127916 containerd[1464]: 2025-05-08 00:40:39.040 [INFO][5463] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:39.127916 containerd[1464]: 2025-05-08 00:40:39.040 [INFO][5463] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:39.127916 containerd[1464]: 2025-05-08 00:40:39.045 [WARNING][5463] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" HandleID="k8s-pod-network.5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" Workload="localhost-k8s-calico--apiserver--7f5d787db9--gdzhq-eth0" May 8 00:40:39.127916 containerd[1464]: 2025-05-08 00:40:39.045 [INFO][5463] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" HandleID="k8s-pod-network.5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" Workload="localhost-k8s-calico--apiserver--7f5d787db9--gdzhq-eth0" May 8 00:40:39.127916 containerd[1464]: 2025-05-08 00:40:39.122 [INFO][5463] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:39.127916 containerd[1464]: 2025-05-08 00:40:39.125 [INFO][5454] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5" May 8 00:40:39.127916 containerd[1464]: time="2025-05-08T00:40:39.127887596Z" level=info msg="TearDown network for sandbox \"5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5\" successfully" May 8 00:40:39.180535 containerd[1464]: time="2025-05-08T00:40:39.180470309Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:40:39.180535 containerd[1464]: time="2025-05-08T00:40:39.180551571Z" level=info msg="RemovePodSandbox \"5e38fbab46ba64fb0fd11a5d794f9b564ceeeb18db78ab3516fbab75035b0dd5\" returns successfully" May 8 00:40:39.511564 kubelet[2571]: I0508 00:40:39.510779 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f5d787db9-gdzhq" podStartSLOduration=32.490473816 podStartE2EDuration="38.510758098s" podCreationTimestamp="2025-05-08 00:40:01 +0000 UTC" firstStartedPulling="2025-05-08 00:40:32.133518506 +0000 UTC m=+55.297781382" lastFinishedPulling="2025-05-08 00:40:38.153802788 +0000 UTC m=+61.318065664" observedRunningTime="2025-05-08 00:40:39.510500635 +0000 UTC m=+62.674763511" watchObservedRunningTime="2025-05-08 00:40:39.510758098 +0000 UTC m=+62.675020974" May 8 00:40:40.158922 containerd[1464]: time="2025-05-08T00:40:40.158837548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:40.160176 containerd[1464]: time="2025-05-08T00:40:40.160115466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 8 00:40:40.161582 containerd[1464]: time="2025-05-08T00:40:40.161516374Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:40.164300 containerd[1464]: time="2025-05-08T00:40:40.164265353Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:40.164882 containerd[1464]: time="2025-05-08T00:40:40.164848127Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id 
\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.010778147s" May 8 00:40:40.164882 containerd[1464]: time="2025-05-08T00:40:40.164889945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 8 00:40:40.167277 containerd[1464]: time="2025-05-08T00:40:40.167238843Z" level=info msg="CreateContainer within sandbox \"b9d7b3cf44d128d3124b661d035b500aad429763e8a96cf6c6c08e95c9ee5b77\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 8 00:40:40.180468 containerd[1464]: time="2025-05-08T00:40:40.180411702Z" level=info msg="CreateContainer within sandbox \"b9d7b3cf44d128d3124b661d035b500aad429763e8a96cf6c6c08e95c9ee5b77\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c099669be1bdf99afe9cd1e5784095b6c14e2420681917a00b2889fa0887c68b\"" May 8 00:40:40.180989 containerd[1464]: time="2025-05-08T00:40:40.180957465Z" level=info msg="StartContainer for \"c099669be1bdf99afe9cd1e5784095b6c14e2420681917a00b2889fa0887c68b\"" May 8 00:40:40.221805 systemd[1]: Started cri-containerd-c099669be1bdf99afe9cd1e5784095b6c14e2420681917a00b2889fa0887c68b.scope - libcontainer container c099669be1bdf99afe9cd1e5784095b6c14e2420681917a00b2889fa0887c68b. May 8 00:40:40.255706 containerd[1464]: time="2025-05-08T00:40:40.255603167Z" level=info msg="StartContainer for \"c099669be1bdf99afe9cd1e5784095b6c14e2420681917a00b2889fa0887c68b\" returns successfully" May 8 00:40:40.521331 kubelet[2571]: I0508 00:40:40.519157 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gzqv5" podStartSLOduration=28.559355844 podStartE2EDuration="39.519135894s" podCreationTimestamp="2025-05-08 00:40:01 +0000 UTC" firstStartedPulling="2025-05-08 00:40:29.205900229 +0000 UTC m=+52.370163105" lastFinishedPulling="2025-05-08 00:40:40.165680279 +0000 UTC m=+63.329943155" observedRunningTime="2025-05-08 00:40:40.519011711 +0000 UTC m=+63.683274587" watchObservedRunningTime="2025-05-08 00:40:40.519135894 +0000 UTC m=+63.683398760" May 8 00:40:41.000295 kubelet[2571]: I0508 00:40:41.000227 2571 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 8 00:40:41.000295 kubelet[2571]: I0508 00:40:41.000264 2571 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 8 00:40:41.237174 systemd[1]: Started sshd@16-10.0.0.74:22-10.0.0.1:52576.service - OpenSSH per-connection server daemon (10.0.0.1:52576). May 8 00:40:41.284012 sshd[5518]: Accepted publickey for core from 10.0.0.1 port 52576 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:40:41.285929 sshd[5518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:41.290966 systemd-logind[1453]: New session 17 of user core. May 8 00:40:41.301828 systemd[1]: Started session-17.scope - Session 17 of User core. 
May 8 00:40:41.443045 sshd[5518]: pam_unix(sshd:session): session closed for user core May 8 00:40:41.447791 systemd[1]: sshd@16-10.0.0.74:22-10.0.0.1:52576.service: Deactivated successfully. May 8 00:40:41.449949 systemd[1]: session-17.scope: Deactivated successfully. May 8 00:40:41.450571 systemd-logind[1453]: Session 17 logged out. Waiting for processes to exit. May 8 00:40:41.451841 systemd-logind[1453]: Removed session 17. May 8 00:40:44.082383 kubelet[2571]: E0508 00:40:44.082313 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:44.186115 systemd[1]: run-containerd-runc-k8s.io-0babc74d57f6c4f322cbf0b94e7a3746c0b0f453d219770e8b6a9691caba9494-runc.8JYKT9.mount: Deactivated successfully. May 8 00:40:44.514414 kubelet[2571]: E0508 00:40:44.514371 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:46.455178 systemd[1]: Started sshd@17-10.0.0.74:22-10.0.0.1:52588.service - OpenSSH per-connection server daemon (10.0.0.1:52588). May 8 00:40:46.490808 sshd[5601]: Accepted publickey for core from 10.0.0.1 port 52588 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:40:46.492512 sshd[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:46.497044 systemd-logind[1453]: New session 18 of user core. May 8 00:40:46.507789 systemd[1]: Started session-18.scope - Session 18 of User core. May 8 00:40:46.620073 sshd[5601]: pam_unix(sshd:session): session closed for user core May 8 00:40:46.629344 systemd[1]: sshd@17-10.0.0.74:22-10.0.0.1:52588.service: Deactivated successfully. May 8 00:40:46.631746 systemd[1]: session-18.scope: Deactivated successfully. May 8 00:40:46.634050 systemd-logind[1453]: Session 18 logged out. Waiting for processes to exit. May 8 00:40:46.648015 systemd[1]: Started sshd@18-10.0.0.74:22-10.0.0.1:40538.service - OpenSSH per-connection server daemon (10.0.0.1:40538). May 8 00:40:46.649034 systemd-logind[1453]: Removed session 18. May 8 00:40:46.677875 sshd[5615]: Accepted publickey for core from 10.0.0.1 port 40538 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:40:46.679631 sshd[5615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:46.683594 systemd-logind[1453]: New session 19 of user core. May 8 00:40:46.690785 systemd[1]: Started session-19.scope - Session 19 of User core. May 8 00:40:47.023968 sshd[5615]: pam_unix(sshd:session): session closed for user core May 8 00:40:47.037236 systemd[1]: sshd@18-10.0.0.74:22-10.0.0.1:40538.service: Deactivated successfully. May 8 00:40:47.039431 systemd[1]: session-19.scope: Deactivated successfully. May 8 00:40:47.041811 systemd-logind[1453]: Session 19 logged out. Waiting for processes to exit. May 8 00:40:47.048122 systemd[1]: Started sshd@19-10.0.0.74:22-10.0.0.1:40548.service - OpenSSH per-connection server daemon (10.0.0.1:40548). May 8 00:40:47.049470 systemd-logind[1453]: Removed session 19. May 8 00:40:47.085355 sshd[5633]: Accepted publickey for core from 10.0.0.1 port 40548 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:40:47.087408 sshd[5633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:47.092346 systemd-logind[1453]: New session 20 of user core. 
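The recurring kubelet dns.go:153 error records a hard cap rather than a transient failure: the node's resolv.conf lists more nameservers than the classic resolver limit of three, so the kubelet keeps the first three (the "applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") and drops the rest, re-logging the warning each time it rebuilds a pod's DNS config. A small sketch of that trimming, assuming a fourth server; the 8.8.4.4 entry is invented, since the log does not say which nameserver was omitted.

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    const maxNameservers = 3 // the resolver limit the warning refers to

    func main() {
    	// A hypothetical resolv.conf with one nameserver too many; the log
    	// shows the three that survived: 1.1.1.1 1.0.0.1 8.8.8.8.
    	resolvConf := `nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 8.8.4.4`

    	var servers []string
    	sc := bufio.NewScanner(strings.NewReader(resolvConf))
    	for sc.Scan() {
    		if f := strings.Fields(sc.Text()); len(f) == 2 && f[0] == "nameserver" {
    			servers = append(servers, f[1])
    		}
    	}
    	if len(servers) > maxNameservers {
    		fmt.Printf("Nameserver limits exceeded; applied nameserver line is: %s\n",
    			strings.Join(servers[:maxNameservers], " "))
    	}
    }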
May 8 00:40:47.098920 systemd[1]: Started session-20.scope - Session 20 of User core. May 8 00:40:49.572528 sshd[5633]: pam_unix(sshd:session): session closed for user core May 8 00:40:49.584982 systemd[1]: sshd@19-10.0.0.74:22-10.0.0.1:40548.service: Deactivated successfully. May 8 00:40:49.587169 systemd[1]: session-20.scope: Deactivated successfully. May 8 00:40:49.589010 systemd-logind[1453]: Session 20 logged out. Waiting for processes to exit. May 8 00:40:49.597174 systemd[1]: Started sshd@20-10.0.0.74:22-10.0.0.1:40554.service - OpenSSH per-connection server daemon (10.0.0.1:40554). May 8 00:40:49.598421 systemd-logind[1453]: Removed session 20. May 8 00:40:49.628952 sshd[5657]: Accepted publickey for core from 10.0.0.1 port 40554 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:40:49.630868 sshd[5657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:49.635262 systemd-logind[1453]: New session 21 of user core. May 8 00:40:49.644814 systemd[1]: Started session-21.scope - Session 21 of User core. May 8 00:40:50.105936 sshd[5657]: pam_unix(sshd:session): session closed for user core May 8 00:40:50.117294 systemd[1]: sshd@20-10.0.0.74:22-10.0.0.1:40554.service: Deactivated successfully. May 8 00:40:50.119315 systemd[1]: session-21.scope: Deactivated successfully. May 8 00:40:50.121544 systemd-logind[1453]: Session 21 logged out. Waiting for processes to exit. May 8 00:40:50.136173 systemd[1]: Started sshd@21-10.0.0.74:22-10.0.0.1:40562.service - OpenSSH per-connection server daemon (10.0.0.1:40562). May 8 00:40:50.137956 systemd-logind[1453]: Removed session 21. May 8 00:40:50.169558 sshd[5669]: Accepted publickey for core from 10.0.0.1 port 40562 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:40:50.171443 sshd[5669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:50.177901 systemd-logind[1453]: New session 22 of user core. May 8 00:40:50.181841 systemd[1]: Started session-22.scope - Session 22 of User core. May 8 00:40:50.307776 sshd[5669]: pam_unix(sshd:session): session closed for user core May 8 00:40:50.312758 systemd[1]: sshd@21-10.0.0.74:22-10.0.0.1:40562.service: Deactivated successfully. May 8 00:40:50.315201 systemd[1]: session-22.scope: Deactivated successfully. May 8 00:40:50.316356 systemd-logind[1453]: Session 22 logged out. Waiting for processes to exit. May 8 00:40:50.317821 systemd-logind[1453]: Removed session 22. May 8 00:40:51.928378 kubelet[2571]: E0508 00:40:51.928333 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:55.326055 systemd[1]: Started sshd@22-10.0.0.74:22-10.0.0.1:40570.service - OpenSSH per-connection server daemon (10.0.0.1:40570). May 8 00:40:55.362322 sshd[5687]: Accepted publickey for core from 10.0.0.1 port 40570 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:40:55.364931 sshd[5687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:55.369712 systemd-logind[1453]: New session 23 of user core. May 8 00:40:55.377817 systemd[1]: Started session-23.scope - Session 23 of User core. May 8 00:40:55.509897 sshd[5687]: pam_unix(sshd:session): session closed for user core May 8 00:40:55.514448 systemd[1]: sshd@22-10.0.0.74:22-10.0.0.1:40570.service: Deactivated successfully. 
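Each SSH connection above gets its own socket-activated unit (sshd@<n>-<local>:22-<peer>:<port>.service), and every "Accepted publickey" line identifies the client key by the same fingerprint string, "SHA256:" followed by the unpadded base64 of the SHA-256 digest of the wire-format public key. A sketch of how that fingerprint format is computed with golang.org/x/crypto/ssh, using a throwaway Ed25519 key; the key in the log is RSA and its material is not reproduced here.

    package main

    import (
    	"crypto/ed25519"
    	"crypto/rand"
    	"fmt"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Generate a throwaway key pair; sshd prints the same style of
    	// fingerprint for the client's key on each "Accepted publickey" line.
    	pub, _, err := ed25519.GenerateKey(rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	sshPub, err := ssh.NewPublicKey(pub)
    	if err != nil {
    		panic(err)
    	}
    	// FingerprintSHA256 yields "SHA256:" + unpadded base64(sha256(key blob)),
    	// the same shape as "SHA256:ekllvh..." in the log.
    	fmt.Println(ssh.FingerprintSHA256(sshPub))
    }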
May 8 00:40:55.516885 systemd[1]: session-23.scope: Deactivated successfully. May 8 00:40:55.517549 systemd-logind[1453]: Session 23 logged out. Waiting for processes to exit. May 8 00:40:55.518592 systemd-logind[1453]: Removed session 23. May 8 00:40:58.928548 kubelet[2571]: E0508 00:40:58.928494 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:41:00.521741 systemd[1]: Started sshd@23-10.0.0.74:22-10.0.0.1:43350.service - OpenSSH per-connection server daemon (10.0.0.1:43350). May 8 00:41:00.558735 sshd[5704]: Accepted publickey for core from 10.0.0.1 port 43350 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:41:00.560450 sshd[5704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:41:00.564367 systemd-logind[1453]: New session 24 of user core. May 8 00:41:00.580815 systemd[1]: Started session-24.scope - Session 24 of User core. May 8 00:41:00.720327 sshd[5704]: pam_unix(sshd:session): session closed for user core May 8 00:41:00.725157 systemd[1]: sshd@23-10.0.0.74:22-10.0.0.1:43350.service: Deactivated successfully. May 8 00:41:00.727464 systemd[1]: session-24.scope: Deactivated successfully. May 8 00:41:00.728203 systemd-logind[1453]: Session 24 logged out. Waiting for processes to exit. May 8 00:41:00.729071 systemd-logind[1453]: Removed session 24. May 8 00:41:00.928545 kubelet[2571]: E0508 00:41:00.928374 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:41:05.731825 systemd[1]: Started sshd@24-10.0.0.74:22-10.0.0.1:43354.service - OpenSSH per-connection server daemon (10.0.0.1:43354). May 8 00:41:05.767160 sshd[5719]: Accepted publickey for core from 10.0.0.1 port 43354 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:41:05.768823 sshd[5719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:41:05.773786 systemd-logind[1453]: New session 25 of user core. May 8 00:41:05.780837 systemd[1]: Started session-25.scope - Session 25 of User core. May 8 00:41:05.896003 sshd[5719]: pam_unix(sshd:session): session closed for user core May 8 00:41:05.900458 systemd[1]: sshd@24-10.0.0.74:22-10.0.0.1:43354.service: Deactivated successfully. May 8 00:41:05.902455 systemd[1]: session-25.scope: Deactivated successfully. May 8 00:41:05.903081 systemd-logind[1453]: Session 25 logged out. Waiting for processes to exit. May 8 00:41:05.904082 systemd-logind[1453]: Removed session 25. May 8 00:41:10.908508 systemd[1]: Started sshd@25-10.0.0.74:22-10.0.0.1:38102.service - OpenSSH per-connection server daemon (10.0.0.1:38102). May 8 00:41:10.946251 sshd[5740]: Accepted publickey for core from 10.0.0.1 port 38102 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:41:10.948400 sshd[5740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:41:10.953172 systemd-logind[1453]: New session 26 of user core. May 8 00:41:10.961850 systemd[1]: Started session-26.scope - Session 26 of User core. May 8 00:41:11.082921 sshd[5740]: pam_unix(sshd:session): session closed for user core May 8 00:41:11.087394 systemd[1]: sshd@25-10.0.0.74:22-10.0.0.1:38102.service: Deactivated successfully. May 8 00:41:11.090111 systemd[1]: session-26.scope: Deactivated successfully. 
May 8 00:41:11.090907 systemd-logind[1453]: Session 26 logged out. Waiting for processes to exit. May 8 00:41:11.092548 systemd-logind[1453]: Removed session 26.
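The sshd/systemd-logind pairs above (sessions 17 through 26) follow one template: Accepted publickey, pam_unix session opened, "New session N of user core", a session-N.scope unit, then the matching logout and "Removed session N". Lines in exactly that shape can be paired mechanically to recover session lifetimes, as in the sketch below; note that journald's short timestamps carry no year, so only same-day durations are meaningful, and the two sample sessions are trimmed verbatim from the log above.

    package main

    import (
    	"fmt"
    	"regexp"
    	"time"
    )

    // Pair systemd-logind "New session N" entries with their logout lines and
    // print how long each session lasted.
    func main() {
    	lines := []string{
    		"May 8 00:40:41.290966 systemd-logind[1453]: New session 17 of user core.",
    		"May 8 00:40:41.451841 systemd-logind[1453]: Removed session 17.",
    		"May 8 00:40:46.497044 systemd-logind[1453]: New session 18 of user core.",
    		"May 8 00:40:46.634050 systemd-logind[1453]: Session 18 logged out. Waiting for processes to exit.",
    	}
    	re := regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) .*?(New session|Removed session|Session) (\d+)`)
    	opened := map[string]time.Time{}
    	for _, l := range lines {
    		m := re.FindStringSubmatch(l)
    		if m == nil {
    			continue
    		}
    		// No year in the timestamp: parses into year 0, which is fine
    		// for same-day subtraction.
    		ts, err := time.Parse("Jan 2 15:04:05.999999", m[1])
    		if err != nil {
    			continue
    		}
    		if m[2] == "New session" {
    			opened[m[3]] = ts
    		} else if start, ok := opened[m[3]]; ok {
    			fmt.Printf("session %s lasted %s\n", m[3], ts.Sub(start)) // 160.875ms, 137.006ms
    		}
    	}
    }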