May 8 00:46:55.902600 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed May 7 22:54:21 -00 2025 May 8 00:46:55.902622 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90 May 8 00:46:55.902633 kernel: BIOS-provided physical RAM map: May 8 00:46:55.902639 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 8 00:46:55.902645 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable May 8 00:46:55.902651 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 8 00:46:55.902658 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable May 8 00:46:55.902665 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 8 00:46:55.902671 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable May 8 00:46:55.902677 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS May 8 00:46:55.902686 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable May 8 00:46:55.902692 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved May 8 00:46:55.902699 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 May 8 00:46:55.902705 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved May 8 00:46:55.902713 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data May 8 00:46:55.902782 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 8 00:46:55.902791 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable May 8 00:46:55.902798 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved May 8 00:46:55.902805 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 8 00:46:55.902811 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 8 00:46:55.902818 kernel: NX (Execute Disable) protection: active May 8 00:46:55.902825 kernel: APIC: Static calls initialized May 8 00:46:55.902831 kernel: efi: EFI v2.7 by EDK II May 8 00:46:55.902838 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 May 8 00:46:55.902845 kernel: SMBIOS 2.8 present. 
May 8 00:46:55.902851 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 May 8 00:46:55.902858 kernel: Hypervisor detected: KVM May 8 00:46:55.902867 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 8 00:46:55.902874 kernel: kvm-clock: using sched offset of 4309281370 cycles May 8 00:46:55.902881 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 8 00:46:55.902888 kernel: tsc: Detected 2794.748 MHz processor May 8 00:46:55.902895 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 8 00:46:55.902902 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 8 00:46:55.902909 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 May 8 00:46:55.902916 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs May 8 00:46:55.902923 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 8 00:46:55.902932 kernel: Using GB pages for direct mapping May 8 00:46:55.902939 kernel: Secure boot disabled May 8 00:46:55.902946 kernel: ACPI: Early table checksum verification disabled May 8 00:46:55.902961 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) May 8 00:46:55.902971 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) May 8 00:46:55.902979 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:46:55.902986 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:46:55.902995 kernel: ACPI: FACS 0x000000009CBDD000 000040 May 8 00:46:55.903003 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:46:55.903010 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:46:55.903017 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:46:55.903024 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:46:55.903031 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) May 8 00:46:55.903038 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] May 8 00:46:55.903048 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] May 8 00:46:55.903055 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] May 8 00:46:55.903062 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] May 8 00:46:55.903069 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] May 8 00:46:55.903076 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] May 8 00:46:55.903083 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] May 8 00:46:55.903090 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] May 8 00:46:55.903097 kernel: No NUMA configuration found May 8 00:46:55.903104 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] May 8 00:46:55.903115 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] May 8 00:46:55.903122 kernel: Zone ranges: May 8 00:46:55.903129 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 8 00:46:55.903136 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] May 8 00:46:55.903143 kernel: Normal empty May 8 00:46:55.903150 kernel: Movable zone start for each node May 8 00:46:55.903157 kernel: Early memory node ranges May 8 00:46:55.903164 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] May 8 00:46:55.903171 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] May 8 00:46:55.903178 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] May 8 00:46:55.903188 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] May 8 00:46:55.903196 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] May 8 00:46:55.903203 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] May 8 00:46:55.903210 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] May 8 00:46:55.903217 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 8 00:46:55.903224 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 8 00:46:55.903231 kernel: On node 0, zone DMA: 8 pages in unavailable ranges May 8 00:46:55.903238 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 8 00:46:55.903246 kernel: On node 0, zone DMA: 240 pages in unavailable ranges May 8 00:46:55.903255 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges May 8 00:46:55.903262 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges May 8 00:46:55.903269 kernel: ACPI: PM-Timer IO Port: 0x608 May 8 00:46:55.903277 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 8 00:46:55.903284 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 8 00:46:55.903291 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 8 00:46:55.903298 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 8 00:46:55.903305 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 8 00:46:55.903313 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 8 00:46:55.903322 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 8 00:46:55.903329 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 8 00:46:55.903337 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 8 00:46:55.903344 kernel: TSC deadline timer available May 8 00:46:55.903351 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs May 8 00:46:55.903358 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 8 00:46:55.903365 kernel: kvm-guest: KVM setup pv remote TLB flush May 8 00:46:55.903372 kernel: kvm-guest: setup PV sched yield May 8 00:46:55.903379 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices May 8 00:46:55.903386 kernel: Booting paravirtualized kernel on KVM May 8 00:46:55.903396 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 8 00:46:55.903404 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 May 8 00:46:55.903411 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288 May 8 00:46:55.903418 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152 May 8 00:46:55.903425 kernel: pcpu-alloc: [0] 0 1 2 3 May 8 00:46:55.903432 kernel: kvm-guest: PV spinlocks enabled May 8 00:46:55.903439 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 8 00:46:55.903449 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90 May 8 00:46:55.903458 kernel: Unknown kernel command line 
parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 8 00:46:55.903466 kernel: random: crng init done May 8 00:46:55.903473 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 8 00:46:55.903480 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 8 00:46:55.903487 kernel: Fallback order for Node 0: 0 May 8 00:46:55.903495 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 May 8 00:46:55.903502 kernel: Policy zone: DMA32 May 8 00:46:55.903509 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 8 00:46:55.903516 kernel: Memory: 2400596K/2567000K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42856K init, 2336K bss, 166144K reserved, 0K cma-reserved) May 8 00:46:55.903526 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 8 00:46:55.903533 kernel: ftrace: allocating 37944 entries in 149 pages May 8 00:46:55.903540 kernel: ftrace: allocated 149 pages with 4 groups May 8 00:46:55.903548 kernel: Dynamic Preempt: voluntary May 8 00:46:55.903563 kernel: rcu: Preemptible hierarchical RCU implementation. May 8 00:46:55.903577 kernel: rcu: RCU event tracing is enabled. May 8 00:46:55.903585 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 8 00:46:55.903592 kernel: Trampoline variant of Tasks RCU enabled. May 8 00:46:55.903600 kernel: Rude variant of Tasks RCU enabled. May 8 00:46:55.903608 kernel: Tracing variant of Tasks RCU enabled. May 8 00:46:55.903615 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 8 00:46:55.903623 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 8 00:46:55.903632 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 8 00:46:55.903640 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 8 00:46:55.903647 kernel: Console: colour dummy device 80x25 May 8 00:46:55.903655 kernel: printk: console [ttyS0] enabled May 8 00:46:55.903662 kernel: ACPI: Core revision 20230628 May 8 00:46:55.903672 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 8 00:46:55.903680 kernel: APIC: Switch to symmetric I/O mode setup May 8 00:46:55.903687 kernel: x2apic enabled May 8 00:46:55.903695 kernel: APIC: Switched APIC routing to: physical x2apic May 8 00:46:55.903702 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() May 8 00:46:55.903710 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() May 8 00:46:55.903728 kernel: kvm-guest: setup PV IPIs May 8 00:46:55.903736 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 8 00:46:55.903744 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 8 00:46:55.903754 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) May 8 00:46:55.903761 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 8 00:46:55.903769 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 8 00:46:55.903776 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 8 00:46:55.903784 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 8 00:46:55.903791 kernel: Spectre V2 : Mitigation: Retpolines May 8 00:46:55.903799 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch May 8 00:46:55.903806 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT May 8 00:46:55.903814 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 8 00:46:55.903824 kernel: RETBleed: Mitigation: untrained return thunk May 8 00:46:55.903832 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 8 00:46:55.903839 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 8 00:46:55.903847 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! May 8 00:46:55.903855 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. May 8 00:46:55.903863 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode May 8 00:46:55.903870 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 8 00:46:55.903878 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 8 00:46:55.903887 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 8 00:46:55.903895 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 8 00:46:55.903902 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. May 8 00:46:55.903910 kernel: Freeing SMP alternatives memory: 32K May 8 00:46:55.903917 kernel: pid_max: default: 32768 minimum: 301 May 8 00:46:55.903925 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 8 00:46:55.903932 kernel: landlock: Up and running. May 8 00:46:55.903940 kernel: SELinux: Initializing. May 8 00:46:55.903947 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 8 00:46:55.903964 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 8 00:46:55.903972 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 8 00:46:55.903979 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 8 00:46:55.903987 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 8 00:46:55.903995 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 8 00:46:55.904002 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 8 00:46:55.904009 kernel: ... version: 0 May 8 00:46:55.904017 kernel: ... bit width: 48 May 8 00:46:55.904024 kernel: ... generic registers: 6 May 8 00:46:55.904034 kernel: ... value mask: 0000ffffffffffff May 8 00:46:55.904041 kernel: ... max period: 00007fffffffffff May 8 00:46:55.904049 kernel: ... fixed-purpose events: 0 May 8 00:46:55.904056 kernel: ... event mask: 000000000000003f May 8 00:46:55.904064 kernel: signal: max sigframe size: 1776 May 8 00:46:55.904071 kernel: rcu: Hierarchical SRCU implementation. 
May 8 00:46:55.904079 kernel: rcu: Max phase no-delay instances is 400. May 8 00:46:55.904086 kernel: smp: Bringing up secondary CPUs ... May 8 00:46:55.904094 kernel: smpboot: x86: Booting SMP configuration: May 8 00:46:55.904103 kernel: .... node #0, CPUs: #1 #2 #3 May 8 00:46:55.904111 kernel: smp: Brought up 1 node, 4 CPUs May 8 00:46:55.904118 kernel: smpboot: Max logical packages: 1 May 8 00:46:55.904125 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 8 00:46:55.904133 kernel: devtmpfs: initialized May 8 00:46:55.904140 kernel: x86/mm: Memory block size: 128MB May 8 00:46:55.904148 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) May 8 00:46:55.904155 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) May 8 00:46:55.904163 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) May 8 00:46:55.904173 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) May 8 00:46:55.904181 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) May 8 00:46:55.904188 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 8 00:46:55.904196 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 8 00:46:55.904203 kernel: pinctrl core: initialized pinctrl subsystem May 8 00:46:55.904211 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 8 00:46:55.904218 kernel: audit: initializing netlink subsys (disabled) May 8 00:46:55.904226 kernel: audit: type=2000 audit(1746665215.854:1): state=initialized audit_enabled=0 res=1 May 8 00:46:55.904233 kernel: thermal_sys: Registered thermal governor 'step_wise' May 8 00:46:55.904244 kernel: thermal_sys: Registered thermal governor 'user_space' May 8 00:46:55.904251 kernel: cpuidle: using governor menu May 8 00:46:55.904258 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 8 00:46:55.904266 kernel: dca service started, version 1.12.1 May 8 00:46:55.904273 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 8 00:46:55.904281 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry May 8 00:46:55.904289 kernel: PCI: Using configuration type 1 for base access May 8 00:46:55.904296 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 8 00:46:55.904304 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 8 00:46:55.904314 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 8 00:46:55.904321 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 8 00:46:55.904328 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 8 00:46:55.904336 kernel: ACPI: Added _OSI(Module Device) May 8 00:46:55.904343 kernel: ACPI: Added _OSI(Processor Device) May 8 00:46:55.904351 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 8 00:46:55.904358 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 8 00:46:55.904366 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 8 00:46:55.904373 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 8 00:46:55.904383 kernel: ACPI: Interpreter enabled May 8 00:46:55.904390 kernel: ACPI: PM: (supports S0 S3 S5) May 8 00:46:55.904398 kernel: ACPI: Using IOAPIC for interrupt routing May 8 00:46:55.904405 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 8 00:46:55.904413 kernel: PCI: Using E820 reservations for host bridge windows May 8 00:46:55.904420 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 8 00:46:55.904428 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 8 00:46:55.904603 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 8 00:46:55.904839 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 8 00:46:55.904988 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 8 00:46:55.904999 kernel: PCI host bridge to bus 0000:00 May 8 00:46:55.905124 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 8 00:46:55.905235 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 8 00:46:55.905345 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 8 00:46:55.905457 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] May 8 00:46:55.905574 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 8 00:46:55.905685 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] May 8 00:46:55.905812 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 8 00:46:55.905951 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 8 00:46:55.906114 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 8 00:46:55.906308 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] May 8 00:46:55.906495 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] May 8 00:46:55.906640 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] May 8 00:46:55.906784 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb May 8 00:46:55.906906 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 8 00:46:55.907051 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 May 8 00:46:55.907176 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] May 8 00:46:55.907298 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] May 8 00:46:55.907424 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] May 8 00:46:55.907559 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 May 8 00:46:55.907742 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] May 8 00:46:55.907896 kernel: pci 0000:00:03.0: reg 0x14: 
[mem 0xc1042000-0xc1042fff] May 8 00:46:55.908028 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] May 8 00:46:55.908160 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 8 00:46:55.908282 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] May 8 00:46:55.908408 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] May 8 00:46:55.908527 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] May 8 00:46:55.908704 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] May 8 00:46:55.908855 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 8 00:46:55.908987 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 8 00:46:55.909117 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 8 00:46:55.909244 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] May 8 00:46:55.909365 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] May 8 00:46:55.909496 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 8 00:46:55.909617 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] May 8 00:46:55.909627 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 8 00:46:55.909635 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 8 00:46:55.909642 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 8 00:46:55.909650 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 8 00:46:55.909661 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 8 00:46:55.909669 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 8 00:46:55.909676 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 8 00:46:55.909684 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 8 00:46:55.909692 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 8 00:46:55.909699 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 8 00:46:55.909707 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 8 00:46:55.909714 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 8 00:46:55.909735 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 8 00:46:55.909745 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 8 00:46:55.909753 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 8 00:46:55.909760 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 8 00:46:55.909767 kernel: iommu: Default domain type: Translated May 8 00:46:55.909775 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 8 00:46:55.909783 kernel: efivars: Registered efivars operations May 8 00:46:55.909790 kernel: PCI: Using ACPI for IRQ routing May 8 00:46:55.909797 kernel: PCI: pci_cache_line_size set to 64 bytes May 8 00:46:55.909805 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] May 8 00:46:55.909814 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] May 8 00:46:55.909822 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] May 8 00:46:55.909829 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] May 8 00:46:55.909961 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 8 00:46:55.910083 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 8 00:46:55.910204 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 8 00:46:55.910214 kernel: vgaarb: loaded May 8 00:46:55.910221 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 
8, 0 May 8 00:46:55.910229 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 8 00:46:55.910240 kernel: clocksource: Switched to clocksource kvm-clock May 8 00:46:55.910248 kernel: VFS: Disk quotas dquot_6.6.0 May 8 00:46:55.910255 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 8 00:46:55.910263 kernel: pnp: PnP ACPI init May 8 00:46:55.910394 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved May 8 00:46:55.910405 kernel: pnp: PnP ACPI: found 6 devices May 8 00:46:55.910413 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 8 00:46:55.910421 kernel: NET: Registered PF_INET protocol family May 8 00:46:55.910432 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 8 00:46:55.910439 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 8 00:46:55.910447 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 8 00:46:55.910455 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 8 00:46:55.910463 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 8 00:46:55.910470 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 8 00:46:55.910478 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 8 00:46:55.910485 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 8 00:46:55.910493 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 8 00:46:55.910502 kernel: NET: Registered PF_XDP protocol family May 8 00:46:55.910625 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window May 8 00:46:55.910815 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] May 8 00:46:55.910930 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 8 00:46:55.911049 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 8 00:46:55.911158 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 8 00:46:55.911267 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] May 8 00:46:55.911375 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 8 00:46:55.911488 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] May 8 00:46:55.911499 kernel: PCI: CLS 0 bytes, default 64 May 8 00:46:55.911506 kernel: Initialise system trusted keyrings May 8 00:46:55.911514 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 8 00:46:55.911522 kernel: Key type asymmetric registered May 8 00:46:55.911529 kernel: Asymmetric key parser 'x509' registered May 8 00:46:55.911537 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 8 00:46:55.911544 kernel: io scheduler mq-deadline registered May 8 00:46:55.911555 kernel: io scheduler kyber registered May 8 00:46:55.911562 kernel: io scheduler bfq registered May 8 00:46:55.911570 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 8 00:46:55.911578 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 8 00:46:55.911585 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 8 00:46:55.911593 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 8 00:46:55.911601 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 8 00:46:55.911608 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 8 00:46:55.911616 kernel: i8042: PNP: PS/2 
Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 8 00:46:55.911623 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 8 00:46:55.911633 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 8 00:46:55.911776 kernel: rtc_cmos 00:04: RTC can wake from S4 May 8 00:46:55.911892 kernel: rtc_cmos 00:04: registered as rtc0 May 8 00:46:55.911903 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 8 00:46:55.912028 kernel: rtc_cmos 00:04: setting system clock to 2025-05-08T00:46:55 UTC (1746665215) May 8 00:46:55.912140 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 8 00:46:55.912150 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 8 00:46:55.912162 kernel: efifb: probing for efifb May 8 00:46:55.912169 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k May 8 00:46:55.912177 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 May 8 00:46:55.912185 kernel: efifb: scrolling: redraw May 8 00:46:55.912192 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 May 8 00:46:55.912200 kernel: Console: switching to colour frame buffer device 100x37 May 8 00:46:55.912227 kernel: fb0: EFI VGA frame buffer device May 8 00:46:55.912237 kernel: pstore: Using crash dump compression: deflate May 8 00:46:55.912244 kernel: pstore: Registered efi_pstore as persistent store backend May 8 00:46:55.912254 kernel: NET: Registered PF_INET6 protocol family May 8 00:46:55.912262 kernel: Segment Routing with IPv6 May 8 00:46:55.912270 kernel: In-situ OAM (IOAM) with IPv6 May 8 00:46:55.912277 kernel: NET: Registered PF_PACKET protocol family May 8 00:46:55.912285 kernel: Key type dns_resolver registered May 8 00:46:55.912293 kernel: IPI shorthand broadcast: enabled May 8 00:46:55.912301 kernel: sched_clock: Marking stable (598002636, 119302401)->(732497465, -15192428) May 8 00:46:55.912309 kernel: registered taskstats version 1 May 8 00:46:55.912317 kernel: Loading compiled-in X.509 certificates May 8 00:46:55.912325 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 75e4e434c57439d3f2eaf7797bbbcdd698dafd0e' May 8 00:46:55.912335 kernel: Key type .fscrypt registered May 8 00:46:55.912343 kernel: Key type fscrypt-provisioning registered May 8 00:46:55.912350 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 8 00:46:55.912358 kernel: ima: Allocated hash algorithm: sha1 May 8 00:46:55.912366 kernel: ima: No architecture policies found May 8 00:46:55.912374 kernel: clk: Disabling unused clocks May 8 00:46:55.912381 kernel: Freeing unused kernel image (initmem) memory: 42856K May 8 00:46:55.912389 kernel: Write protecting the kernel read-only data: 36864k May 8 00:46:55.912399 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K May 8 00:46:55.912407 kernel: Run /init as init process May 8 00:46:55.912414 kernel: with arguments: May 8 00:46:55.912422 kernel: /init May 8 00:46:55.912430 kernel: with environment: May 8 00:46:55.912437 kernel: HOME=/ May 8 00:46:55.912447 kernel: TERM=linux May 8 00:46:55.912455 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 8 00:46:55.912466 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 8 00:46:55.912478 systemd[1]: Detected virtualization kvm. May 8 00:46:55.912487 systemd[1]: Detected architecture x86-64. May 8 00:46:55.912495 systemd[1]: Running in initrd. May 8 00:46:55.912506 systemd[1]: No hostname configured, using default hostname. May 8 00:46:55.912516 systemd[1]: Hostname set to <localhost>. May 8 00:46:55.912525 systemd[1]: Initializing machine ID from VM UUID. May 8 00:46:55.912533 systemd[1]: Queued start job for default target initrd.target. May 8 00:46:55.912541 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:46:55.912550 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:46:55.912559 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 8 00:46:55.912567 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 00:46:55.912576 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 8 00:46:55.912587 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 8 00:46:55.912597 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 8 00:46:55.912606 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 8 00:46:55.912614 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:46:55.912622 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:46:55.912631 systemd[1]: Reached target paths.target - Path Units. May 8 00:46:55.912639 systemd[1]: Reached target slices.target - Slice Units. May 8 00:46:55.912650 systemd[1]: Reached target swap.target - Swaps. May 8 00:46:55.912658 systemd[1]: Reached target timers.target - Timer Units. May 8 00:46:55.912666 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:46:55.912674 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:46:55.912683 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 8 00:46:55.912691 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 8 00:46:55.912699 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:46:55.912708 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:46:55.912731 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:46:55.912739 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:46:55.912748 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 8 00:46:55.912756 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:46:55.912764 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 8 00:46:55.912772 systemd[1]: Starting systemd-fsck-usr.service... May 8 00:46:55.912781 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:46:55.912789 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:46:55.912797 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:46:55.912808 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 8 00:46:55.912816 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:46:55.912843 systemd-journald[192]: Collecting audit messages is disabled. May 8 00:46:55.912861 systemd[1]: Finished systemd-fsck-usr.service. May 8 00:46:55.912873 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 8 00:46:55.912882 systemd-journald[192]: Journal started May 8 00:46:55.912902 systemd-journald[192]: Runtime Journal (/run/log/journal/fd5ed05e5ee049439ac3d1d35ee368e2) is 6.0M, max 48.3M, 42.2M free. May 8 00:46:55.906581 systemd-modules-load[194]: Inserted module 'overlay' May 8 00:46:55.916482 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:46:55.917016 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:46:55.919503 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 00:46:55.933747 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 8 00:46:55.936100 systemd-modules-load[194]: Inserted module 'br_netfilter' May 8 00:46:55.937096 kernel: Bridge firewalling registered May 8 00:46:55.937202 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:46:55.940302 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:46:55.943304 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 00:46:55.945941 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:46:55.949448 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:46:55.954698 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:46:55.958644 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:46:55.960019 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:46:55.961270 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 8 00:46:55.970929 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 8 00:46:55.972831 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:46:55.978215 dracut-cmdline[226]: dracut-dracut-053 May 8 00:46:55.987585 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90 May 8 00:46:56.020450 systemd-resolved[233]: Positive Trust Anchors: May 8 00:46:56.020466 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:46:56.020497 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:46:56.022949 systemd-resolved[233]: Defaulting to hostname 'linux'. May 8 00:46:56.023985 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:46:56.030032 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:46:56.085755 kernel: SCSI subsystem initialized May 8 00:46:56.094740 kernel: Loading iSCSI transport class v2.0-870. May 8 00:46:56.104743 kernel: iscsi: registered transport (tcp) May 8 00:46:56.127760 kernel: iscsi: registered transport (qla4xxx) May 8 00:46:56.127808 kernel: QLogic iSCSI HBA Driver May 8 00:46:56.183668 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 8 00:46:56.194887 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 8 00:46:56.220769 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 8 00:46:56.220835 kernel: device-mapper: uevent: version 1.0.3 May 8 00:46:56.220847 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 8 00:46:56.264758 kernel: raid6: avx2x4 gen() 29791 MB/s May 8 00:46:56.281751 kernel: raid6: avx2x2 gen() 30232 MB/s May 8 00:46:56.298851 kernel: raid6: avx2x1 gen() 25605 MB/s May 8 00:46:56.298885 kernel: raid6: using algorithm avx2x2 gen() 30232 MB/s May 8 00:46:56.316859 kernel: raid6: .... xor() 19894 MB/s, rmw enabled May 8 00:46:56.316897 kernel: raid6: using avx2x2 recovery algorithm May 8 00:46:56.336748 kernel: xor: automatically using best checksumming function avx May 8 00:46:56.495761 kernel: Btrfs loaded, zoned=no, fsverity=no May 8 00:46:56.509168 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 8 00:46:56.525057 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:46:56.536526 systemd-udevd[413]: Using default interface naming scheme 'v255'. May 8 00:46:56.540756 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:46:56.550896 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
May 8 00:46:56.566490 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation May 8 00:46:56.599066 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:46:56.612879 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:46:56.679222 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:46:56.691008 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 8 00:46:56.704072 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 8 00:46:56.708218 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 8 00:46:56.711811 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:46:56.717273 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 8 00:46:56.745575 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 8 00:46:56.745775 kernel: cryptd: max_cpu_qlen set to 1000 May 8 00:46:56.745788 kernel: libata version 3.00 loaded. May 8 00:46:56.745799 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 8 00:46:56.745809 kernel: GPT:9289727 != 19775487 May 8 00:46:56.745827 kernel: GPT:Alternate GPT header not at the end of the disk. May 8 00:46:56.745837 kernel: GPT:9289727 != 19775487 May 8 00:46:56.745847 kernel: GPT: Use GNU Parted to correct GPT errors. May 8 00:46:56.745857 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:46:56.714978 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:46:56.731039 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 8 00:46:56.745959 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 8 00:46:56.753296 kernel: ahci 0000:00:1f.2: version 3.0 May 8 00:46:56.791576 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 8 00:46:56.791596 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 8 00:46:56.791764 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 8 00:46:56.791909 kernel: scsi host0: ahci May 8 00:46:56.792067 kernel: AVX2 version of gcm_enc/dec engaged. May 8 00:46:56.792079 kernel: AES CTR mode by8 optimization enabled May 8 00:46:56.792089 kernel: scsi host1: ahci May 8 00:46:56.792235 kernel: scsi host2: ahci May 8 00:46:56.792374 kernel: scsi host3: ahci May 8 00:46:56.792517 kernel: scsi host4: ahci May 8 00:46:56.792656 kernel: scsi host5: ahci May 8 00:46:56.792818 kernel: BTRFS: device fsid 28014d97-e6d7-4db4-b1d9-76a980e09972 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (458) May 8 00:46:56.792829 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 May 8 00:46:56.792840 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 May 8 00:46:56.792850 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 May 8 00:46:56.792860 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 May 8 00:46:56.792870 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 May 8 00:46:56.792884 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 May 8 00:46:56.753319 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:46:56.753486 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 8 00:46:56.755299 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:46:56.800361 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (472) May 8 00:46:56.759651 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:46:56.759823 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:46:56.761435 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:46:56.768928 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:46:56.807370 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 8 00:46:56.810409 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:46:56.821925 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 8 00:46:56.828427 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 8 00:46:56.831889 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 8 00:46:56.839191 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 8 00:46:56.851913 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 8 00:46:56.859239 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:46:56.860433 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:46:56.863393 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:46:56.866558 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:46:56.871573 disk-uuid[552]: Primary Header is updated. May 8 00:46:56.871573 disk-uuid[552]: Secondary Entries is updated. May 8 00:46:56.871573 disk-uuid[552]: Secondary Header is updated. May 8 00:46:56.874807 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:46:56.886432 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:46:56.896896 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:46:56.919287 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 8 00:46:57.100756 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 8 00:46:57.100841 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 8 00:46:57.101744 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 8 00:46:57.101759 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 8 00:46:57.102757 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 8 00:46:57.103746 kernel: ata3.00: applying bridge limits May 8 00:46:57.103760 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 8 00:46:57.104740 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 8 00:46:57.105746 kernel: ata3.00: configured for UDMA/100 May 8 00:46:57.107748 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 8 00:46:57.152287 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 8 00:46:57.164317 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 8 00:46:57.164335 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 8 00:46:57.881744 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:46:57.882219 disk-uuid[554]: The operation has completed successfully. May 8 00:46:57.908266 systemd[1]: disk-uuid.service: Deactivated successfully. May 8 00:46:57.908411 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 8 00:46:57.936888 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 8 00:46:57.940658 sh[597]: Success May 8 00:46:57.952764 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 8 00:46:57.985591 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 8 00:46:57.999108 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 8 00:46:58.005033 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 8 00:46:58.015244 kernel: BTRFS info (device dm-0): first mount of filesystem 28014d97-e6d7-4db4-b1d9-76a980e09972 May 8 00:46:58.015310 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 8 00:46:58.015326 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 8 00:46:58.016284 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 8 00:46:58.017042 kernel: BTRFS info (device dm-0): using free space tree May 8 00:46:58.022133 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 8 00:46:58.023807 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 8 00:46:58.036836 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 8 00:46:58.038506 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 8 00:46:58.047446 kernel: BTRFS info (device vda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:46:58.047484 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:46:58.047495 kernel: BTRFS info (device vda6): using free space tree May 8 00:46:58.050748 kernel: BTRFS info (device vda6): auto enabling async discard May 8 00:46:58.059680 systemd[1]: mnt-oem.mount: Deactivated successfully. May 8 00:46:58.061551 kernel: BTRFS info (device vda6): last unmount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:46:58.073786 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
May 8 00:46:58.082930 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 8 00:46:58.132584 ignition[695]: Ignition 2.19.0 May 8 00:46:58.132595 ignition[695]: Stage: fetch-offline May 8 00:46:58.132632 ignition[695]: no configs at "/usr/lib/ignition/base.d" May 8 00:46:58.132642 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:46:58.132775 ignition[695]: parsed url from cmdline: "" May 8 00:46:58.132779 ignition[695]: no config URL provided May 8 00:46:58.132785 ignition[695]: reading system config file "/usr/lib/ignition/user.ign" May 8 00:46:58.132794 ignition[695]: no config at "/usr/lib/ignition/user.ign" May 8 00:46:58.132821 ignition[695]: op(1): [started] loading QEMU firmware config module May 8 00:46:58.132826 ignition[695]: op(1): executing: "modprobe" "qemu_fw_cfg" May 8 00:46:58.141193 ignition[695]: op(1): [finished] loading QEMU firmware config module May 8 00:46:58.153106 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:46:58.166857 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 00:46:58.184582 ignition[695]: parsing config with SHA512: c3dba0e0f08f245513a7b123969bf0bedea28bf1cf509fc5af952cda94de45a08803aebd9dc9780897028fa46ca00150b4b9ad2b10a8da7314fb7c4002a6e7b0 May 8 00:46:58.187732 systemd-networkd[785]: lo: Link UP May 8 00:46:58.187739 systemd-networkd[785]: lo: Gained carrier May 8 00:46:58.189237 systemd-networkd[785]: Enumeration completed May 8 00:46:58.189311 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:46:58.189621 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:46:58.189625 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:46:58.191397 systemd-networkd[785]: eth0: Link UP May 8 00:46:58.191401 systemd-networkd[785]: eth0: Gained carrier May 8 00:46:58.191408 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:46:58.343751 systemd[1]: Reached target network.target - Network. May 8 00:46:58.357128 unknown[695]: fetched base config from "system" May 8 00:46:58.357140 unknown[695]: fetched user config from "qemu" May 8 00:46:58.357767 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.152/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 00:46:58.360362 ignition[695]: fetch-offline: fetch-offline passed May 8 00:46:58.361229 ignition[695]: Ignition finished successfully May 8 00:46:58.364104 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:46:58.364715 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 8 00:46:58.372921 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 8 00:46:58.386937 ignition[789]: Ignition 2.19.0 May 8 00:46:58.386948 ignition[789]: Stage: kargs May 8 00:46:58.387110 ignition[789]: no configs at "/usr/lib/ignition/base.d" May 8 00:46:58.387121 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:46:58.390841 ignition[789]: kargs: kargs passed May 8 00:46:58.390889 ignition[789]: Ignition finished successfully May 8 00:46:58.395070 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
May 8 00:46:58.407847 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 8 00:46:58.419400 ignition[798]: Ignition 2.19.0 May 8 00:46:58.419410 ignition[798]: Stage: disks May 8 00:46:58.419562 ignition[798]: no configs at "/usr/lib/ignition/base.d" May 8 00:46:58.419572 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:46:58.423493 ignition[798]: disks: disks passed May 8 00:46:58.423539 ignition[798]: Ignition finished successfully May 8 00:46:58.426684 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 8 00:46:58.428839 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 8 00:46:58.429263 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 8 00:46:58.429595 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:46:58.430111 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:46:58.430447 systemd[1]: Reached target basic.target - Basic System. May 8 00:46:58.446840 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 8 00:46:58.461155 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 8 00:46:58.469677 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 8 00:46:58.482836 systemd[1]: Mounting sysroot.mount - /sysroot... May 8 00:46:58.568741 kernel: EXT4-fs (vda9): mounted filesystem 36960c89-ba45-4808-a41c-bf61ce9470a3 r/w with ordered data mode. Quota mode: none. May 8 00:46:58.568820 systemd[1]: Mounted sysroot.mount - /sysroot. May 8 00:46:58.571085 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 8 00:46:58.579786 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:46:58.581398 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 8 00:46:58.582523 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 8 00:46:58.582560 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 8 00:46:58.591018 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (816) May 8 00:46:58.582585 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:46:58.596705 kernel: BTRFS info (device vda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:46:58.596729 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:46:58.596740 kernel: BTRFS info (device vda6): using free space tree May 8 00:46:58.596756 kernel: BTRFS info (device vda6): auto enabling async discard May 8 00:46:58.590014 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 8 00:46:58.591677 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 8 00:46:58.598346 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 8 00:46:58.627444 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory May 8 00:46:58.631108 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory May 8 00:46:58.634694 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory May 8 00:46:58.638259 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory May 8 00:46:58.717550 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 8 00:46:58.724876 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 8 00:46:58.726581 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 8 00:46:58.732736 kernel: BTRFS info (device vda6): last unmount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:46:58.748825 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 8 00:46:59.014456 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 8 00:46:59.029327 ignition[931]: INFO : Ignition 2.19.0 May 8 00:46:59.029327 ignition[931]: INFO : Stage: mount May 8 00:46:59.031145 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:46:59.031145 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:46:59.031145 ignition[931]: INFO : mount: mount passed May 8 00:46:59.031145 ignition[931]: INFO : Ignition finished successfully May 8 00:46:59.032622 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 8 00:46:59.040828 systemd[1]: Starting ignition-files.service - Ignition (files)... May 8 00:46:59.049708 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:46:59.061261 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (944) May 8 00:46:59.061289 kernel: BTRFS info (device vda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:46:59.061300 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:46:59.062766 kernel: BTRFS info (device vda6): using free space tree May 8 00:46:59.065738 kernel: BTRFS info (device vda6): auto enabling async discard May 8 00:46:59.066621 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 8 00:46:59.087489 ignition[961]: INFO : Ignition 2.19.0 May 8 00:46:59.087489 ignition[961]: INFO : Stage: files May 8 00:46:59.089353 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:46:59.089353 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:46:59.089353 ignition[961]: DEBUG : files: compiled without relabeling support, skipping May 8 00:46:59.089353 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 8 00:46:59.089353 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 8 00:46:59.095853 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 8 00:46:59.095853 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 8 00:46:59.095853 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 8 00:46:59.095853 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 8 00:46:59.095853 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 8 00:46:59.092121 unknown[961]: wrote ssh authorized keys file for user: core May 8 00:46:59.136835 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 8 00:46:59.283646 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 8 00:46:59.283646 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 8 00:46:59.287647 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 8 00:46:59.287647 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 8 00:46:59.287647 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 8 00:46:59.287647 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:46:59.287647 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:46:59.287647 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:46:59.287647 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:46:59.287647 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:46:59.287647 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:46:59.287647 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 8 00:46:59.287647 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 8 00:46:59.287647 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 8 00:46:59.287647 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 May 8 00:46:59.770123 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 8 00:46:59.981845 systemd-networkd[785]: eth0: Gained IPv6LL May 8 00:47:00.388807 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 8 00:47:00.388807 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 8 00:47:00.393394 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:47:00.393394 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:47:00.393394 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 8 00:47:00.393394 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 8 00:47:00.393394 ignition[961]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:47:00.393394 ignition[961]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:47:00.393394 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 8 00:47:00.393394 ignition[961]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 8 00:47:00.415147 ignition[961]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 8 00:47:00.419414 ignition[961]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 8 00:47:00.421183 ignition[961]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 8 00:47:00.421183 ignition[961]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 8 00:47:00.421183 ignition[961]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 8 00:47:00.421183 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 8 00:47:00.421183 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 8 00:47:00.421183 ignition[961]: INFO : files: files passed May 8 00:47:00.421183 ignition[961]: INFO : Ignition finished successfully May 8 00:47:00.422307 systemd[1]: Finished ignition-files.service - Ignition (files). May 8 00:47:00.433831 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 8 00:47:00.436055 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
May 8 00:47:00.438372 systemd[1]: ignition-quench.service: Deactivated successfully. May 8 00:47:00.438477 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 8 00:47:00.445116 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory May 8 00:47:00.447106 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:47:00.448869 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 8 00:47:00.451660 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:47:00.449957 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:47:00.451865 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 8 00:47:00.463897 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 8 00:47:00.487606 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 8 00:47:00.487743 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 8 00:47:00.490169 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 8 00:47:00.492229 systemd[1]: Reached target initrd.target - Initrd Default Target. May 8 00:47:00.494207 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 8 00:47:00.496813 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 8 00:47:00.526079 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:47:00.535854 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 8 00:47:00.544415 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 8 00:47:00.545770 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:47:00.548024 systemd[1]: Stopped target timers.target - Timer Units. May 8 00:47:00.550066 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 00:47:00.550173 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:47:00.552752 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 8 00:47:00.554841 systemd[1]: Stopped target basic.target - Basic System. May 8 00:47:00.557326 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 8 00:47:00.559822 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:47:00.562268 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 8 00:47:00.564895 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 8 00:47:00.567517 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 8 00:47:00.570365 systemd[1]: Stopped target sysinit.target - System Initialization. May 8 00:47:00.572636 systemd[1]: Stopped target local-fs.target - Local File Systems. May 8 00:47:00.575065 systemd[1]: Stopped target swap.target - Swaps. May 8 00:47:00.576829 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 8 00:47:00.576947 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 8 00:47:00.579120 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
May 8 00:47:00.580778 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:47:00.582984 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 8 00:47:00.583095 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:47:00.585255 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 00:47:00.585359 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 8 00:47:00.587598 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 00:47:00.587705 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:47:00.589753 systemd[1]: Stopped target paths.target - Path Units. May 8 00:47:00.591506 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 00:47:00.594770 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:47:00.596141 systemd[1]: Stopped target slices.target - Slice Units. May 8 00:47:00.598056 systemd[1]: Stopped target sockets.target - Socket Units. May 8 00:47:00.600111 systemd[1]: iscsid.socket: Deactivated successfully. May 8 00:47:00.600201 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:47:00.601952 systemd[1]: iscsiuio.socket: Deactivated successfully. May 8 00:47:00.602039 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:47:00.604028 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 00:47:00.604134 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:47:00.606705 systemd[1]: ignition-files.service: Deactivated successfully. May 8 00:47:00.606836 systemd[1]: Stopped ignition-files.service - Ignition (files). May 8 00:47:00.624892 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 8 00:47:00.625924 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 8 00:47:00.626079 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:47:00.628994 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 8 00:47:00.630018 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 8 00:47:00.630210 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:47:00.632532 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 00:47:00.637688 ignition[1016]: INFO : Ignition 2.19.0 May 8 00:47:00.637688 ignition[1016]: INFO : Stage: umount May 8 00:47:00.632675 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:47:00.642998 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:47:00.642998 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:47:00.642998 ignition[1016]: INFO : umount: umount passed May 8 00:47:00.642998 ignition[1016]: INFO : Ignition finished successfully May 8 00:47:00.638044 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 8 00:47:00.638157 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 8 00:47:00.640799 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 00:47:00.640931 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 8 00:47:00.644290 systemd[1]: Stopped target network.target - Network. 
May 8 00:47:00.646948 systemd[1]: ignition-disks.service: Deactivated successfully. May 8 00:47:00.647009 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 8 00:47:00.648939 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 00:47:00.648985 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 8 00:47:00.651131 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 00:47:00.651176 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 8 00:47:00.653477 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 8 00:47:00.653524 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 8 00:47:00.654021 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 8 00:47:00.654577 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 8 00:47:00.658131 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 00:47:00.662823 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 00:47:00.662957 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 8 00:47:00.665374 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 8 00:47:00.665435 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:47:00.666776 systemd-networkd[785]: eth0: DHCPv6 lease lost May 8 00:47:00.668476 systemd[1]: systemd-networkd.service: Deactivated successfully. May 8 00:47:00.668624 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 8 00:47:00.670095 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 00:47:00.670139 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 8 00:47:00.676812 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 8 00:47:00.677965 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 8 00:47:00.678021 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:47:00.680493 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:47:00.680540 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 00:47:00.682655 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 8 00:47:00.682705 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 8 00:47:00.684928 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:47:00.694868 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 00:47:00.695017 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 8 00:47:00.706698 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 00:47:00.706900 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:47:00.708528 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 8 00:47:00.708578 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 8 00:47:00.710408 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 8 00:47:00.710451 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:47:00.712376 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 8 00:47:00.712425 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 8 00:47:00.714783 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
May 8 00:47:00.714830 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 8 00:47:00.716789 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:47:00.716846 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:47:00.724898 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 8 00:47:00.726014 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 8 00:47:00.726086 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:47:00.728259 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:47:00.728307 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:47:00.731738 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 8 00:47:00.731865 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 8 00:47:01.131455 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 00:47:01.131642 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 8 00:47:01.134304 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 8 00:47:01.135513 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 00:47:01.135593 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 8 00:47:01.151989 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 8 00:47:01.159819 systemd[1]: Switching root. May 8 00:47:01.198617 systemd-journald[192]: Journal stopped May 8 00:47:02.678571 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). May 8 00:47:02.678649 kernel: SELinux: policy capability network_peer_controls=1 May 8 00:47:02.678668 kernel: SELinux: policy capability open_perms=1 May 8 00:47:02.678679 kernel: SELinux: policy capability extended_socket_class=1 May 8 00:47:02.678690 kernel: SELinux: policy capability always_check_network=0 May 8 00:47:02.678701 kernel: SELinux: policy capability cgroup_seclabel=1 May 8 00:47:02.678714 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 8 00:47:02.678736 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 8 00:47:02.678753 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 8 00:47:02.678774 kernel: audit: type=1403 audit(1746665221.907:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 8 00:47:02.678786 systemd[1]: Successfully loaded SELinux policy in 40.092ms. May 8 00:47:02.678819 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.165ms. May 8 00:47:02.678832 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 8 00:47:02.678844 systemd[1]: Detected virtualization kvm. May 8 00:47:02.678856 systemd[1]: Detected architecture x86-64. May 8 00:47:02.678868 systemd[1]: Detected first boot. May 8 00:47:02.678880 systemd[1]: Initializing machine ID from VM UUID. May 8 00:47:02.678894 zram_generator::config[1060]: No configuration found. May 8 00:47:02.678907 systemd[1]: Populated /etc with preset unit settings. May 8 00:47:02.678919 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
May 8 00:47:02.678931 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 8 00:47:02.678943 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 8 00:47:02.678955 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 8 00:47:02.678967 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 8 00:47:02.678983 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 8 00:47:02.678997 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 8 00:47:02.679009 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 8 00:47:02.679022 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 8 00:47:02.679035 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 8 00:47:02.679046 systemd[1]: Created slice user.slice - User and Session Slice. May 8 00:47:02.679058 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:47:02.679071 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:47:02.679083 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 8 00:47:02.679094 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 8 00:47:02.679113 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 8 00:47:02.679125 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 00:47:02.679137 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 8 00:47:02.679149 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:47:02.679160 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 8 00:47:02.679172 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 8 00:47:02.679185 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 8 00:47:02.679199 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 8 00:47:02.679211 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:47:02.679223 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:47:02.679235 systemd[1]: Reached target slices.target - Slice Units. May 8 00:47:02.679247 systemd[1]: Reached target swap.target - Swaps. May 8 00:47:02.679258 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 8 00:47:02.679270 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 8 00:47:02.679283 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:47:02.679295 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:47:02.679307 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:47:02.679322 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 8 00:47:02.679334 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 8 00:47:02.679345 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 8 00:47:02.679357 systemd[1]: Mounting media.mount - External Media Directory... 
May 8 00:47:02.679369 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:47:02.679381 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 8 00:47:02.679393 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 8 00:47:02.679404 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 8 00:47:02.679419 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 8 00:47:02.679431 systemd[1]: Reached target machines.target - Containers. May 8 00:47:02.679443 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 8 00:47:02.679455 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:47:02.679467 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:47:02.679479 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 8 00:47:02.679491 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:47:02.679503 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:47:02.679515 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:47:02.679529 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 8 00:47:02.679542 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:47:02.679554 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 8 00:47:02.679566 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 8 00:47:02.679578 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 8 00:47:02.679589 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 8 00:47:02.679601 systemd[1]: Stopped systemd-fsck-usr.service. May 8 00:47:02.679613 kernel: loop: module loaded May 8 00:47:02.679626 kernel: fuse: init (API version 7.39) May 8 00:47:02.679638 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:47:02.679649 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:47:02.679661 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 8 00:47:02.679673 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 8 00:47:02.679707 systemd-journald[1123]: Collecting audit messages is disabled. May 8 00:47:02.679740 systemd-journald[1123]: Journal started May 8 00:47:02.679765 systemd-journald[1123]: Runtime Journal (/run/log/journal/fd5ed05e5ee049439ac3d1d35ee368e2) is 6.0M, max 48.3M, 42.2M free. May 8 00:47:02.468372 systemd[1]: Queued start job for default target multi-user.target. May 8 00:47:02.485327 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 8 00:47:02.485761 systemd[1]: systemd-journald.service: Deactivated successfully. May 8 00:47:02.697732 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:47:02.698746 systemd[1]: verity-setup.service: Deactivated successfully. May 8 00:47:02.700257 systemd[1]: Stopped verity-setup.service. 
May 8 00:47:02.702741 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:47:02.705743 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:47:02.705774 kernel: ACPI: bus type drm_connector registered May 8 00:47:02.708748 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 8 00:47:02.710101 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 8 00:47:02.711869 systemd[1]: Mounted media.mount - External Media Directory. May 8 00:47:02.713198 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 8 00:47:02.714417 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 8 00:47:02.715922 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 8 00:47:02.717310 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:47:02.718894 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 8 00:47:02.719071 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 8 00:47:02.720557 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:47:02.720747 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:47:02.722190 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:47:02.722359 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:47:02.723879 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:47:02.724051 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:47:02.725667 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 8 00:47:02.725858 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 8 00:47:02.727340 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:47:02.727547 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:47:02.729030 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:47:02.730479 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 8 00:47:02.732077 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 8 00:47:02.748817 systemd[1]: Reached target network-pre.target - Preparation for Network. May 8 00:47:02.759827 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 8 00:47:02.765273 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 8 00:47:02.766498 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 8 00:47:02.766539 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:47:02.768704 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 8 00:47:02.771215 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 8 00:47:02.773429 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 8 00:47:02.774594 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:47:02.777316 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
May 8 00:47:02.780962 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 8 00:47:02.782241 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:47:02.787410 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 8 00:47:02.789223 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:47:02.793558 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:47:02.796753 systemd-journald[1123]: Time spent on flushing to /var/log/journal/fd5ed05e5ee049439ac3d1d35ee368e2 is 24.824ms for 991 entries. May 8 00:47:02.796753 systemd-journald[1123]: System Journal (/var/log/journal/fd5ed05e5ee049439ac3d1d35ee368e2) is 8.0M, max 195.6M, 187.6M free. May 8 00:47:03.018820 systemd-journald[1123]: Received client request to flush runtime journal. May 8 00:47:03.018922 kernel: loop0: detected capacity change from 0 to 142488 May 8 00:47:03.018950 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 8 00:47:03.018969 kernel: loop1: detected capacity change from 0 to 205544 May 8 00:47:02.798970 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 8 00:47:02.801998 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 8 00:47:02.803500 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:47:02.804979 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 8 00:47:02.806747 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 8 00:47:02.808233 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 8 00:47:02.828070 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 8 00:47:02.831304 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 8 00:47:02.848998 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 8 00:47:02.855713 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:47:02.857334 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 8 00:47:02.860976 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 8 00:47:02.872125 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 8 00:47:02.888301 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 8 00:47:03.018317 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:47:03.022047 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 8 00:47:03.070453 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. May 8 00:47:03.070472 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. May 8 00:47:03.077491 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
May 8 00:47:03.104767 kernel: loop2: detected capacity change from 0 to 140768 May 8 00:47:03.154750 kernel: loop3: detected capacity change from 0 to 142488 May 8 00:47:03.171786 kernel: loop4: detected capacity change from 0 to 205544 May 8 00:47:03.181748 kernel: loop5: detected capacity change from 0 to 140768 May 8 00:47:03.182680 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 8 00:47:03.184406 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 8 00:47:03.212879 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 8 00:47:03.213492 (sd-merge)[1197]: Merged extensions into '/usr'. May 8 00:47:03.218884 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)... May 8 00:47:03.218902 systemd[1]: Reloading... May 8 00:47:03.306746 zram_generator::config[1223]: No configuration found. May 8 00:47:03.427910 ldconfig[1168]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 8 00:47:03.449633 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:47:03.500003 systemd[1]: Reloading finished in 280 ms. May 8 00:47:03.565261 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 8 00:47:03.567004 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 8 00:47:03.598998 systemd[1]: Starting ensure-sysext.service... May 8 00:47:03.601209 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 00:47:03.618792 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)... May 8 00:47:03.618806 systemd[1]: Reloading... May 8 00:47:03.678855 zram_generator::config[1287]: No configuration found. May 8 00:47:03.726996 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 8 00:47:03.727367 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 8 00:47:03.728370 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 8 00:47:03.728666 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. May 8 00:47:03.728796 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. May 8 00:47:03.732135 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:47:03.732149 systemd-tmpfiles[1262]: Skipping /boot May 8 00:47:03.742654 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:47:03.742668 systemd-tmpfiles[1262]: Skipping /boot May 8 00:47:03.848319 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:47:03.898693 systemd[1]: Reloading finished in 279 ms. May 8 00:47:03.919405 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:47:03.967815 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
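
The "(sd-merge)" entries above record systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extensions onto /usr. Below is a small illustrative helper, not taken from this system, for inspecting which raw sysext images are staged in /etc/extensions (the directory this log writes kubernetes.raw into; the other sysext search paths are omitted here):

    #!/usr/bin/env python3
    """Illustrative: list raw system-extension images in /etc/extensions."""
    from pathlib import Path

    ext_dir = Path("/etc/extensions")
    for image in sorted(ext_dir.glob("*.raw")):
        # Each .raw image here is a candidate for the /usr overlay merged above.
        print(image.name)
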
May 8 00:47:03.971604 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 8 00:47:03.974841 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 8 00:47:03.982122 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:47:03.984687 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 8 00:47:03.991572 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 8 00:47:03.993347 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:47:03.993508 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:47:03.995853 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:47:03.999999 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:47:04.003125 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:47:04.004297 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:47:04.004536 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:47:04.005768 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:47:04.006281 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:47:04.016932 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:47:04.017129 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:47:04.020447 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:47:04.020649 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:47:04.027133 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:47:04.027358 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:47:04.032966 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:47:04.054025 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:47:04.060093 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:47:04.061290 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:47:04.061497 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:47:04.063063 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 8 00:47:04.065469 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:47:04.065674 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:47:04.067520 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 8 00:47:04.069493 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:47:04.069680 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
May 8 00:47:04.071689 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:47:04.071920 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:47:04.081741 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 8 00:47:04.089062 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:47:04.089272 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:47:04.090646 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:47:04.102388 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:47:04.104767 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:47:04.109553 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:47:04.110706 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:47:04.110866 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:47:04.112016 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:47:04.112208 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:47:04.113941 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:47:04.114125 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:47:04.115824 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:47:04.116019 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:47:04.117690 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:47:04.117892 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:47:04.121556 systemd[1]: Finished ensure-sysext.service. May 8 00:47:04.135575 augenrules[1365]: No rules May 8 00:47:04.136481 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 8 00:47:04.139697 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:47:04.139806 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:47:04.153890 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 8 00:47:04.157871 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 8 00:47:04.159408 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:47:04.206168 systemd-resolved[1332]: Positive Trust Anchors: May 8 00:47:04.206186 systemd-resolved[1332]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:47:04.206218 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:47:04.209834 systemd-resolved[1332]: Defaulting to hostname 'linux'. May 8 00:47:04.211832 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:47:04.213031 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:47:04.216651 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 8 00:47:04.217985 systemd[1]: Reached target time-set.target - System Time Set. May 8 00:47:04.276390 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 8 00:47:04.290888 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:47:04.294215 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 8 00:47:04.313653 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 8 00:47:04.323187 systemd-udevd[1385]: Using default interface naming scheme 'v255'. May 8 00:47:04.340273 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:47:04.352061 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 00:47:04.390020 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 8 00:47:04.391767 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1393) May 8 00:47:04.408518 systemd-networkd[1395]: lo: Link UP May 8 00:47:04.408894 systemd-networkd[1395]: lo: Gained carrier May 8 00:47:04.411550 systemd-networkd[1395]: Enumeration completed May 8 00:47:04.411707 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:47:04.412418 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:47:04.412478 systemd-networkd[1395]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:47:04.414408 systemd-networkd[1395]: eth0: Link UP May 8 00:47:04.414474 systemd-networkd[1395]: eth0: Gained carrier May 8 00:47:04.414523 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:47:04.417003 systemd[1]: Reached target network.target - Network. May 8 00:47:04.423921 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 8 00:47:04.488977 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 8 00:47:04.493071 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
May 8 00:47:04.497935 systemd-networkd[1395]: eth0: DHCPv4 address 10.0.0.152/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 00:47:04.499802 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:47:04.499960 systemd-timesyncd[1381]: Network configuration changed, trying to establish connection. May 8 00:47:05.451542 systemd-resolved[1332]: Clock change detected. Flushing caches. May 8 00:47:05.451750 systemd-timesyncd[1381]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 8 00:47:05.451930 systemd-timesyncd[1381]: Initial clock synchronization to Thu 2025-05-08 00:47:05.450613 UTC. May 8 00:47:05.468545 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 May 8 00:47:05.482489 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 8 00:47:05.482576 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device May 8 00:47:05.489914 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 8 00:47:05.490074 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 8 00:47:05.490258 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 8 00:47:05.491827 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:47:05.551255 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:47:05.551477 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:47:05.563726 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:47:05.566057 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 8 00:47:05.572540 kernel: ACPI: button: Power Button [PWRF] May 8 00:47:05.583553 kernel: mousedev: PS/2 mouse device common for all mice May 8 00:47:05.633821 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:47:05.666846 kernel: kvm_amd: TSC scaling supported May 8 00:47:05.666894 kernel: kvm_amd: Nested Virtualization enabled May 8 00:47:05.666907 kernel: kvm_amd: Nested Paging enabled May 8 00:47:05.667845 kernel: kvm_amd: LBR virtualization supported May 8 00:47:05.667895 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 8 00:47:05.668879 kernel: kvm_amd: Virtual GIF supported May 8 00:47:05.722565 kernel: EDAC MC: Ver: 3.0.0 May 8 00:47:05.766706 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 8 00:47:05.783858 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 8 00:47:05.792976 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:47:05.825799 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 8 00:47:05.852019 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:47:05.853191 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:47:05.854405 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 8 00:47:05.855715 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 8 00:47:05.857185 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
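
The networkd lines above show eth0 being matched against zz-default.network and acquiring 10.0.0.152/16 with gateway 10.0.0.1 over DHCPv4. When skimming a boot log like this one, a throwaway parser along the following lines (the regex targets exactly the phrasing shown above, nothing more) can pull the lease details out:

    #!/usr/bin/env python3
    """Illustrative only: extract DHCPv4 lease details from systemd-networkd
    journal lines phrased like the ones in this boot log."""
    import re

    DHCP_RE = re.compile(
        r"(?P<ifname>\S+): DHCPv4 address (?P<addr>\S+), gateway (?P<gw>\S+) "
        r"acquired from (?P<server>\S+)"
    )

    def parse_dhcp(line: str):
        """Return interface, address, gateway and server, or None if absent."""
        m = DHCP_RE.search(line)
        return m.groupdict() if m else None

    if __name__ == "__main__":
        sample = ("systemd-networkd[1395]: eth0: DHCPv4 address 10.0.0.152/16, "
                  "gateway 10.0.0.1 acquired from 10.0.0.1")
        print(parse_dhcp(sample))
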
May 8 00:47:05.858387 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 8 00:47:05.859672 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 8 00:47:05.861065 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 8 00:47:05.861112 systemd[1]: Reached target paths.target - Path Units. May 8 00:47:05.862100 systemd[1]: Reached target timers.target - Timer Units. May 8 00:47:05.864008 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 8 00:47:05.866847 systemd[1]: Starting docker.socket - Docker Socket for the API... May 8 00:47:05.915323 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 8 00:47:05.917774 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 8 00:47:05.919469 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 8 00:47:05.920662 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:47:05.921693 systemd[1]: Reached target basic.target - Basic System. May 8 00:47:05.922690 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 8 00:47:05.922717 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 8 00:47:05.923795 systemd[1]: Starting containerd.service - containerd container runtime... May 8 00:47:05.926398 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 8 00:47:05.929625 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:47:05.930619 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 8 00:47:05.936692 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 8 00:47:05.984877 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 8 00:47:05.986616 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 8 00:47:05.990680 jq[1443]: false May 8 00:47:05.992619 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 8 00:47:05.995396 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 8 00:47:05.998407 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 8 00:47:06.002498 systemd[1]: Starting systemd-logind.service - User Login Management... May 8 00:47:06.004611 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 8 00:47:06.005184 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 8 00:47:06.006597 systemd[1]: Starting update-engine.service - Update Engine... May 8 00:47:06.008725 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 8 00:47:06.011058 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
May 8 00:47:06.016563 extend-filesystems[1444]: Found loop3 May 8 00:47:06.016563 extend-filesystems[1444]: Found loop4 May 8 00:47:06.016563 extend-filesystems[1444]: Found loop5 May 8 00:47:06.016563 extend-filesystems[1444]: Found sr0 May 8 00:47:06.016563 extend-filesystems[1444]: Found vda May 8 00:47:06.016563 extend-filesystems[1444]: Found vda1 May 8 00:47:06.016563 extend-filesystems[1444]: Found vda2 May 8 00:47:06.016563 extend-filesystems[1444]: Found vda3 May 8 00:47:06.016563 extend-filesystems[1444]: Found usr May 8 00:47:06.016563 extend-filesystems[1444]: Found vda4 May 8 00:47:06.016563 extend-filesystems[1444]: Found vda6 May 8 00:47:06.016563 extend-filesystems[1444]: Found vda7 May 8 00:47:06.016563 extend-filesystems[1444]: Found vda9 May 8 00:47:06.016563 extend-filesystems[1444]: Checking size of /dev/vda9 May 8 00:47:06.015993 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 8 00:47:06.045311 update_engine[1452]: I20250508 00:47:06.031209 1452 main.cc:92] Flatcar Update Engine starting May 8 00:47:06.045311 update_engine[1452]: I20250508 00:47:06.037921 1452 update_check_scheduler.cc:74] Next update check in 5m1s May 8 00:47:06.032044 dbus-daemon[1442]: [system] SELinux support is enabled May 8 00:47:06.051112 jq[1453]: true May 8 00:47:06.016208 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 8 00:47:06.020335 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 00:47:06.020597 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 8 00:47:06.051611 jq[1461]: true May 8 00:47:06.032579 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 8 00:47:06.051004 systemd[1]: motdgen.service: Deactivated successfully. May 8 00:47:06.051223 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 8 00:47:06.052327 (ntainerd)[1468]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 8 00:47:06.056684 extend-filesystems[1444]: Resized partition /dev/vda9 May 8 00:47:06.061767 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 00:47:06.061808 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 8 00:47:06.062760 extend-filesystems[1481]: resize2fs 1.47.1 (20-May-2024) May 8 00:47:06.063202 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 00:47:06.064350 tar[1458]: linux-amd64/helm May 8 00:47:06.063220 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 8 00:47:06.066876 systemd[1]: Started update-engine.service - Update Engine. May 8 00:47:06.072560 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1403) May 8 00:47:06.075761 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 8 00:47:06.196304 systemd-logind[1450]: Watching system buttons on /dev/input/event1 (Power Button) May 8 00:47:06.196331 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 8 00:47:06.197342 systemd-logind[1450]: New seat seat0. 
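update_engine has already scheduled its next poll ("Next update check in 5m1s") and locksmithd, the cluster reboot manager, comes up just below with strategy "reboot". Assuming the stock Flatcar client tools are present on the host, both can be queried directly; a sketch:

update_engine_client -status     # current update state, version and last check result
locksmithctl status              # reboot-lock holders for the reboot strategy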
May 8 00:47:06.198230 systemd[1]: Started systemd-logind.service - User Login Management. May 8 00:47:06.267787 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 8 00:47:06.590131 tar[1458]: linux-amd64/LICENSE May 8 00:47:06.590131 tar[1458]: linux-amd64/README.md May 8 00:47:06.599117 locksmithd[1483]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 00:47:06.608868 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 8 00:47:06.627304 sshd_keygen[1470]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 00:47:06.651652 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 8 00:47:06.731860 systemd[1]: Starting issuegen.service - Generate /run/issue... May 8 00:47:06.739188 systemd[1]: issuegen.service: Deactivated successfully. May 8 00:47:06.739425 systemd[1]: Finished issuegen.service - Generate /run/issue. May 8 00:47:06.742233 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 8 00:47:06.944560 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 8 00:47:06.962675 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 8 00:47:06.974865 systemd[1]: Started getty@tty1.service - Getty on tty1. May 8 00:47:08.128046 containerd[1468]: time="2025-05-08T00:47:08.127725015Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 8 00:47:06.976994 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 8 00:47:06.978233 systemd[1]: Reached target getty.target - Login Prompts. May 8 00:47:07.395758 systemd-networkd[1395]: eth0: Gained IPv6LL May 8 00:47:07.399183 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 8 00:47:07.401107 systemd[1]: Reached target network-online.target - Network is Online. May 8 00:47:07.416803 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 8 00:47:08.134733 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:47:08.137252 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 8 00:47:08.157435 systemd[1]: coreos-metadata.service: Deactivated successfully. May 8 00:47:08.157692 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 8 00:47:08.160160 containerd[1468]: time="2025-05-08T00:47:08.160086086Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 8 00:47:08.161857 containerd[1468]: time="2025-05-08T00:47:08.161811953Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 00:47:08.161857 containerd[1468]: time="2025-05-08T00:47:08.161851577Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 8 00:47:08.161908 containerd[1468]: time="2025-05-08T00:47:08.161872406Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 00:47:08.162211 containerd[1468]: time="2025-05-08T00:47:08.162178210Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 May 8 00:47:08.162245 containerd[1468]: time="2025-05-08T00:47:08.162230167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 8 00:47:08.162362 containerd[1468]: time="2025-05-08T00:47:08.162333782Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:47:08.162362 containerd[1468]: time="2025-05-08T00:47:08.162359039Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 00:47:08.162463 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 8 00:47:08.162715 containerd[1468]: time="2025-05-08T00:47:08.162664081Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:47:08.162715 containerd[1468]: time="2025-05-08T00:47:08.162690982Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 8 00:47:08.162715 containerd[1468]: time="2025-05-08T00:47:08.162711029Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:47:08.162789 containerd[1468]: time="2025-05-08T00:47:08.162725066Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 8 00:47:08.162896 containerd[1468]: time="2025-05-08T00:47:08.162865128Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 00:47:08.164912 containerd[1468]: time="2025-05-08T00:47:08.164868185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 8 00:47:08.165097 containerd[1468]: time="2025-05-08T00:47:08.165064985Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:47:08.165097 containerd[1468]: time="2025-05-08T00:47:08.165090923Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 8 00:47:08.165278 containerd[1468]: time="2025-05-08T00:47:08.165248849Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 8 00:47:08.165363 containerd[1468]: time="2025-05-08T00:47:08.165337345Z" level=info msg="metadata content store policy set" policy=shared May 8 00:47:08.201413 extend-filesystems[1481]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 8 00:47:08.201413 extend-filesystems[1481]: old_desc_blocks = 1, new_desc_blocks = 1 May 8 00:47:08.201413 extend-filesystems[1481]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 8 00:47:08.207936 extend-filesystems[1444]: Resized filesystem in /dev/vda9 May 8 00:47:08.202388 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 00:47:08.202642 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
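The resize above grows the root filesystem online from 553472 to 1864699 blocks of 4 KiB, i.e. from roughly 2.1 GiB to 7.1 GiB, to fill the enlarged /dev/vda9 partition. The equivalent manual step on a generic ext4 root, as a sketch:

resize2fs /dev/vda9    # online grow to the current partition size
df -h /                # confirm the new capacity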
May 8 00:47:08.219008 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 8 00:47:08.469599 bash[1496]: Updated "/home/core/.ssh/authorized_keys" May 8 00:47:08.471740 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 8 00:47:08.474095 containerd[1468]: time="2025-05-08T00:47:08.474026298Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 00:47:08.474095 containerd[1468]: time="2025-05-08T00:47:08.474112750Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 8 00:47:08.474265 containerd[1468]: time="2025-05-08T00:47:08.474129992Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 8 00:47:08.474265 containerd[1468]: time="2025-05-08T00:47:08.474145161Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 8 00:47:08.474265 containerd[1468]: time="2025-05-08T00:47:08.474158816Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 8 00:47:08.474157 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 8 00:47:08.474594 containerd[1468]: time="2025-05-08T00:47:08.474319237Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 8 00:47:08.474716 containerd[1468]: time="2025-05-08T00:47:08.474663954Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 8 00:47:08.474892 containerd[1468]: time="2025-05-08T00:47:08.474862196Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 8 00:47:08.474892 containerd[1468]: time="2025-05-08T00:47:08.474884227Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 8 00:47:08.474944 containerd[1468]: time="2025-05-08T00:47:08.474898814Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 8 00:47:08.474944 containerd[1468]: time="2025-05-08T00:47:08.474913983Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 8 00:47:08.474944 containerd[1468]: time="2025-05-08T00:47:08.474927648Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 8 00:47:08.474944 containerd[1468]: time="2025-05-08T00:47:08.474940803Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 00:47:08.475047 containerd[1468]: time="2025-05-08T00:47:08.474956232Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 8 00:47:08.475047 containerd[1468]: time="2025-05-08T00:47:08.474971661Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 8 00:47:08.475047 containerd[1468]: time="2025-05-08T00:47:08.474984635Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 00:47:08.475047 containerd[1468]: time="2025-05-08T00:47:08.474996357Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 May 8 00:47:08.475047 containerd[1468]: time="2025-05-08T00:47:08.475008019Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 8 00:47:08.475047 containerd[1468]: time="2025-05-08T00:47:08.475030672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 8 00:47:08.475047 containerd[1468]: time="2025-05-08T00:47:08.475049367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 00:47:08.475195 containerd[1468]: time="2025-05-08T00:47:08.475063012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 00:47:08.475195 containerd[1468]: time="2025-05-08T00:47:08.475075496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 00:47:08.475195 containerd[1468]: time="2025-05-08T00:47:08.475087258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 00:47:08.475195 containerd[1468]: time="2025-05-08T00:47:08.475099471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 00:47:08.475195 containerd[1468]: time="2025-05-08T00:47:08.475111223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 8 00:47:08.475195 containerd[1468]: time="2025-05-08T00:47:08.475123355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 00:47:08.475195 containerd[1468]: time="2025-05-08T00:47:08.475134907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 8 00:47:08.475195 containerd[1468]: time="2025-05-08T00:47:08.475150166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 8 00:47:08.475195 containerd[1468]: time="2025-05-08T00:47:08.475160876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 00:47:08.475195 containerd[1468]: time="2025-05-08T00:47:08.475173840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 8 00:47:08.475195 containerd[1468]: time="2025-05-08T00:47:08.475186855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 00:47:08.475195 containerd[1468]: time="2025-05-08T00:47:08.475201893Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 8 00:47:08.475420 containerd[1468]: time="2025-05-08T00:47:08.475221109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 8 00:47:08.475420 containerd[1468]: time="2025-05-08T00:47:08.475232771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 8 00:47:08.475420 containerd[1468]: time="2025-05-08T00:47:08.475242940Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 00:47:08.475420 containerd[1468]: time="2025-05-08T00:47:08.475287684Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 May 8 00:47:08.475420 containerd[1468]: time="2025-05-08T00:47:08.475305677Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 8 00:47:08.475420 containerd[1468]: time="2025-05-08T00:47:08.475317500Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 8 00:47:08.475420 containerd[1468]: time="2025-05-08T00:47:08.475328851Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 8 00:47:08.475420 containerd[1468]: time="2025-05-08T00:47:08.475338158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 00:47:08.475420 containerd[1468]: time="2025-05-08T00:47:08.475353507Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 8 00:47:08.475420 containerd[1468]: time="2025-05-08T00:47:08.475363155Z" level=info msg="NRI interface is disabled by configuration." May 8 00:47:08.475420 containerd[1468]: time="2025-05-08T00:47:08.475372613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 8 00:47:08.476989 containerd[1468]: time="2025-05-08T00:47:08.476893846Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 00:47:08.476989 containerd[1468]: time="2025-05-08T00:47:08.476989175Z" level=info msg="Connect containerd service" May 8 00:47:08.477249 containerd[1468]: time="2025-05-08T00:47:08.477058034Z" level=info msg="using legacy CRI server" May 8 00:47:08.477249 containerd[1468]: time="2025-05-08T00:47:08.477068324Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 8 00:47:08.477249 containerd[1468]: time="2025-05-08T00:47:08.477182117Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 00:47:08.477954 containerd[1468]: time="2025-05-08T00:47:08.477916044Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:47:08.478169 containerd[1468]: time="2025-05-08T00:47:08.478116520Z" level=info msg="Start subscribing containerd event" May 8 00:47:08.478344 containerd[1468]: time="2025-05-08T00:47:08.478195117Z" level=info msg="Start recovering state" May 8 00:47:08.478344 containerd[1468]: time="2025-05-08T00:47:08.478285687Z" level=info msg="Start event monitor" May 8 00:47:08.478344 containerd[1468]: time="2025-05-08T00:47:08.478310924Z" level=info msg="Start snapshots syncer" May 8 00:47:08.478344 containerd[1468]: time="2025-05-08T00:47:08.478323688Z" level=info msg="Start cni network conf syncer for default" May 8 00:47:08.478344 containerd[1468]: time="2025-05-08T00:47:08.478332705Z" level=info msg="Start streaming server" May 8 00:47:08.478449 containerd[1468]: time="2025-05-08T00:47:08.478370837Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 00:47:08.478449 containerd[1468]: time="2025-05-08T00:47:08.478423395Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 00:47:08.479156 containerd[1468]: time="2025-05-08T00:47:08.478507573Z" level=info msg="containerd successfully booted in 1.261175s" May 8 00:47:08.478639 systemd[1]: Started containerd.service - containerd container runtime. May 8 00:47:09.540397 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:47:09.551798 (kubelet)[1554]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:47:09.552147 systemd[1]: Reached target multi-user.target - Multi-User System. May 8 00:47:09.553659 systemd[1]: Startup finished in 727ms (kernel) + 6.205s (initrd) + 6.735s (userspace) = 13.669s. May 8 00:47:09.677873 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 8 00:47:09.679100 systemd[1]: Started sshd@0-10.0.0.152:22-10.0.0.1:43854.service - OpenSSH per-connection server daemon (10.0.0.1:43854). 
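A few lines up containerd warns "no network config found in /etc/cni/net.d: cni plugin not initialized"; that is expected at this stage, since the pod network is normally installed later as an add-on. Purely to illustrate the kind of file the CRI plugin is looking for (the name, bridge and subnet below are made-up examples, not this cluster's values), a minimal bridge conflist would look like:

cat <<'EOF' > /etc/cni/net.d/10-example-bridge.conflist
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.85.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF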
May 8 00:47:09.734116 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 43854 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:47:09.736018 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:47:09.745172 systemd-logind[1450]: New session 1 of user core. May 8 00:47:09.746440 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 8 00:47:09.757769 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 8 00:47:09.800661 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 8 00:47:09.808002 systemd[1]: Starting user@500.service - User Manager for UID 500... May 8 00:47:09.812170 (systemd)[1569]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 00:47:09.957693 systemd[1569]: Queued start job for default target default.target. May 8 00:47:09.972033 systemd[1569]: Created slice app.slice - User Application Slice. May 8 00:47:09.972059 systemd[1569]: Reached target paths.target - Paths. May 8 00:47:09.972079 systemd[1569]: Reached target timers.target - Timers. May 8 00:47:09.973970 systemd[1569]: Starting dbus.socket - D-Bus User Message Bus Socket... May 8 00:47:09.988790 systemd[1569]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 8 00:47:09.988948 systemd[1569]: Reached target sockets.target - Sockets. May 8 00:47:09.988964 systemd[1569]: Reached target basic.target - Basic System. May 8 00:47:09.989013 systemd[1569]: Reached target default.target - Main User Target. May 8 00:47:09.989055 systemd[1569]: Startup finished in 168ms. May 8 00:47:09.989218 systemd[1]: Started user@500.service - User Manager for UID 500. May 8 00:47:09.990854 systemd[1]: Started session-1.scope - Session 1 of User core. May 8 00:47:10.052849 systemd[1]: Started sshd@1-10.0.0.152:22-10.0.0.1:43864.service - OpenSSH per-connection server daemon (10.0.0.1:43864). May 8 00:47:10.155207 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 43864 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:47:10.157650 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:47:10.162944 systemd-logind[1450]: New session 2 of user core. May 8 00:47:10.178739 systemd[1]: Started session-2.scope - Session 2 of User core. May 8 00:47:10.234266 sshd[1581]: pam_unix(sshd:session): session closed for user core May 8 00:47:10.243402 systemd[1]: sshd@1-10.0.0.152:22-10.0.0.1:43864.service: Deactivated successfully. May 8 00:47:10.245118 systemd[1]: session-2.scope: Deactivated successfully. May 8 00:47:10.246779 systemd-logind[1450]: Session 2 logged out. Waiting for processes to exit. May 8 00:47:10.252987 systemd[1]: Started sshd@2-10.0.0.152:22-10.0.0.1:43874.service - OpenSSH per-connection server daemon (10.0.0.1:43874). May 8 00:47:10.254754 systemd-logind[1450]: Removed session 2. May 8 00:47:10.290003 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 43874 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:47:10.291902 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:47:10.296236 systemd-logind[1450]: New session 3 of user core. May 8 00:47:10.312754 systemd[1]: Started session-3.scope - Session 3 of User core. 
May 8 00:47:10.313003 kubelet[1554]: E0508 00:47:10.312927 1554 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:47:10.317052 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:47:10.317294 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:47:10.317675 systemd[1]: kubelet.service: Consumed 1.941s CPU time. May 8 00:47:10.365516 sshd[1588]: pam_unix(sshd:session): session closed for user core May 8 00:47:10.376221 systemd[1]: sshd@2-10.0.0.152:22-10.0.0.1:43874.service: Deactivated successfully. May 8 00:47:10.377723 systemd[1]: session-3.scope: Deactivated successfully. May 8 00:47:10.378964 systemd-logind[1450]: Session 3 logged out. Waiting for processes to exit. May 8 00:47:10.380048 systemd[1]: Started sshd@3-10.0.0.152:22-10.0.0.1:43888.service - OpenSSH per-connection server daemon (10.0.0.1:43888). May 8 00:47:10.380794 systemd-logind[1450]: Removed session 3. May 8 00:47:10.421734 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 43888 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:47:10.423467 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:47:10.427544 systemd-logind[1450]: New session 4 of user core. May 8 00:47:10.437660 systemd[1]: Started session-4.scope - Session 4 of User core. May 8 00:47:10.491466 sshd[1596]: pam_unix(sshd:session): session closed for user core May 8 00:47:10.498281 systemd[1]: sshd@3-10.0.0.152:22-10.0.0.1:43888.service: Deactivated successfully. May 8 00:47:10.499919 systemd[1]: session-4.scope: Deactivated successfully. May 8 00:47:10.501394 systemd-logind[1450]: Session 4 logged out. Waiting for processes to exit. May 8 00:47:10.502536 systemd[1]: Started sshd@4-10.0.0.152:22-10.0.0.1:43890.service - OpenSSH per-connection server daemon (10.0.0.1:43890). May 8 00:47:10.503335 systemd-logind[1450]: Removed session 4. May 8 00:47:10.541135 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 43890 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:47:10.542864 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:47:10.546582 systemd-logind[1450]: New session 5 of user core. May 8 00:47:10.556637 systemd[1]: Started session-5.scope - Session 5 of User core. May 8 00:47:10.614762 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 8 00:47:10.615099 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:47:10.632873 sudo[1606]: pam_unix(sudo:session): session closed for user root May 8 00:47:10.635010 sshd[1603]: pam_unix(sshd:session): session closed for user core May 8 00:47:10.646325 systemd[1]: sshd@4-10.0.0.152:22-10.0.0.1:43890.service: Deactivated successfully. May 8 00:47:10.648039 systemd[1]: session-5.scope: Deactivated successfully. May 8 00:47:10.649621 systemd-logind[1450]: Session 5 logged out. Waiting for processes to exit. May 8 00:47:10.658748 systemd[1]: Started sshd@5-10.0.0.152:22-10.0.0.1:43900.service - OpenSSH per-connection server daemon (10.0.0.1:43900). May 8 00:47:10.659748 systemd-logind[1450]: Removed session 5. 
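The kubelet exits because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is written by kubeadm init/join, so the unit keeps failing and restarting until that happens (the same error recurs below at 00:47:20, 00:47:31 and 00:47:41). For illustration only, the loader expects a KubeletConfiguration document; a minimal hand-written sketch, with example values rather than anything taken from this host:

cat <<'EOF' > /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
failSwapOn: false
EOF
systemctl restart kubelet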
May 8 00:47:10.693117 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 43900 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:47:10.694653 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:47:10.698672 systemd-logind[1450]: New session 6 of user core. May 8 00:47:10.709633 systemd[1]: Started session-6.scope - Session 6 of User core. May 8 00:47:10.763391 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 8 00:47:10.763816 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:47:10.768006 sudo[1615]: pam_unix(sudo:session): session closed for user root May 8 00:47:10.775492 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 8 00:47:10.775923 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:47:10.797801 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 8 00:47:10.799367 auditctl[1618]: No rules May 8 00:47:10.799870 systemd[1]: audit-rules.service: Deactivated successfully. May 8 00:47:10.800146 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 8 00:47:10.803221 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 8 00:47:10.837728 augenrules[1636]: No rules May 8 00:47:10.839767 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 8 00:47:10.841344 sudo[1614]: pam_unix(sudo:session): session closed for user root May 8 00:47:10.843450 sshd[1611]: pam_unix(sshd:session): session closed for user core May 8 00:47:10.860852 systemd[1]: sshd@5-10.0.0.152:22-10.0.0.1:43900.service: Deactivated successfully. May 8 00:47:10.862908 systemd[1]: session-6.scope: Deactivated successfully. May 8 00:47:10.864353 systemd-logind[1450]: Session 6 logged out. Waiting for processes to exit. May 8 00:47:10.872925 systemd[1]: Started sshd@6-10.0.0.152:22-10.0.0.1:43902.service - OpenSSH per-connection server daemon (10.0.0.1:43902). May 8 00:47:10.874112 systemd-logind[1450]: Removed session 6. May 8 00:47:10.913991 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 43902 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:47:10.919257 sshd[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:47:10.926337 systemd-logind[1450]: New session 7 of user core. May 8 00:47:10.936660 systemd[1]: Started session-7.scope - Session 7 of User core. May 8 00:47:10.991182 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 00:47:10.991534 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:47:11.564728 systemd[1]: Starting docker.service - Docker Application Container Engine... May 8 00:47:11.564902 (dockerd)[1665]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 8 00:47:12.188403 dockerd[1665]: time="2025-05-08T00:47:12.188325361Z" level=info msg="Starting up" May 8 00:47:15.087756 dockerd[1665]: time="2025-05-08T00:47:15.087668901Z" level=info msg="Loading containers: start." 
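Earlier in this span the two sudo invocations remove the shipped rule files (80-selinux.rules, 99-default.rules) and restart audit-rules.service, after which both auditctl and augenrules report an empty ruleset. The kernel-side rules can be verified independently of the files; a sketch:

auditctl -l          # lists loaded rules; prints "No rules" when the set is empty
augenrules --check   # reports whether /etc/audit/rules.d and the compiled audit.rules are out of sync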
May 8 00:47:15.603561 kernel: Initializing XFRM netlink socket May 8 00:47:15.689893 systemd-networkd[1395]: docker0: Link UP May 8 00:47:15.823243 dockerd[1665]: time="2025-05-08T00:47:15.823184518Z" level=info msg="Loading containers: done." May 8 00:47:15.837327 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2012495515-merged.mount: Deactivated successfully. May 8 00:47:15.907059 dockerd[1665]: time="2025-05-08T00:47:15.906909392Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 8 00:47:15.907253 dockerd[1665]: time="2025-05-08T00:47:15.907158439Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 8 00:47:15.907384 dockerd[1665]: time="2025-05-08T00:47:15.907356000Z" level=info msg="Daemon has completed initialization" May 8 00:47:16.094748 dockerd[1665]: time="2025-05-08T00:47:16.094633829Z" level=info msg="API listen on /run/docker.sock" May 8 00:47:16.094962 systemd[1]: Started docker.service - Docker Application Container Engine. May 8 00:47:16.834757 containerd[1468]: time="2025-05-08T00:47:16.834704871Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 8 00:47:17.927633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3127035644.mount: Deactivated successfully. May 8 00:47:19.812973 containerd[1468]: time="2025-05-08T00:47:19.812905354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:47:19.813760 containerd[1468]: time="2025-05-08T00:47:19.813696217Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987" May 8 00:47:19.814982 containerd[1468]: time="2025-05-08T00:47:19.814946993Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:47:19.817913 containerd[1468]: time="2025-05-08T00:47:19.817858364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:47:19.819109 containerd[1468]: time="2025-05-08T00:47:19.819064376Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 2.984304612s" May 8 00:47:19.819160 containerd[1468]: time="2025-05-08T00:47:19.819113769Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" May 8 00:47:19.820850 containerd[1468]: time="2025-05-08T00:47:19.820804160Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 8 00:47:20.393026 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 8 00:47:20.402749 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
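dockerd comes up on the overlay2 storage driver; the warning about native diff only means image builds may be slower because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR, not that anything is broken. A sketch for confirming the driver after startup:

docker info --format '{{.Driver}}'        # expected: overlay2
docker info | grep -A5 'Storage Driver'   # driver details, including the native-diff setting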
May 8 00:47:20.711687 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:47:20.715882 (kubelet)[1878]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:47:20.839089 kubelet[1878]: E0508 00:47:20.839030 1878 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:47:20.846574 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:47:20.846782 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:47:23.241673 containerd[1468]: time="2025-05-08T00:47:23.241029565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:47:23.242611 containerd[1468]: time="2025-05-08T00:47:23.242450140Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776" May 8 00:47:23.243697 containerd[1468]: time="2025-05-08T00:47:23.243655390Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:47:23.248218 containerd[1468]: time="2025-05-08T00:47:23.248151383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:47:23.249227 containerd[1468]: time="2025-05-08T00:47:23.249189871Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 3.428349063s" May 8 00:47:23.249227 containerd[1468]: time="2025-05-08T00:47:23.249225208Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" May 8 00:47:23.249848 containerd[1468]: time="2025-05-08T00:47:23.249814112Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 8 00:47:25.072674 containerd[1468]: time="2025-05-08T00:47:25.072592513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:47:25.074105 containerd[1468]: time="2025-05-08T00:47:25.074036432Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386" May 8 00:47:25.075328 containerd[1468]: time="2025-05-08T00:47:25.075290194Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:47:25.078258 containerd[1468]: time="2025-05-08T00:47:25.078213547Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:47:25.079570 containerd[1468]: time="2025-05-08T00:47:25.079513415Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 1.829662865s" May 8 00:47:25.079570 containerd[1468]: time="2025-05-08T00:47:25.079566384Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" May 8 00:47:25.080291 containerd[1468]: time="2025-05-08T00:47:25.080254345Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 8 00:47:26.302050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4084600866.mount: Deactivated successfully. May 8 00:47:28.091159 containerd[1468]: time="2025-05-08T00:47:28.091075475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:47:28.129830 containerd[1468]: time="2025-05-08T00:47:28.129736011Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625" May 8 00:47:28.166364 containerd[1468]: time="2025-05-08T00:47:28.166278184Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:47:28.229852 containerd[1468]: time="2025-05-08T00:47:28.229787008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:47:28.230648 containerd[1468]: time="2025-05-08T00:47:28.230602648Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 3.150301495s" May 8 00:47:28.230692 containerd[1468]: time="2025-05-08T00:47:28.230654455Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 8 00:47:28.231311 containerd[1468]: time="2025-05-08T00:47:28.231286310Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 8 00:47:30.893188 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 8 00:47:30.909702 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:47:30.937098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1081997329.mount: Deactivated successfully. May 8 00:47:31.048096 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
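The control-plane images pulled so far (kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy above) go through containerd while the kubelet itself is still crash-looping, and they land in containerd's k8s.io namespace. Listing them with either client, using the socket path from the containerd configuration earlier in the log, as a sketch:

ctr -n k8s.io images ls
crictl --runtime-endpoint unix:///run/containerd/containerd.sock images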
May 8 00:47:31.052428 (kubelet)[1915]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:47:31.166772 kubelet[1915]: E0508 00:47:31.166619 1915 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:47:31.171071 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:47:31.171329 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:47:36.043049 containerd[1468]: time="2025-05-08T00:47:36.042982329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:47:36.044049 containerd[1468]: time="2025-05-08T00:47:36.044017441Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 8 00:47:36.045130 containerd[1468]: time="2025-05-08T00:47:36.045097907Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:47:36.047802 containerd[1468]: time="2025-05-08T00:47:36.047775369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:47:36.048977 containerd[1468]: time="2025-05-08T00:47:36.048913464Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 7.817594232s" May 8 00:47:36.048977 containerd[1468]: time="2025-05-08T00:47:36.048964149Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 8 00:47:36.049629 containerd[1468]: time="2025-05-08T00:47:36.049473865Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 8 00:47:36.736872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2740583753.mount: Deactivated successfully. 
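Each "Scheduled restart job, restart counter is at N" message is systemd's Restart= handling relaunching the failed kubelet about ten seconds after the previous exit. The policy and counter can be read straight off the unit; a sketch:

systemctl show kubelet.service -p Restart -p RestartUSec -p NRestarts
journalctl -u kubelet.service -n 20 --no-pager   # the most recent failure messages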
May 8 00:47:36.744952 containerd[1468]: time="2025-05-08T00:47:36.744894997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:47:36.745607 containerd[1468]: time="2025-05-08T00:47:36.745560435Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 8 00:47:36.747058 containerd[1468]: time="2025-05-08T00:47:36.747022137Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:47:36.749440 containerd[1468]: time="2025-05-08T00:47:36.749400959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:47:36.750113 containerd[1468]: time="2025-05-08T00:47:36.750054444Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 700.551705ms" May 8 00:47:36.750113 containerd[1468]: time="2025-05-08T00:47:36.750102034Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 8 00:47:36.750666 containerd[1468]: time="2025-05-08T00:47:36.750634853Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 8 00:47:38.690707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3967123732.mount: Deactivated successfully. May 8 00:47:40.954119 containerd[1468]: time="2025-05-08T00:47:40.954053857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:47:40.959151 containerd[1468]: time="2025-05-08T00:47:40.959098737Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 8 00:47:40.963650 containerd[1468]: time="2025-05-08T00:47:40.963603164Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:47:40.966715 containerd[1468]: time="2025-05-08T00:47:40.966674204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:47:40.968022 containerd[1468]: time="2025-05-08T00:47:40.967985335Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.217150417s" May 8 00:47:40.968107 containerd[1468]: time="2025-05-08T00:47:40.968025743Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 8 00:47:41.371942 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
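One detail worth noting: the CRI configuration dumped earlier advertises SandboxImage registry.k8s.io/pause:3.8, while the pause image actually pulled here is 3.10. Checking which sandbox image containerd is configured with, and where it would be changed, as a sketch (the config.toml path is the containerd default, not confirmed from this host):

containerd config dump | grep sandbox_image
# edit sandbox_image under [plugins."io.containerd.grpc.v1.cri"] in /etc/containerd/config.toml, then:
systemctl restart containerd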
May 8 00:47:41.381745 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:47:41.549673 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:47:41.550044 (kubelet)[2051]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:47:41.589388 kubelet[2051]: E0508 00:47:41.589311 2051 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:47:41.593571 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:47:41.593849 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:47:43.342804 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:47:43.352773 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:47:43.378004 systemd[1]: Reloading requested from client PID 2067 ('systemctl') (unit session-7.scope)... May 8 00:47:43.378019 systemd[1]: Reloading... May 8 00:47:43.456566 zram_generator::config[2106]: No configuration found. May 8 00:47:44.072300 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:47:44.148718 systemd[1]: Reloading finished in 770 ms. May 8 00:47:44.194197 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 8 00:47:44.194294 systemd[1]: kubelet.service: Failed with result 'signal'. May 8 00:47:44.194647 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:47:44.196983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:47:44.339283 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:47:44.343634 (kubelet)[2155]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:47:44.380795 kubelet[2155]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:47:44.380795 kubelet[2155]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:47:44.380795 kubelet[2155]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
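This kubelet instance starts with several flags it immediately reports as deprecated (--container-runtime-endpoint, --pod-infra-container-image, --volume-plugin-dir); upstream recommends moving them into the file passed via --config. The unit also references an unset KUBELET_EXTRA_ARGS variable, which is the usual hook for per-host additions. A hypothetical drop-in showing that hook; the --node-ip value simply reuses this host's address from the log:

mkdir -p /etc/systemd/system/kubelet.service.d
cat <<'EOF' > /etc/systemd/system/kubelet.service.d/20-extra-args.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--node-ip=10.0.0.152"
EOF
systemctl daemon-reload
systemctl restart kubelet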
May 8 00:47:44.381785 kubelet[2155]: I0508 00:47:44.381743 2155 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:47:44.554102 kubelet[2155]: I0508 00:47:44.554058 2155 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 8 00:47:44.554102 kubelet[2155]: I0508 00:47:44.554084 2155 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:47:44.554291 kubelet[2155]: I0508 00:47:44.554270 2155 server.go:929] "Client rotation is on, will bootstrap in background" May 8 00:47:44.575700 kubelet[2155]: I0508 00:47:44.575635 2155 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:47:44.576095 kubelet[2155]: E0508 00:47:44.576064 2155 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.152:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.152:6443: connect: connection refused" logger="UnhandledError" May 8 00:47:44.583617 kubelet[2155]: E0508 00:47:44.583503 2155 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:47:44.583617 kubelet[2155]: I0508 00:47:44.583548 2155 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:47:44.589477 kubelet[2155]: I0508 00:47:44.589406 2155 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:47:44.590424 kubelet[2155]: I0508 00:47:44.590399 2155 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 8 00:47:44.590626 kubelet[2155]: I0508 00:47:44.590589 2155 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:47:44.590816 kubelet[2155]: I0508 00:47:44.590620 2155 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 00:47:44.590918 kubelet[2155]: I0508 00:47:44.590817 2155 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:47:44.590918 kubelet[2155]: I0508 00:47:44.590825 2155 container_manager_linux.go:300] "Creating device plugin manager" May 8 00:47:44.590958 kubelet[2155]: I0508 00:47:44.590946 2155 state_mem.go:36] "Initialized new in-memory state store" May 8 00:47:44.592274 kubelet[2155]: I0508 00:47:44.592250 2155 kubelet.go:408] "Attempting to sync node with API server" May 8 00:47:44.592274 kubelet[2155]: I0508 00:47:44.592269 2155 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:47:44.592327 kubelet[2155]: I0508 00:47:44.592319 2155 kubelet.go:314] "Adding apiserver pod source" May 8 00:47:44.592349 kubelet[2155]: I0508 00:47:44.592338 2155 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:47:44.597349 kubelet[2155]: I0508 00:47:44.597316 2155 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 8 00:47:44.597772 kubelet[2155]: W0508 00:47:44.597696 2155 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.152:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.152:6443: connect: connection refused May 8 00:47:44.597772 kubelet[2155]: E0508 00:47:44.597744 2155 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.152:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.152:6443: connect: connection refused" logger="UnhandledError" May 8 00:47:44.598709 kubelet[2155]: W0508 00:47:44.598660 2155 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.152:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.152:6443: connect: connection refused May 8 00:47:44.598709 kubelet[2155]: E0508 00:47:44.598698 2155 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.152:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.152:6443: connect: connection refused" logger="UnhandledError" May 8 00:47:44.601948 kubelet[2155]: I0508 00:47:44.600824 2155 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:47:44.601948 kubelet[2155]: W0508 00:47:44.601294 2155 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 8 00:47:44.601948 kubelet[2155]: I0508 00:47:44.601927 2155 server.go:1269] "Started kubelet" May 8 00:47:44.602040 kubelet[2155]: I0508 00:47:44.601992 2155 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:47:44.602235 kubelet[2155]: I0508 00:47:44.602212 2155 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:47:44.602674 kubelet[2155]: I0508 00:47:44.602638 2155 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:47:44.605278 kubelet[2155]: I0508 00:47:44.605199 2155 server.go:460] "Adding debug handlers to kubelet server" May 8 00:47:44.605338 kubelet[2155]: I0508 00:47:44.605323 2155 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:47:44.606192 kubelet[2155]: I0508 00:47:44.606159 2155 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:47:44.606931 kubelet[2155]: I0508 00:47:44.606814 2155 volume_manager.go:289] "Starting Kubelet Volume Manager" May 8 00:47:44.606973 kubelet[2155]: I0508 00:47:44.606945 2155 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 8 00:47:44.607296 kubelet[2155]: I0508 00:47:44.607019 2155 reconciler.go:26] "Reconciler: start to sync state" May 8 00:47:44.607296 kubelet[2155]: E0508 00:47:44.605263 2155 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.152:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.152:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d66cdce1c32b5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:47:44.601903797 +0000 UTC m=+0.254274660,LastTimestamp:2025-05-08 00:47:44.601903797 +0000 UTC m=+0.254274660,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 00:47:44.608030 kubelet[2155]: W0508 00:47:44.607668 2155 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.152:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.152:6443: connect: connection refused May 8 00:47:44.608030 kubelet[2155]: E0508 00:47:44.607710 2155 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.152:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.152:6443: connect: connection refused" logger="UnhandledError" May 8 00:47:44.608030 kubelet[2155]: E0508 00:47:44.607951 2155 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:47:44.608319 kubelet[2155]: E0508 00:47:44.608027 2155 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.152:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.152:6443: connect: connection refused" interval="200ms" May 8 00:47:44.608560 kubelet[2155]: E0508 00:47:44.608542 2155 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:47:44.609199 kubelet[2155]: I0508 00:47:44.609178 2155 factory.go:221] Registration of the containerd container factory successfully May 8 00:47:44.609199 kubelet[2155]: I0508 00:47:44.609195 2155 factory.go:221] Registration of the systemd container factory successfully May 8 00:47:44.609296 kubelet[2155]: I0508 00:47:44.609276 2155 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:47:44.623122 kubelet[2155]: I0508 00:47:44.623094 2155 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:47:44.623260 kubelet[2155]: I0508 00:47:44.623234 2155 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:47:44.623260 kubelet[2155]: I0508 00:47:44.623248 2155 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:47:44.623260 kubelet[2155]: I0508 00:47:44.623266 2155 state_mem.go:36] "Initialized new in-memory state store" May 8 00:47:44.624465 kubelet[2155]: I0508 00:47:44.624439 2155 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 00:47:44.624533 kubelet[2155]: I0508 00:47:44.624481 2155 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:47:44.624533 kubelet[2155]: I0508 00:47:44.624497 2155 kubelet.go:2321] "Starting kubelet main sync loop" May 8 00:47:44.624576 kubelet[2155]: E0508 00:47:44.624549 2155 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:47:44.709088 kubelet[2155]: E0508 00:47:44.709051 2155 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:47:44.725243 kubelet[2155]: E0508 00:47:44.725213 2155 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 8 00:47:44.808916 kubelet[2155]: E0508 00:47:44.808881 2155 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.152:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.152:6443: connect: connection refused" interval="400ms" May 8 00:47:44.809919 kubelet[2155]: E0508 00:47:44.809876 2155 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:47:44.910720 kubelet[2155]: E0508 00:47:44.910680 2155 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:47:44.925879 kubelet[2155]: E0508 00:47:44.925846 2155 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 8 00:47:44.974178 kubelet[2155]: W0508 00:47:44.974109 2155 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.152:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.152:6443: connect: connection refused May 8 00:47:44.974221 kubelet[2155]: E0508 00:47:44.974204 2155 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.152:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.152:6443: connect: connection refused" logger="UnhandledError" May 8 00:47:44.974375 kubelet[2155]: I0508 00:47:44.974339 2155 policy_none.go:49] "None policy: Start" May 8 00:47:44.975187 kubelet[2155]: I0508 00:47:44.975146 2155 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:47:44.975233 kubelet[2155]: I0508 00:47:44.975193 2155 state_mem.go:35] "Initializing new in-memory state store" May 8 00:47:44.994194 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 8 00:47:45.008274 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 8 00:47:45.011085 kubelet[2155]: E0508 00:47:45.011045 2155 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:47:45.011368 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 8 00:47:45.023436 kubelet[2155]: I0508 00:47:45.023343 2155 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:47:45.023662 kubelet[2155]: I0508 00:47:45.023627 2155 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:47:45.023714 kubelet[2155]: I0508 00:47:45.023645 2155 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:47:45.023913 kubelet[2155]: I0508 00:47:45.023893 2155 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:47:45.025377 kubelet[2155]: E0508 00:47:45.025353 2155 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 8 00:47:45.125910 kubelet[2155]: I0508 00:47:45.125878 2155 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:47:45.126286 kubelet[2155]: E0508 00:47:45.126260 2155 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.152:6443/api/v1/nodes\": dial tcp 10.0.0.152:6443: connect: connection refused" node="localhost" May 8 00:47:45.210317 kubelet[2155]: E0508 00:47:45.210176 2155 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.152:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.152:6443: connect: connection refused" interval="800ms" May 8 00:47:45.328114 kubelet[2155]: I0508 00:47:45.328068 2155 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:47:45.328501 kubelet[2155]: E0508 00:47:45.328456 2155 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.152:6443/api/v1/nodes\": dial tcp 10.0.0.152:6443: connect: connection refused" node="localhost" May 8 00:47:45.334735 systemd[1]: Created slice kubepods-burstable-pod4bbbdd6bad1af639a4bacbb8784cfd54.slice - libcontainer container kubepods-burstable-pod4bbbdd6bad1af639a4bacbb8784cfd54.slice. May 8 00:47:45.350303 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 8 00:47:45.365199 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. 
May 8 00:47:45.412196 kubelet[2155]: I0508 00:47:45.412158 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4bbbdd6bad1af639a4bacbb8784cfd54-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4bbbdd6bad1af639a4bacbb8784cfd54\") " pod="kube-system/kube-apiserver-localhost" May 8 00:47:45.412196 kubelet[2155]: I0508 00:47:45.412193 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4bbbdd6bad1af639a4bacbb8784cfd54-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4bbbdd6bad1af639a4bacbb8784cfd54\") " pod="kube-system/kube-apiserver-localhost" May 8 00:47:45.412196 kubelet[2155]: I0508 00:47:45.412213 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:47:45.412703 kubelet[2155]: I0508 00:47:45.412227 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:47:45.412703 kubelet[2155]: I0508 00:47:45.412243 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:47:45.412703 kubelet[2155]: I0508 00:47:45.412258 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:47:45.412703 kubelet[2155]: I0508 00:47:45.412271 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4bbbdd6bad1af639a4bacbb8784cfd54-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4bbbdd6bad1af639a4bacbb8784cfd54\") " pod="kube-system/kube-apiserver-localhost" May 8 00:47:45.412703 kubelet[2155]: I0508 00:47:45.412286 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:47:45.412810 kubelet[2155]: I0508 00:47:45.412300 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " 
pod="kube-system/kube-scheduler-localhost" May 8 00:47:45.648731 kubelet[2155]: E0508 00:47:45.648694 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:45.649451 containerd[1468]: time="2025-05-08T00:47:45.649397703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4bbbdd6bad1af639a4bacbb8784cfd54,Namespace:kube-system,Attempt:0,}" May 8 00:47:45.662615 kubelet[2155]: E0508 00:47:45.662589 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:45.663034 containerd[1468]: time="2025-05-08T00:47:45.662989083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 8 00:47:45.668249 kubelet[2155]: E0508 00:47:45.668218 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:45.668684 containerd[1468]: time="2025-05-08T00:47:45.668630846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 8 00:47:45.730086 kubelet[2155]: I0508 00:47:45.730041 2155 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:47:45.730515 kubelet[2155]: E0508 00:47:45.730456 2155 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.152:6443/api/v1/nodes\": dial tcp 10.0.0.152:6443: connect: connection refused" node="localhost" May 8 00:47:45.947921 kubelet[2155]: W0508 00:47:45.947791 2155 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.152:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.152:6443: connect: connection refused May 8 00:47:45.947921 kubelet[2155]: E0508 00:47:45.947859 2155 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.152:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.152:6443: connect: connection refused" logger="UnhandledError" May 8 00:47:46.009810 kubelet[2155]: W0508 00:47:46.009740 2155 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.152:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.152:6443: connect: connection refused May 8 00:47:46.009943 kubelet[2155]: E0508 00:47:46.009815 2155 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.152:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.152:6443: connect: connection refused" logger="UnhandledError" May 8 00:47:46.011042 kubelet[2155]: E0508 00:47:46.010988 2155 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.152:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.152:6443: connect: connection refused" interval="1.6s" May 8 00:47:46.070854 kubelet[2155]: W0508 00:47:46.070775 2155 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.152:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.152:6443: connect: connection refused May 8 00:47:46.070913 kubelet[2155]: E0508 00:47:46.070859 2155 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.152:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.152:6443: connect: connection refused" logger="UnhandledError" May 8 00:47:46.075411 kubelet[2155]: W0508 00:47:46.075357 2155 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.152:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.152:6443: connect: connection refused May 8 00:47:46.075411 kubelet[2155]: E0508 00:47:46.075392 2155 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.152:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.152:6443: connect: connection refused" logger="UnhandledError" May 8 00:47:46.396649 kubelet[2155]: E0508 00:47:46.396507 2155 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.152:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.152:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d66cdce1c32b5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:47:44.601903797 +0000 UTC m=+0.254274660,LastTimestamp:2025-05-08 00:47:44.601903797 +0000 UTC m=+0.254274660,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 00:47:46.532343 kubelet[2155]: I0508 00:47:46.532300 2155 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:47:46.532693 kubelet[2155]: E0508 00:47:46.532647 2155 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.152:6443/api/v1/nodes\": dial tcp 10.0.0.152:6443: connect: connection refused" node="localhost" May 8 00:47:46.579429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2055273443.mount: Deactivated successfully. 
May 8 00:47:46.589354 containerd[1468]: time="2025-05-08T00:47:46.589309012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:47:46.590357 containerd[1468]: time="2025-05-08T00:47:46.590290669Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:47:46.591391 containerd[1468]: time="2025-05-08T00:47:46.591334755Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 8 00:47:46.591855 kubelet[2155]: E0508 00:47:46.591816 2155 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.152:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.152:6443: connect: connection refused" logger="UnhandledError" May 8 00:47:46.592477 containerd[1468]: time="2025-05-08T00:47:46.592443755Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:47:46.593286 containerd[1468]: time="2025-05-08T00:47:46.593212467Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:47:46.594224 containerd[1468]: time="2025-05-08T00:47:46.594193563Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:47:46.595257 containerd[1468]: time="2025-05-08T00:47:46.595213505Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:47:46.598868 containerd[1468]: time="2025-05-08T00:47:46.598837098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:47:46.599759 containerd[1468]: time="2025-05-08T00:47:46.599715729Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 936.632328ms" May 8 00:47:46.600889 containerd[1468]: time="2025-05-08T00:47:46.600857441Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 951.37661ms" May 8 00:47:46.601962 containerd[1468]: time="2025-05-08T00:47:46.601926385Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 933.21736ms" May 8 00:47:46.770612 containerd[1468]: time="2025-05-08T00:47:46.769978082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:47:46.770612 containerd[1468]: time="2025-05-08T00:47:46.770030341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:47:46.770612 containerd[1468]: time="2025-05-08T00:47:46.770041071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:47:46.770612 containerd[1468]: time="2025-05-08T00:47:46.770112728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:47:46.772431 containerd[1468]: time="2025-05-08T00:47:46.772347730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:47:46.772431 containerd[1468]: time="2025-05-08T00:47:46.772407404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:47:46.772692 containerd[1468]: time="2025-05-08T00:47:46.772485693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:47:46.772692 containerd[1468]: time="2025-05-08T00:47:46.772314587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:47:46.773011 containerd[1468]: time="2025-05-08T00:47:46.772840708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:47:46.773011 containerd[1468]: time="2025-05-08T00:47:46.772857049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:47:46.773329 containerd[1468]: time="2025-05-08T00:47:46.773235790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:47:46.774285 containerd[1468]: time="2025-05-08T00:47:46.773990886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:47:46.799706 systemd[1]: Started cri-containerd-0aa89447aabe26bb90202633a3e6659d19522bb68fecc8401264a377d072c9a0.scope - libcontainer container 0aa89447aabe26bb90202633a3e6659d19522bb68fecc8401264a377d072c9a0. May 8 00:47:46.801613 systemd[1]: Started cri-containerd-78919802f91c26f63336f9164604525fdbcb3b5ae17c3bf834af5dd0cf8e15f4.scope - libcontainer container 78919802f91c26f63336f9164604525fdbcb3b5ae17c3bf834af5dd0cf8e15f4. May 8 00:47:46.803931 systemd[1]: Started cri-containerd-e7b781259c80321678cb51fc502793dbed8f839097921ed4acadf846645f60b5.scope - libcontainer container e7b781259c80321678cb51fc502793dbed8f839097921ed4acadf846645f60b5. 
May 8 00:47:46.842553 containerd[1468]: time="2025-05-08T00:47:46.842490187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"0aa89447aabe26bb90202633a3e6659d19522bb68fecc8401264a377d072c9a0\"" May 8 00:47:46.844427 containerd[1468]: time="2025-05-08T00:47:46.844336820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"78919802f91c26f63336f9164604525fdbcb3b5ae17c3bf834af5dd0cf8e15f4\"" May 8 00:47:46.844983 kubelet[2155]: E0508 00:47:46.844961 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:46.845288 kubelet[2155]: E0508 00:47:46.845071 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:46.848600 containerd[1468]: time="2025-05-08T00:47:46.848319697Z" level=info msg="CreateContainer within sandbox \"0aa89447aabe26bb90202633a3e6659d19522bb68fecc8401264a377d072c9a0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 00:47:46.849121 containerd[1468]: time="2025-05-08T00:47:46.849101314Z" level=info msg="CreateContainer within sandbox \"78919802f91c26f63336f9164604525fdbcb3b5ae17c3bf834af5dd0cf8e15f4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 00:47:46.850473 containerd[1468]: time="2025-05-08T00:47:46.849286536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4bbbdd6bad1af639a4bacbb8784cfd54,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7b781259c80321678cb51fc502793dbed8f839097921ed4acadf846645f60b5\"" May 8 00:47:46.851093 kubelet[2155]: E0508 00:47:46.851066 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:46.852692 containerd[1468]: time="2025-05-08T00:47:46.852649183Z" level=info msg="CreateContainer within sandbox \"e7b781259c80321678cb51fc502793dbed8f839097921ed4acadf846645f60b5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 00:47:46.893190 containerd[1468]: time="2025-05-08T00:47:46.893148589Z" level=info msg="CreateContainer within sandbox \"0aa89447aabe26bb90202633a3e6659d19522bb68fecc8401264a377d072c9a0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ac046542c155bb4f40bb22c33a585c24bb78ce2dd93bc1abe1443e6650e53e64\"" May 8 00:47:46.893789 containerd[1468]: time="2025-05-08T00:47:46.893751506Z" level=info msg="StartContainer for \"ac046542c155bb4f40bb22c33a585c24bb78ce2dd93bc1abe1443e6650e53e64\"" May 8 00:47:46.906407 containerd[1468]: time="2025-05-08T00:47:46.906368107Z" level=info msg="CreateContainer within sandbox \"78919802f91c26f63336f9164604525fdbcb3b5ae17c3bf834af5dd0cf8e15f4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9ba2f43504c59cdf7eeaa44bb5fe6ae5db7518794bc799e8567f4f8a50bdfb63\"" May 8 00:47:46.906825 containerd[1468]: time="2025-05-08T00:47:46.906788967Z" level=info msg="StartContainer for \"9ba2f43504c59cdf7eeaa44bb5fe6ae5db7518794bc799e8567f4f8a50bdfb63\"" May 8 00:47:46.908295 
containerd[1468]: time="2025-05-08T00:47:46.908254626Z" level=info msg="CreateContainer within sandbox \"e7b781259c80321678cb51fc502793dbed8f839097921ed4acadf846645f60b5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"679101c5bcb5e963e5575fbc38caa4f26b93a9bd95340b6692adf451a5092cea\"" May 8 00:47:46.908928 containerd[1468]: time="2025-05-08T00:47:46.908903439Z" level=info msg="StartContainer for \"679101c5bcb5e963e5575fbc38caa4f26b93a9bd95340b6692adf451a5092cea\"" May 8 00:47:46.920898 systemd[1]: Started cri-containerd-ac046542c155bb4f40bb22c33a585c24bb78ce2dd93bc1abe1443e6650e53e64.scope - libcontainer container ac046542c155bb4f40bb22c33a585c24bb78ce2dd93bc1abe1443e6650e53e64. May 8 00:47:46.946663 systemd[1]: Started cri-containerd-679101c5bcb5e963e5575fbc38caa4f26b93a9bd95340b6692adf451a5092cea.scope - libcontainer container 679101c5bcb5e963e5575fbc38caa4f26b93a9bd95340b6692adf451a5092cea. May 8 00:47:46.948288 systemd[1]: Started cri-containerd-9ba2f43504c59cdf7eeaa44bb5fe6ae5db7518794bc799e8567f4f8a50bdfb63.scope - libcontainer container 9ba2f43504c59cdf7eeaa44bb5fe6ae5db7518794bc799e8567f4f8a50bdfb63. May 8 00:47:46.984962 containerd[1468]: time="2025-05-08T00:47:46.984838637Z" level=info msg="StartContainer for \"ac046542c155bb4f40bb22c33a585c24bb78ce2dd93bc1abe1443e6650e53e64\" returns successfully" May 8 00:47:46.993996 containerd[1468]: time="2025-05-08T00:47:46.993622106Z" level=info msg="StartContainer for \"679101c5bcb5e963e5575fbc38caa4f26b93a9bd95340b6692adf451a5092cea\" returns successfully" May 8 00:47:46.999313 containerd[1468]: time="2025-05-08T00:47:46.998966544Z" level=info msg="StartContainer for \"9ba2f43504c59cdf7eeaa44bb5fe6ae5db7518794bc799e8567f4f8a50bdfb63\" returns successfully" May 8 00:47:47.638868 kubelet[2155]: E0508 00:47:47.638828 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:47.641885 kubelet[2155]: E0508 00:47:47.641856 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:47.641928 kubelet[2155]: E0508 00:47:47.641900 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:48.021660 kubelet[2155]: E0508 00:47:48.021579 2155 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 8 00:47:48.134302 kubelet[2155]: I0508 00:47:48.134260 2155 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:47:48.141274 kubelet[2155]: I0508 00:47:48.141213 2155 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 8 00:47:48.141274 kubelet[2155]: E0508 00:47:48.141262 2155 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 8 00:47:48.154604 kubelet[2155]: E0508 00:47:48.154564 2155 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:47:48.255017 kubelet[2155]: E0508 00:47:48.254962 2155 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:47:48.356124 kubelet[2155]: E0508 
00:47:48.355986 2155 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:47:48.456687 kubelet[2155]: E0508 00:47:48.456641 2155 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:47:48.557193 kubelet[2155]: E0508 00:47:48.557136 2155 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:47:48.643988 kubelet[2155]: E0508 00:47:48.643954 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:49.596592 kubelet[2155]: I0508 00:47:49.596541 2155 apiserver.go:52] "Watching apiserver" May 8 00:47:49.607716 kubelet[2155]: I0508 00:47:49.607688 2155 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 8 00:47:50.282385 systemd[1]: Reloading requested from client PID 2436 ('systemctl') (unit session-7.scope)... May 8 00:47:50.282403 systemd[1]: Reloading... May 8 00:47:50.360617 zram_generator::config[2478]: No configuration found. May 8 00:47:50.465267 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:47:50.557941 systemd[1]: Reloading finished in 274 ms. May 8 00:47:50.600086 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:47:50.614767 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:47:50.615064 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:47:50.623042 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:47:50.763141 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:47:50.767614 (kubelet)[2520]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:47:50.804305 kubelet[2520]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:47:50.804305 kubelet[2520]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:47:50.804305 kubelet[2520]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 8 00:47:50.804721 kubelet[2520]: I0508 00:47:50.804362 2520 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:47:50.810267 kubelet[2520]: I0508 00:47:50.810100 2520 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 8 00:47:50.810267 kubelet[2520]: I0508 00:47:50.810123 2520 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:47:50.810379 kubelet[2520]: I0508 00:47:50.810339 2520 server.go:929] "Client rotation is on, will bootstrap in background" May 8 00:47:50.811755 kubelet[2520]: I0508 00:47:50.811730 2520 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 8 00:47:50.815454 kubelet[2520]: I0508 00:47:50.815423 2520 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:47:50.818303 kubelet[2520]: E0508 00:47:50.818276 2520 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:47:50.818345 kubelet[2520]: I0508 00:47:50.818304 2520 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:47:50.822794 kubelet[2520]: I0508 00:47:50.822767 2520 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 8 00:47:50.822946 kubelet[2520]: I0508 00:47:50.822919 2520 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 8 00:47:50.823095 kubelet[2520]: I0508 00:47:50.823056 2520 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:47:50.823260 kubelet[2520]: I0508 00:47:50.823083 2520 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 00:47:50.823260 kubelet[2520]: I0508 00:47:50.823252 2520 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:47:50.823260 kubelet[2520]: I0508 00:47:50.823260 2520 container_manager_linux.go:300] "Creating device plugin manager" May 8 00:47:50.823383 kubelet[2520]: I0508 00:47:50.823295 2520 state_mem.go:36] "Initialized new in-memory state store" May 8 00:47:50.823451 kubelet[2520]: I0508 00:47:50.823434 2520 kubelet.go:408] "Attempting to sync node with API server" May 8 00:47:50.823490 kubelet[2520]: I0508 00:47:50.823453 2520 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:47:50.823490 kubelet[2520]: I0508 00:47:50.823490 2520 kubelet.go:314] "Adding apiserver pod source" May 8 00:47:50.823546 kubelet[2520]: I0508 00:47:50.823506 2520 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:47:50.824143 kubelet[2520]: I0508 00:47:50.824019 2520 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 8 00:47:50.824449 kubelet[2520]: I0508 00:47:50.824433 2520 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:47:50.824982 kubelet[2520]: I0508 00:47:50.824817 2520 server.go:1269] "Started kubelet" May 8 00:47:50.826774 kubelet[2520]: I0508 00:47:50.825207 2520 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:47:50.828437 kubelet[2520]: I0508 00:47:50.828412 2520 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:47:50.828632 kubelet[2520]: I0508 00:47:50.826655 2520 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:47:50.832510 kubelet[2520]: I0508 00:47:50.832469 2520 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:47:50.833782 kubelet[2520]: I0508 00:47:50.833766 2520 volume_manager.go:289] "Starting 
Kubelet Volume Manager" May 8 00:47:50.833870 kubelet[2520]: I0508 00:47:50.833854 2520 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:47:50.834565 kubelet[2520]: E0508 00:47:50.833940 2520 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:47:50.834565 kubelet[2520]: I0508 00:47:50.834120 2520 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 8 00:47:50.834565 kubelet[2520]: I0508 00:47:50.834260 2520 reconciler.go:26] "Reconciler: start to sync state" May 8 00:47:50.834925 kubelet[2520]: I0508 00:47:50.834906 2520 server.go:460] "Adding debug handlers to kubelet server" May 8 00:47:50.837866 kubelet[2520]: E0508 00:47:50.837844 2520 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:47:50.838396 kubelet[2520]: I0508 00:47:50.838364 2520 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:47:50.840829 kubelet[2520]: I0508 00:47:50.839179 2520 factory.go:221] Registration of the containerd container factory successfully May 8 00:47:50.840829 kubelet[2520]: I0508 00:47:50.839194 2520 factory.go:221] Registration of the systemd container factory successfully May 8 00:47:50.845240 kubelet[2520]: I0508 00:47:50.844918 2520 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:47:50.846222 kubelet[2520]: I0508 00:47:50.846199 2520 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 00:47:50.846260 kubelet[2520]: I0508 00:47:50.846231 2520 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:47:50.846260 kubelet[2520]: I0508 00:47:50.846247 2520 kubelet.go:2321] "Starting kubelet main sync loop" May 8 00:47:50.846313 kubelet[2520]: E0508 00:47:50.846287 2520 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:47:50.875780 kubelet[2520]: I0508 00:47:50.875750 2520 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:47:50.875780 kubelet[2520]: I0508 00:47:50.875767 2520 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:47:50.875780 kubelet[2520]: I0508 00:47:50.875787 2520 state_mem.go:36] "Initialized new in-memory state store" May 8 00:47:50.875996 kubelet[2520]: I0508 00:47:50.875940 2520 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 00:47:50.875996 kubelet[2520]: I0508 00:47:50.875953 2520 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 00:47:50.875996 kubelet[2520]: I0508 00:47:50.875970 2520 policy_none.go:49] "None policy: Start" May 8 00:47:50.876511 kubelet[2520]: I0508 00:47:50.876491 2520 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:47:50.876511 kubelet[2520]: I0508 00:47:50.876513 2520 state_mem.go:35] "Initializing new in-memory state store" May 8 00:47:50.876724 kubelet[2520]: I0508 00:47:50.876698 2520 state_mem.go:75] "Updated machine memory state" May 8 00:47:50.880559 kubelet[2520]: I0508 00:47:50.880538 2520 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:47:50.880751 kubelet[2520]: I0508 00:47:50.880706 2520 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:47:50.880751 kubelet[2520]: I0508 00:47:50.880724 2520 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:47:50.880926 kubelet[2520]: I0508 00:47:50.880903 2520 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:47:50.886346 update_engine[1452]: I20250508 00:47:50.885499 1452 update_attempter.cc:509] Updating boot flags... 
May 8 00:47:50.986152 kubelet[2520]: I0508 00:47:50.986119 2520 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:47:51.034642 kubelet[2520]: I0508 00:47:51.034600 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:47:51.034642 kubelet[2520]: I0508 00:47:51.034637 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:47:51.034642 kubelet[2520]: I0508 00:47:51.034656 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4bbbdd6bad1af639a4bacbb8784cfd54-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4bbbdd6bad1af639a4bacbb8784cfd54\") " pod="kube-system/kube-apiserver-localhost" May 8 00:47:51.034850 kubelet[2520]: I0508 00:47:51.034675 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4bbbdd6bad1af639a4bacbb8784cfd54-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4bbbdd6bad1af639a4bacbb8784cfd54\") " pod="kube-system/kube-apiserver-localhost" May 8 00:47:51.034850 kubelet[2520]: I0508 00:47:51.034690 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:47:51.034850 kubelet[2520]: I0508 00:47:51.034707 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:47:51.034850 kubelet[2520]: I0508 00:47:51.034780 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 8 00:47:51.034850 kubelet[2520]: I0508 00:47:51.034817 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4bbbdd6bad1af639a4bacbb8784cfd54-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4bbbdd6bad1af639a4bacbb8784cfd54\") " pod="kube-system/kube-apiserver-localhost" May 8 00:47:51.034970 kubelet[2520]: I0508 00:47:51.034840 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:47:51.169803 kubelet[2520]: E0508 00:47:51.169708 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:51.170989 kubelet[2520]: E0508 00:47:51.169830 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:51.170989 kubelet[2520]: E0508 00:47:51.170649 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:51.173094 kubelet[2520]: I0508 00:47:51.172332 2520 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 8 00:47:51.173094 kubelet[2520]: I0508 00:47:51.172391 2520 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 8 00:47:51.192840 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2558) May 8 00:47:51.240560 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2561) May 8 00:47:51.269546 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2561) May 8 00:47:51.824844 kubelet[2520]: I0508 00:47:51.824780 2520 apiserver.go:52] "Watching apiserver" May 8 00:47:51.834457 kubelet[2520]: I0508 00:47:51.834394 2520 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 8 00:47:51.862499 kubelet[2520]: E0508 00:47:51.860797 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:51.862499 kubelet[2520]: E0508 00:47:51.861506 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:51.865929 kubelet[2520]: E0508 00:47:51.865885 2520 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 8 00:47:51.866132 kubelet[2520]: E0508 00:47:51.866109 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:51.901022 kubelet[2520]: I0508 00:47:51.900944 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.900927853 podStartE2EDuration="900.927853ms" podCreationTimestamp="2025-05-08 00:47:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:47:51.881883132 +0000 UTC m=+1.109986072" watchObservedRunningTime="2025-05-08 00:47:51.900927853 +0000 UTC m=+1.129030793" May 8 00:47:51.912072 kubelet[2520]: I0508 00:47:51.912011 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.911989235 
podStartE2EDuration="911.989235ms" podCreationTimestamp="2025-05-08 00:47:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:47:51.911784687 +0000 UTC m=+1.139887627" watchObservedRunningTime="2025-05-08 00:47:51.911989235 +0000 UTC m=+1.140092175" May 8 00:47:51.912275 kubelet[2520]: I0508 00:47:51.912123 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.912119942 podStartE2EDuration="912.119942ms" podCreationTimestamp="2025-05-08 00:47:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:47:51.901187816 +0000 UTC m=+1.129290756" watchObservedRunningTime="2025-05-08 00:47:51.912119942 +0000 UTC m=+1.140222882" May 8 00:47:52.861493 kubelet[2520]: E0508 00:47:52.861294 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:55.237633 sudo[1647]: pam_unix(sudo:session): session closed for user root May 8 00:47:55.239457 sshd[1644]: pam_unix(sshd:session): session closed for user core May 8 00:47:55.243377 systemd[1]: sshd@6-10.0.0.152:22-10.0.0.1:43902.service: Deactivated successfully. May 8 00:47:55.245356 systemd[1]: session-7.scope: Deactivated successfully. May 8 00:47:55.245551 systemd[1]: session-7.scope: Consumed 4.605s CPU time, 156.9M memory peak, 0B memory swap peak. May 8 00:47:55.245950 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit. May 8 00:47:55.246917 systemd-logind[1450]: Removed session 7. May 8 00:47:55.800148 kubelet[2520]: I0508 00:47:55.800100 2520 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 00:47:55.800636 containerd[1468]: time="2025-05-08T00:47:55.800597633Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 8 00:47:55.800965 kubelet[2520]: I0508 00:47:55.800893 2520 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 00:47:56.643558 systemd[1]: Created slice kubepods-besteffort-pod5e77dc19_0998_4017_9f85_f7bd5703cb02.slice - libcontainer container kubepods-besteffort-pod5e77dc19_0998_4017_9f85_f7bd5703cb02.slice. 
May 8 00:47:56.675274 kubelet[2520]: I0508 00:47:56.674720 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5e77dc19-0998-4017-9f85-f7bd5703cb02-kube-proxy\") pod \"kube-proxy-fjkmn\" (UID: \"5e77dc19-0998-4017-9f85-f7bd5703cb02\") " pod="kube-system/kube-proxy-fjkmn" May 8 00:47:56.675274 kubelet[2520]: I0508 00:47:56.674772 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e77dc19-0998-4017-9f85-f7bd5703cb02-xtables-lock\") pod \"kube-proxy-fjkmn\" (UID: \"5e77dc19-0998-4017-9f85-f7bd5703cb02\") " pod="kube-system/kube-proxy-fjkmn" May 8 00:47:56.675274 kubelet[2520]: I0508 00:47:56.674794 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e77dc19-0998-4017-9f85-f7bd5703cb02-lib-modules\") pod \"kube-proxy-fjkmn\" (UID: \"5e77dc19-0998-4017-9f85-f7bd5703cb02\") " pod="kube-system/kube-proxy-fjkmn" May 8 00:47:56.675274 kubelet[2520]: I0508 00:47:56.674815 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzndv\" (UniqueName: \"kubernetes.io/projected/5e77dc19-0998-4017-9f85-f7bd5703cb02-kube-api-access-qzndv\") pod \"kube-proxy-fjkmn\" (UID: \"5e77dc19-0998-4017-9f85-f7bd5703cb02\") " pod="kube-system/kube-proxy-fjkmn" May 8 00:47:56.814841 systemd[1]: Created slice kubepods-besteffort-pod146fe5bf_d857_4b8e_8817_11f3fe5de23d.slice - libcontainer container kubepods-besteffort-pod146fe5bf_d857_4b8e_8817_11f3fe5de23d.slice. May 8 00:47:56.877052 kubelet[2520]: I0508 00:47:56.876992 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84tfs\" (UniqueName: \"kubernetes.io/projected/146fe5bf-d857-4b8e-8817-11f3fe5de23d-kube-api-access-84tfs\") pod \"tigera-operator-6f6897fdc5-6crmp\" (UID: \"146fe5bf-d857-4b8e-8817-11f3fe5de23d\") " pod="tigera-operator/tigera-operator-6f6897fdc5-6crmp" May 8 00:47:56.877052 kubelet[2520]: I0508 00:47:56.877047 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/146fe5bf-d857-4b8e-8817-11f3fe5de23d-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-6crmp\" (UID: \"146fe5bf-d857-4b8e-8817-11f3fe5de23d\") " pod="tigera-operator/tigera-operator-6f6897fdc5-6crmp" May 8 00:47:56.952685 kubelet[2520]: E0508 00:47:56.952556 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:56.953507 containerd[1468]: time="2025-05-08T00:47:56.953466683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fjkmn,Uid:5e77dc19-0998-4017-9f85-f7bd5703cb02,Namespace:kube-system,Attempt:0,}" May 8 00:47:56.984761 containerd[1468]: time="2025-05-08T00:47:56.984403746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:47:56.984761 containerd[1468]: time="2025-05-08T00:47:56.984464561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:47:56.984761 containerd[1468]: time="2025-05-08T00:47:56.984478567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:47:56.984761 containerd[1468]: time="2025-05-08T00:47:56.984606449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:47:57.013652 systemd[1]: Started cri-containerd-d6777cc179ffdf5d9368b2a891dd097bf7a55b2a3e37160dd25490b14559d88a.scope - libcontainer container d6777cc179ffdf5d9368b2a891dd097bf7a55b2a3e37160dd25490b14559d88a. May 8 00:47:57.038723 containerd[1468]: time="2025-05-08T00:47:57.038669358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fjkmn,Uid:5e77dc19-0998-4017-9f85-f7bd5703cb02,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6777cc179ffdf5d9368b2a891dd097bf7a55b2a3e37160dd25490b14559d88a\"" May 8 00:47:57.039516 kubelet[2520]: E0508 00:47:57.039490 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:57.041428 containerd[1468]: time="2025-05-08T00:47:57.041401669Z" level=info msg="CreateContainer within sandbox \"d6777cc179ffdf5d9368b2a891dd097bf7a55b2a3e37160dd25490b14559d88a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 00:47:57.060138 containerd[1468]: time="2025-05-08T00:47:57.060078847Z" level=info msg="CreateContainer within sandbox \"d6777cc179ffdf5d9368b2a891dd097bf7a55b2a3e37160dd25490b14559d88a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2806092b7c5fcbf69d6e949848190dc2787cdf82a798468ae34f53ecf126e3e8\"" May 8 00:47:57.060776 containerd[1468]: time="2025-05-08T00:47:57.060741408Z" level=info msg="StartContainer for \"2806092b7c5fcbf69d6e949848190dc2787cdf82a798468ae34f53ecf126e3e8\"" May 8 00:47:57.094671 systemd[1]: Started cri-containerd-2806092b7c5fcbf69d6e949848190dc2787cdf82a798468ae34f53ecf126e3e8.scope - libcontainer container 2806092b7c5fcbf69d6e949848190dc2787cdf82a798468ae34f53ecf126e3e8. May 8 00:47:57.119308 containerd[1468]: time="2025-05-08T00:47:57.119254829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-6crmp,Uid:146fe5bf-d857-4b8e-8817-11f3fe5de23d,Namespace:tigera-operator,Attempt:0,}" May 8 00:47:57.130358 containerd[1468]: time="2025-05-08T00:47:57.130242763Z" level=info msg="StartContainer for \"2806092b7c5fcbf69d6e949848190dc2787cdf82a798468ae34f53ecf126e3e8\" returns successfully" May 8 00:47:57.151388 containerd[1468]: time="2025-05-08T00:47:57.151149573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:47:57.151388 containerd[1468]: time="2025-05-08T00:47:57.151259290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:47:57.151388 containerd[1468]: time="2025-05-08T00:47:57.151285891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:47:57.151388 containerd[1468]: time="2025-05-08T00:47:57.151405767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:47:57.175788 systemd[1]: Started cri-containerd-588ee96b78a6aa6c644856a059690abd1bb6e25c19dbc0d862cac9fd49f5421b.scope - libcontainer container 588ee96b78a6aa6c644856a059690abd1bb6e25c19dbc0d862cac9fd49f5421b. May 8 00:47:57.218159 containerd[1468]: time="2025-05-08T00:47:57.218013547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-6crmp,Uid:146fe5bf-d857-4b8e-8817-11f3fe5de23d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"588ee96b78a6aa6c644856a059690abd1bb6e25c19dbc0d862cac9fd49f5421b\"" May 8 00:47:57.219934 containerd[1468]: time="2025-05-08T00:47:57.219888549Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 8 00:47:57.871936 kubelet[2520]: E0508 00:47:57.871895 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:57.880303 kubelet[2520]: I0508 00:47:57.880202 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fjkmn" podStartSLOduration=1.880179652 podStartE2EDuration="1.880179652s" podCreationTimestamp="2025-05-08 00:47:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:47:57.879925251 +0000 UTC m=+7.108028221" watchObservedRunningTime="2025-05-08 00:47:57.880179652 +0000 UTC m=+7.108282592" May 8 00:47:58.354250 kubelet[2520]: E0508 00:47:58.354208 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:58.873606 kubelet[2520]: E0508 00:47:58.873555 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:59.267736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2918876822.mount: Deactivated successfully. 
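
The mount unit name in the systemd line above, var-lib-containerd-tmpmounts-containerd\x2dmount2918876822.mount, is not corrupted text: systemd encodes a filesystem path into a unit name by turning "/" into "-" and escaping literal dashes as \x2d, so the unit maps back to /var/lib/containerd/tmpmounts/containerd-mount2918876822. Below is a small sketch of the reverse mapping, assuming only \xNN escapes and "-" separators occur (systemd's full escaping rules cover more cases); on a live host, systemd-escape --unescape --path should give the same result.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnitPath reverses systemd's unit-name escaping for path-based
// units: "\xNN" sequences become the byte they encode, remaining "-"
// separators become "/". Sketch only; it ignores edge cases such as the
// root path and leading dots.
func unescapeUnitPath(name string) string {
	name = strings.TrimSuffix(name, ".mount")
	var out strings.Builder
	for i := 0; i < len(name); i++ {
		switch {
		case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
			if b, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				out.WriteByte(byte(b))
				i += 3
				continue
			}
			out.WriteByte(name[i])
		case name[i] == '-':
			out.WriteByte('/')
		default:
			out.WriteByte(name[i])
		}
	}
	return "/" + out.String()
}

func main() {
	// The transient mount unit deactivated in the log above.
	unit := `var-lib-containerd-tmpmounts-containerd\x2dmount2918876822.mount`
	fmt.Println(unescapeUnitPath(unit))
	// Prints: /var/lib/containerd/tmpmounts/containerd-mount2918876822
}
```
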
May 8 00:47:59.669582 containerd[1468]: time="2025-05-08T00:47:59.669505229Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:47:59.670376 containerd[1468]: time="2025-05-08T00:47:59.670312754Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 8 00:47:59.671715 containerd[1468]: time="2025-05-08T00:47:59.671687558Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:47:59.673966 containerd[1468]: time="2025-05-08T00:47:59.673933256Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:47:59.674543 containerd[1468]: time="2025-05-08T00:47:59.674491960Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 2.454352528s" May 8 00:47:59.674587 containerd[1468]: time="2025-05-08T00:47:59.674548938Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 8 00:47:59.676460 containerd[1468]: time="2025-05-08T00:47:59.676425539Z" level=info msg="CreateContainer within sandbox \"588ee96b78a6aa6c644856a059690abd1bb6e25c19dbc0d862cac9fd49f5421b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 8 00:47:59.689084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3252550096.mount: Deactivated successfully. May 8 00:47:59.690658 containerd[1468]: time="2025-05-08T00:47:59.690618886Z" level=info msg="CreateContainer within sandbox \"588ee96b78a6aa6c644856a059690abd1bb6e25c19dbc0d862cac9fd49f5421b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"082a09ec638ffb6d67effd01ef194cf087322e73cc035da2df9b3b47f07b23ae\"" May 8 00:47:59.690989 containerd[1468]: time="2025-05-08T00:47:59.690962805Z" level=info msg="StartContainer for \"082a09ec638ffb6d67effd01ef194cf087322e73cc035da2df9b3b47f07b23ae\"" May 8 00:47:59.724646 systemd[1]: Started cri-containerd-082a09ec638ffb6d67effd01ef194cf087322e73cc035da2df9b3b47f07b23ae.scope - libcontainer container 082a09ec638ffb6d67effd01ef194cf087322e73cc035da2df9b3b47f07b23ae. 
May 8 00:47:59.734453 kubelet[2520]: E0508 00:47:59.734431 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:59.754586 containerd[1468]: time="2025-05-08T00:47:59.754482461Z" level=info msg="StartContainer for \"082a09ec638ffb6d67effd01ef194cf087322e73cc035da2df9b3b47f07b23ae\" returns successfully" May 8 00:47:59.876776 kubelet[2520]: E0508 00:47:59.876741 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:59.891991 kubelet[2520]: I0508 00:47:59.891930 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-6crmp" podStartSLOduration=1.4359500170000001 podStartE2EDuration="3.891914338s" podCreationTimestamp="2025-05-08 00:47:56 +0000 UTC" firstStartedPulling="2025-05-08 00:47:57.219363016 +0000 UTC m=+6.447465966" lastFinishedPulling="2025-05-08 00:47:59.675327357 +0000 UTC m=+8.903430287" observedRunningTime="2025-05-08 00:47:59.891657413 +0000 UTC m=+9.119760353" watchObservedRunningTime="2025-05-08 00:47:59.891914338 +0000 UTC m=+9.120017278" May 8 00:48:00.447173 kubelet[2520]: E0508 00:48:00.447135 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:00.878585 kubelet[2520]: E0508 00:48:00.878267 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:02.550090 systemd[1]: Created slice kubepods-besteffort-pod9ec0d077_be21_4315_a147_9a7a5ed2057f.slice - libcontainer container kubepods-besteffort-pod9ec0d077_be21_4315_a147_9a7a5ed2057f.slice. May 8 00:48:02.593459 systemd[1]: Created slice kubepods-besteffort-pod8184db77_2cc9_44f7_851a_93609772ab30.slice - libcontainer container kubepods-besteffort-pod8184db77_2cc9_44f7_851a_93609772ab30.slice. 
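
The pod_startup_latency_tracker entries above report two figures per pod: podStartE2EDuration, the time from pod creation to the pod being observed running, and podStartSLOduration, which additionally discounts the image-pull window (lastFinishedPulling minus firstStartedPulling). For the control-plane pods earlier, whose pull timestamps are the zero value 0001-01-01, the two figures coincide; the tigera-operator numbers are consistent with the same reading, as the sketch below checks against the logged values (the tiny residual relative to the logged 1.4359500170000001s presumably comes from the tracker snapshotting its own clock). This is an editor's reconstruction from the logged figures, not kubelet source.

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(layout, v string) time.Time {
	t, err := time.Parse(layout, v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Layout matching the kubelet log's timestamp format,
	// e.g. "2025-05-08 00:47:57.219363016 +0000 UTC".
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	// Figures copied from the tigera-operator pod_startup_latency_tracker entry above.
	e2e, _ := time.ParseDuration("3.891914338s")
	firstPull := mustParse(layout, "2025-05-08 00:47:57.219363016 +0000 UTC")
	lastPull := mustParse(layout, "2025-05-08 00:47:59.675327357 +0000 UTC")

	pull := lastPull.Sub(firstPull) // image-pull window
	slo := e2e - pull               // E2E minus pulling time
	fmt.Printf("pull=%v slo=%v\n", pull, slo)
	// slo comes out within tens of nanoseconds of the logged
	// podStartSLOduration=1.4359500170000001s.
}
```
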
May 8 00:48:02.616079 kubelet[2520]: I0508 00:48:02.616030 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8184db77-2cc9-44f7-851a-93609772ab30-flexvol-driver-host\") pod \"calico-node-shwsq\" (UID: \"8184db77-2cc9-44f7-851a-93609772ab30\") " pod="calico-system/calico-node-shwsq" May 8 00:48:02.616079 kubelet[2520]: I0508 00:48:02.616069 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9ec0d077-be21-4315-a147-9a7a5ed2057f-typha-certs\") pod \"calico-typha-7c765c4754-6vzj5\" (UID: \"9ec0d077-be21-4315-a147-9a7a5ed2057f\") " pod="calico-system/calico-typha-7c765c4754-6vzj5" May 8 00:48:02.616079 kubelet[2520]: I0508 00:48:02.616091 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8184db77-2cc9-44f7-851a-93609772ab30-policysync\") pod \"calico-node-shwsq\" (UID: \"8184db77-2cc9-44f7-851a-93609772ab30\") " pod="calico-system/calico-node-shwsq" May 8 00:48:02.616588 kubelet[2520]: I0508 00:48:02.616113 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8184db77-2cc9-44f7-851a-93609772ab30-xtables-lock\") pod \"calico-node-shwsq\" (UID: \"8184db77-2cc9-44f7-851a-93609772ab30\") " pod="calico-system/calico-node-shwsq" May 8 00:48:02.616588 kubelet[2520]: I0508 00:48:02.616175 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8184db77-2cc9-44f7-851a-93609772ab30-node-certs\") pod \"calico-node-shwsq\" (UID: \"8184db77-2cc9-44f7-851a-93609772ab30\") " pod="calico-system/calico-node-shwsq" May 8 00:48:02.616588 kubelet[2520]: I0508 00:48:02.616262 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ec0d077-be21-4315-a147-9a7a5ed2057f-tigera-ca-bundle\") pod \"calico-typha-7c765c4754-6vzj5\" (UID: \"9ec0d077-be21-4315-a147-9a7a5ed2057f\") " pod="calico-system/calico-typha-7c765c4754-6vzj5" May 8 00:48:02.616588 kubelet[2520]: I0508 00:48:02.616289 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8184db77-2cc9-44f7-851a-93609772ab30-cni-bin-dir\") pod \"calico-node-shwsq\" (UID: \"8184db77-2cc9-44f7-851a-93609772ab30\") " pod="calico-system/calico-node-shwsq" May 8 00:48:02.616588 kubelet[2520]: I0508 00:48:02.616309 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9glz4\" (UniqueName: \"kubernetes.io/projected/9ec0d077-be21-4315-a147-9a7a5ed2057f-kube-api-access-9glz4\") pod \"calico-typha-7c765c4754-6vzj5\" (UID: \"9ec0d077-be21-4315-a147-9a7a5ed2057f\") " pod="calico-system/calico-typha-7c765c4754-6vzj5" May 8 00:48:02.616727 kubelet[2520]: I0508 00:48:02.616329 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8184db77-2cc9-44f7-851a-93609772ab30-lib-modules\") pod \"calico-node-shwsq\" (UID: \"8184db77-2cc9-44f7-851a-93609772ab30\") " pod="calico-system/calico-node-shwsq" May 8 00:48:02.616727 
kubelet[2520]: I0508 00:48:02.616347 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8184db77-2cc9-44f7-851a-93609772ab30-cni-log-dir\") pod \"calico-node-shwsq\" (UID: \"8184db77-2cc9-44f7-851a-93609772ab30\") " pod="calico-system/calico-node-shwsq" May 8 00:48:02.616727 kubelet[2520]: I0508 00:48:02.616368 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8184db77-2cc9-44f7-851a-93609772ab30-tigera-ca-bundle\") pod \"calico-node-shwsq\" (UID: \"8184db77-2cc9-44f7-851a-93609772ab30\") " pod="calico-system/calico-node-shwsq" May 8 00:48:02.616727 kubelet[2520]: I0508 00:48:02.616403 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8184db77-2cc9-44f7-851a-93609772ab30-var-run-calico\") pod \"calico-node-shwsq\" (UID: \"8184db77-2cc9-44f7-851a-93609772ab30\") " pod="calico-system/calico-node-shwsq" May 8 00:48:02.616727 kubelet[2520]: I0508 00:48:02.616435 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8184db77-2cc9-44f7-851a-93609772ab30-var-lib-calico\") pod \"calico-node-shwsq\" (UID: \"8184db77-2cc9-44f7-851a-93609772ab30\") " pod="calico-system/calico-node-shwsq" May 8 00:48:02.616855 kubelet[2520]: I0508 00:48:02.616458 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk9mb\" (UniqueName: \"kubernetes.io/projected/8184db77-2cc9-44f7-851a-93609772ab30-kube-api-access-kk9mb\") pod \"calico-node-shwsq\" (UID: \"8184db77-2cc9-44f7-851a-93609772ab30\") " pod="calico-system/calico-node-shwsq" May 8 00:48:02.616855 kubelet[2520]: I0508 00:48:02.616477 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8184db77-2cc9-44f7-851a-93609772ab30-cni-net-dir\") pod \"calico-node-shwsq\" (UID: \"8184db77-2cc9-44f7-851a-93609772ab30\") " pod="calico-system/calico-node-shwsq" May 8 00:48:02.690966 kubelet[2520]: E0508 00:48:02.690905 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6svb9" podUID="e1a02ecd-8139-4fc8-add6-59265c14dd8e" May 8 00:48:02.717394 kubelet[2520]: I0508 00:48:02.717110 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e1a02ecd-8139-4fc8-add6-59265c14dd8e-registration-dir\") pod \"csi-node-driver-6svb9\" (UID: \"e1a02ecd-8139-4fc8-add6-59265c14dd8e\") " pod="calico-system/csi-node-driver-6svb9" May 8 00:48:02.717394 kubelet[2520]: I0508 00:48:02.717199 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jktr4\" (UniqueName: \"kubernetes.io/projected/e1a02ecd-8139-4fc8-add6-59265c14dd8e-kube-api-access-jktr4\") pod \"csi-node-driver-6svb9\" (UID: \"e1a02ecd-8139-4fc8-add6-59265c14dd8e\") " pod="calico-system/csi-node-driver-6svb9" May 8 00:48:02.717394 kubelet[2520]: I0508 00:48:02.717243 2520 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e1a02ecd-8139-4fc8-add6-59265c14dd8e-socket-dir\") pod \"csi-node-driver-6svb9\" (UID: \"e1a02ecd-8139-4fc8-add6-59265c14dd8e\") " pod="calico-system/csi-node-driver-6svb9" May 8 00:48:02.717394 kubelet[2520]: I0508 00:48:02.717265 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e1a02ecd-8139-4fc8-add6-59265c14dd8e-varrun\") pod \"csi-node-driver-6svb9\" (UID: \"e1a02ecd-8139-4fc8-add6-59265c14dd8e\") " pod="calico-system/csi-node-driver-6svb9" May 8 00:48:02.720268 kubelet[2520]: I0508 00:48:02.719853 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e1a02ecd-8139-4fc8-add6-59265c14dd8e-kubelet-dir\") pod \"csi-node-driver-6svb9\" (UID: \"e1a02ecd-8139-4fc8-add6-59265c14dd8e\") " pod="calico-system/csi-node-driver-6svb9" May 8 00:48:02.726974 kubelet[2520]: E0508 00:48:02.726841 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:02.726974 kubelet[2520]: W0508 00:48:02.726861 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:02.726974 kubelet[2520]: E0508 00:48:02.726888 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:02.728503 kubelet[2520]: E0508 00:48:02.728438 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:02.728503 kubelet[2520]: W0508 00:48:02.728454 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:02.728503 kubelet[2520]: E0508 00:48:02.728467 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:02.736341 kubelet[2520]: E0508 00:48:02.736316 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:02.736674 kubelet[2520]: W0508 00:48:02.736460 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:02.736674 kubelet[2520]: E0508 00:48:02.736489 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:48:02.740295 kubelet[2520]: E0508 00:48:02.740272 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:02.740438 kubelet[2520]: W0508 00:48:02.740380 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:02.740438 kubelet[2520]: E0508 00:48:02.740405 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:02.821281 kubelet[2520]: E0508 00:48:02.821171 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:02.821664 kubelet[2520]: W0508 00:48:02.821431 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:02.821664 kubelet[2520]: E0508 00:48:02.821464 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:02.821963 kubelet[2520]: E0508 00:48:02.821949 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:02.822113 kubelet[2520]: W0508 00:48:02.822027 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:02.822113 kubelet[2520]: E0508 00:48:02.822060 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:02.822625 kubelet[2520]: E0508 00:48:02.822549 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:02.822625 kubelet[2520]: W0508 00:48:02.822563 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:02.822625 kubelet[2520]: E0508 00:48:02.822581 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:02.824911 kubelet[2520]: E0508 00:48:02.824291 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:02.824911 kubelet[2520]: W0508 00:48:02.824309 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:02.824911 kubelet[2520]: E0508 00:48:02.824321 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:48:02.825701 kubelet[2520]: E0508 00:48:02.825683 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:02.825701 kubelet[2520]: W0508 00:48:02.825698 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:02.825776 kubelet[2520]: E0508 00:48:02.825709 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:02.826104 kubelet[2520]: E0508 00:48:02.826086 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:02.826104 kubelet[2520]: W0508 00:48:02.826099 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:02.826188 kubelet[2520]: E0508 00:48:02.826108 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:02.828351 kubelet[2520]: E0508 00:48:02.828320 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:02.828351 kubelet[2520]: W0508 00:48:02.828348 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:02.828641 kubelet[2520]: E0508 00:48:02.828498 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:02.829041 kubelet[2520]: E0508 00:48:02.829012 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:02.829041 kubelet[2520]: W0508 00:48:02.829030 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:02.829194 kubelet[2520]: E0508 00:48:02.829147 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:02.829333 kubelet[2520]: E0508 00:48:02.829312 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:02.829333 kubelet[2520]: W0508 00:48:02.829323 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:02.829444 kubelet[2520]: E0508 00:48:02.829424 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:48:02.829643 kubelet[2520]: E0508 00:48:02.829626 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:02.829643 kubelet[2520]: W0508 00:48:02.829637 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:02.829833 kubelet[2520]: E0508 00:48:02.829724 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:02.829977 kubelet[2520]: E0508 00:48:02.829937 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:02.829977 kubelet[2520]: W0508 00:48:02.829970 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:02.830148 kubelet[2520]: E0508 00:48:02.830012 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:02.830478 kubelet[2520]: E0508 00:48:02.830429 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:02.830478 kubelet[2520]: W0508 00:48:02.830448 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:02.830478 kubelet[2520]: E0508 00:48:02.830472 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:02.830878 kubelet[2520]: E0508 00:48:02.830840 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:02.830878 kubelet[2520]: W0508 00:48:02.830863 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:02.831654 kubelet[2520]: E0508 00:48:02.830978 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:02.831866 kubelet[2520]: E0508 00:48:02.831851 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:02.831931 kubelet[2520]: W0508 00:48:02.831866 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:02.832072 kubelet[2520]: E0508 00:48:02.831997 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:48:02.832244 kubelet[2520]: E0508 00:48:02.832143 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:02.832244 kubelet[2520]: W0508 00:48:02.832159 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:02.832329 kubelet[2520]: E0508 00:48:02.832278 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:02.832599 kubelet[2520]: E0508 00:48:02.832559 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:02.832599 kubelet[2520]: W0508 00:48:02.832591 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:02.832820 kubelet[2520]: E0508 00:48:02.832710 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:02.833017 kubelet[2520]: E0508 00:48:02.832979 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:02.833017 kubelet[2520]: W0508 00:48:02.832996 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:02.833017 kubelet[2520]: E0508 00:48:02.833014 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:02.833311 kubelet[2520]: E0508 00:48:02.833293 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:02.833354 kubelet[2520]: W0508 00:48:02.833311 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:02.833354 kubelet[2520]: E0508 00:48:02.833330 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:02.833640 kubelet[2520]: E0508 00:48:02.833622 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:02.833640 kubelet[2520]: W0508 00:48:02.833639 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:02.833794 kubelet[2520]: E0508 00:48:02.833767 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:48:02.834058 kubelet[2520]: E0508 00:48:02.834031 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:02.834058 kubelet[2520]: W0508 00:48:02.834053 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:02.834225 kubelet[2520]: E0508 00:48:02.834145 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:02.834385 kubelet[2520]: E0508 00:48:02.834362 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:02.834385 kubelet[2520]: W0508 00:48:02.834382 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:02.834542 kubelet[2520]: E0508 00:48:02.834458 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:02.834743 kubelet[2520]: E0508 00:48:02.834724 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:02.834821 kubelet[2520]: W0508 00:48:02.834743 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:02.834821 kubelet[2520]: E0508 00:48:02.834787 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:02.835074 kubelet[2520]: E0508 00:48:02.835053 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:02.835074 kubelet[2520]: W0508 00:48:02.835071 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:02.835128 kubelet[2520]: E0508 00:48:02.835100 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:02.835662 kubelet[2520]: E0508 00:48:02.835595 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:02.835662 kubelet[2520]: W0508 00:48:02.835613 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:02.835662 kubelet[2520]: E0508 00:48:02.835631 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:48:02.835940 kubelet[2520]: E0508 00:48:02.835916 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:02.835940 kubelet[2520]: W0508 00:48:02.835936 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:02.836135 kubelet[2520]: E0508 00:48:02.835950 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:02.848416 kubelet[2520]: E0508 00:48:02.848370 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:02.848416 kubelet[2520]: W0508 00:48:02.848386 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:02.848416 kubelet[2520]: E0508 00:48:02.848400 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:02.853043 kubelet[2520]: E0508 00:48:02.853019 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:02.853396 containerd[1468]: time="2025-05-08T00:48:02.853356614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c765c4754-6vzj5,Uid:9ec0d077-be21-4315-a147-9a7a5ed2057f,Namespace:calico-system,Attempt:0,}" May 8 00:48:02.883191 containerd[1468]: time="2025-05-08T00:48:02.881968363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:48:02.883329 containerd[1468]: time="2025-05-08T00:48:02.883252965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:48:02.883329 containerd[1468]: time="2025-05-08T00:48:02.883305043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:48:02.883659 containerd[1468]: time="2025-05-08T00:48:02.883403209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:48:02.896768 kubelet[2520]: E0508 00:48:02.896472 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:02.897112 containerd[1468]: time="2025-05-08T00:48:02.897074512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-shwsq,Uid:8184db77-2cc9-44f7-851a-93609772ab30,Namespace:calico-system,Attempt:0,}" May 8 00:48:02.902900 systemd[1]: Started cri-containerd-cfd181a95264caf5ddce968f37bd92761f6e0b41a73186b61e59409b69d70c95.scope - libcontainer container cfd181a95264caf5ddce968f37bd92761f6e0b41a73186b61e59409b69d70c95. May 8 00:48:02.922574 containerd[1468]: time="2025-05-08T00:48:02.922400175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:48:02.922574 containerd[1468]: time="2025-05-08T00:48:02.922474055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:48:02.922574 containerd[1468]: time="2025-05-08T00:48:02.922485546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:48:02.923694 containerd[1468]: time="2025-05-08T00:48:02.923597373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:48:02.947652 systemd[1]: Started cri-containerd-4d05e4000dbac89deff886b7192de39c4a665110435fbe102b33d274d47355eb.scope - libcontainer container 4d05e4000dbac89deff886b7192de39c4a665110435fbe102b33d274d47355eb. May 8 00:48:02.957471 containerd[1468]: time="2025-05-08T00:48:02.957427660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c765c4754-6vzj5,Uid:9ec0d077-be21-4315-a147-9a7a5ed2057f,Namespace:calico-system,Attempt:0,} returns sandbox id \"cfd181a95264caf5ddce968f37bd92761f6e0b41a73186b61e59409b69d70c95\"" May 8 00:48:02.958339 kubelet[2520]: E0508 00:48:02.958191 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:02.959561 containerd[1468]: time="2025-05-08T00:48:02.959467164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 8 00:48:02.973733 containerd[1468]: time="2025-05-08T00:48:02.973610769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-shwsq,Uid:8184db77-2cc9-44f7-851a-93609772ab30,Namespace:calico-system,Attempt:0,} returns sandbox id \"4d05e4000dbac89deff886b7192de39c4a665110435fbe102b33d274d47355eb\"" May 8 00:48:02.974715 kubelet[2520]: E0508 00:48:02.974687 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:04.846949 kubelet[2520]: E0508 00:48:04.846886 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6svb9" podUID="e1a02ecd-8139-4fc8-add6-59265c14dd8e" May 8 00:48:05.834844 containerd[1468]: time="2025-05-08T00:48:05.834790209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:48:05.835762 containerd[1468]: time="2025-05-08T00:48:05.835720040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 8 00:48:05.837002 containerd[1468]: time="2025-05-08T00:48:05.836953554Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:48:05.839124 containerd[1468]: time="2025-05-08T00:48:05.839093084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:48:05.839699 containerd[1468]: 
time="2025-05-08T00:48:05.839657106Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.880159725s" May 8 00:48:05.839753 containerd[1468]: time="2025-05-08T00:48:05.839699886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 8 00:48:05.840888 containerd[1468]: time="2025-05-08T00:48:05.840850894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 8 00:48:05.847297 containerd[1468]: time="2025-05-08T00:48:05.847258053Z" level=info msg="CreateContainer within sandbox \"cfd181a95264caf5ddce968f37bd92761f6e0b41a73186b61e59409b69d70c95\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 8 00:48:05.866151 containerd[1468]: time="2025-05-08T00:48:05.866109109Z" level=info msg="CreateContainer within sandbox \"cfd181a95264caf5ddce968f37bd92761f6e0b41a73186b61e59409b69d70c95\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"98cab7bd971eca597dc641cefdaef0e3db7b4a3b554c231bd45b0f1c0fd75254\"" May 8 00:48:05.866512 containerd[1468]: time="2025-05-08T00:48:05.866479497Z" level=info msg="StartContainer for \"98cab7bd971eca597dc641cefdaef0e3db7b4a3b554c231bd45b0f1c0fd75254\"" May 8 00:48:05.893245 systemd[1]: Started cri-containerd-98cab7bd971eca597dc641cefdaef0e3db7b4a3b554c231bd45b0f1c0fd75254.scope - libcontainer container 98cab7bd971eca597dc641cefdaef0e3db7b4a3b554c231bd45b0f1c0fd75254. May 8 00:48:05.935639 containerd[1468]: time="2025-05-08T00:48:05.935513996Z" level=info msg="StartContainer for \"98cab7bd971eca597dc641cefdaef0e3db7b4a3b554c231bd45b0f1c0fd75254\" returns successfully" May 8 00:48:06.847073 kubelet[2520]: E0508 00:48:06.847019 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6svb9" podUID="e1a02ecd-8139-4fc8-add6-59265c14dd8e" May 8 00:48:06.901558 kubelet[2520]: E0508 00:48:06.899702 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:06.931284 kubelet[2520]: E0508 00:48:06.931235 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.931284 kubelet[2520]: W0508 00:48:06.931272 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.931446 kubelet[2520]: E0508 00:48:06.931301 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:48:06.931618 kubelet[2520]: E0508 00:48:06.931600 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.931618 kubelet[2520]: W0508 00:48:06.931612 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.931670 kubelet[2520]: E0508 00:48:06.931621 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:06.931925 kubelet[2520]: E0508 00:48:06.931880 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.931964 kubelet[2520]: W0508 00:48:06.931920 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.931964 kubelet[2520]: E0508 00:48:06.931956 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:06.932217 kubelet[2520]: E0508 00:48:06.932193 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.932217 kubelet[2520]: W0508 00:48:06.932203 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.932217 kubelet[2520]: E0508 00:48:06.932212 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:06.932454 kubelet[2520]: E0508 00:48:06.932439 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.932454 kubelet[2520]: W0508 00:48:06.932450 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.932535 kubelet[2520]: E0508 00:48:06.932459 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:06.932704 kubelet[2520]: E0508 00:48:06.932683 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.932704 kubelet[2520]: W0508 00:48:06.932703 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.932752 kubelet[2520]: E0508 00:48:06.932711 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:48:06.932947 kubelet[2520]: E0508 00:48:06.932928 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.932947 kubelet[2520]: W0508 00:48:06.932945 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.933003 kubelet[2520]: E0508 00:48:06.932958 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:06.933189 kubelet[2520]: E0508 00:48:06.933173 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.933223 kubelet[2520]: W0508 00:48:06.933196 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.933223 kubelet[2520]: E0508 00:48:06.933207 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:06.933428 kubelet[2520]: E0508 00:48:06.933415 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.933455 kubelet[2520]: W0508 00:48:06.933426 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.933455 kubelet[2520]: E0508 00:48:06.933438 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:06.933655 kubelet[2520]: E0508 00:48:06.933640 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.933725 kubelet[2520]: W0508 00:48:06.933652 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.933725 kubelet[2520]: E0508 00:48:06.933673 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:06.933898 kubelet[2520]: E0508 00:48:06.933883 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.933898 kubelet[2520]: W0508 00:48:06.933896 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.933942 kubelet[2520]: E0508 00:48:06.933907 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:48:06.934109 kubelet[2520]: E0508 00:48:06.934094 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.934109 kubelet[2520]: W0508 00:48:06.934107 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.934160 kubelet[2520]: E0508 00:48:06.934117 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:06.934358 kubelet[2520]: E0508 00:48:06.934343 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.934358 kubelet[2520]: W0508 00:48:06.934356 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.934402 kubelet[2520]: E0508 00:48:06.934366 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:06.934581 kubelet[2520]: E0508 00:48:06.934567 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.934581 kubelet[2520]: W0508 00:48:06.934579 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.934636 kubelet[2520]: E0508 00:48:06.934589 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:06.934791 kubelet[2520]: E0508 00:48:06.934776 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.934791 kubelet[2520]: W0508 00:48:06.934788 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.934835 kubelet[2520]: E0508 00:48:06.934798 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:06.960264 kubelet[2520]: E0508 00:48:06.960231 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.960264 kubelet[2520]: W0508 00:48:06.960256 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.960327 kubelet[2520]: E0508 00:48:06.960279 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:48:06.960536 kubelet[2520]: E0508 00:48:06.960506 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.960574 kubelet[2520]: W0508 00:48:06.960517 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.960574 kubelet[2520]: E0508 00:48:06.960551 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:06.960865 kubelet[2520]: E0508 00:48:06.960835 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.960865 kubelet[2520]: W0508 00:48:06.960859 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.960919 kubelet[2520]: E0508 00:48:06.960892 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:06.961119 kubelet[2520]: E0508 00:48:06.961096 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.961119 kubelet[2520]: W0508 00:48:06.961112 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.961161 kubelet[2520]: E0508 00:48:06.961128 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:06.961331 kubelet[2520]: E0508 00:48:06.961308 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.961331 kubelet[2520]: W0508 00:48:06.961324 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.961379 kubelet[2520]: E0508 00:48:06.961340 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:06.961624 kubelet[2520]: E0508 00:48:06.961607 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.961624 kubelet[2520]: W0508 00:48:06.961621 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.961676 kubelet[2520]: E0508 00:48:06.961638 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:48:06.961896 kubelet[2520]: E0508 00:48:06.961882 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.961896 kubelet[2520]: W0508 00:48:06.961893 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.961953 kubelet[2520]: E0508 00:48:06.961903 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:06.962130 kubelet[2520]: E0508 00:48:06.962110 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.962130 kubelet[2520]: W0508 00:48:06.962120 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.962176 kubelet[2520]: E0508 00:48:06.962154 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:06.962330 kubelet[2520]: E0508 00:48:06.962311 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.962330 kubelet[2520]: W0508 00:48:06.962322 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.962378 kubelet[2520]: E0508 00:48:06.962349 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:06.962517 kubelet[2520]: E0508 00:48:06.962503 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.962517 kubelet[2520]: W0508 00:48:06.962513 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.962578 kubelet[2520]: E0508 00:48:06.962540 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:06.962805 kubelet[2520]: E0508 00:48:06.962787 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.962805 kubelet[2520]: W0508 00:48:06.962803 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.962863 kubelet[2520]: E0508 00:48:06.962821 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:48:06.963052 kubelet[2520]: E0508 00:48:06.963036 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.963075 kubelet[2520]: W0508 00:48:06.963050 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.963075 kubelet[2520]: E0508 00:48:06.963066 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:06.963309 kubelet[2520]: E0508 00:48:06.963286 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.963309 kubelet[2520]: W0508 00:48:06.963301 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.963357 kubelet[2520]: E0508 00:48:06.963317 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:06.963603 kubelet[2520]: E0508 00:48:06.963586 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.963638 kubelet[2520]: W0508 00:48:06.963603 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.963638 kubelet[2520]: E0508 00:48:06.963619 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:06.963807 kubelet[2520]: E0508 00:48:06.963795 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.963807 kubelet[2520]: W0508 00:48:06.963803 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.963862 kubelet[2520]: E0508 00:48:06.963816 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:06.964035 kubelet[2520]: E0508 00:48:06.964025 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.964035 kubelet[2520]: W0508 00:48:06.964033 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.964077 kubelet[2520]: E0508 00:48:06.964047 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:48:06.964333 kubelet[2520]: E0508 00:48:06.964310 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.964333 kubelet[2520]: W0508 00:48:06.964324 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.964380 kubelet[2520]: E0508 00:48:06.964338 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:06.964561 kubelet[2520]: E0508 00:48:06.964536 2520 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:48:06.964561 kubelet[2520]: W0508 00:48:06.964548 2520 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:48:06.964561 kubelet[2520]: E0508 00:48:06.964556 2520 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:48:07.223594 containerd[1468]: time="2025-05-08T00:48:07.223545741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:48:07.224334 containerd[1468]: time="2025-05-08T00:48:07.224270675Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 8 00:48:07.225413 containerd[1468]: time="2025-05-08T00:48:07.225390793Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:48:07.229398 containerd[1468]: time="2025-05-08T00:48:07.229359305Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:48:07.230105 containerd[1468]: time="2025-05-08T00:48:07.230076154Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.389191936s" May 8 00:48:07.230146 containerd[1468]: time="2025-05-08T00:48:07.230107644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 8 00:48:07.232023 containerd[1468]: time="2025-05-08T00:48:07.231999044Z" level=info msg="CreateContainer within sandbox \"4d05e4000dbac89deff886b7192de39c4a665110435fbe102b33d274d47355eb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 8 00:48:07.253302 containerd[1468]: time="2025-05-08T00:48:07.253251654Z" level=info msg="CreateContainer within sandbox 
\"4d05e4000dbac89deff886b7192de39c4a665110435fbe102b33d274d47355eb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"cf67db9dc90c03d5eba17dbd2c383414507730e62a8142e4bc7b1b665826df4f\"" May 8 00:48:07.253869 containerd[1468]: time="2025-05-08T00:48:07.253823901Z" level=info msg="StartContainer for \"cf67db9dc90c03d5eba17dbd2c383414507730e62a8142e4bc7b1b665826df4f\"" May 8 00:48:07.286679 systemd[1]: Started cri-containerd-cf67db9dc90c03d5eba17dbd2c383414507730e62a8142e4bc7b1b665826df4f.scope - libcontainer container cf67db9dc90c03d5eba17dbd2c383414507730e62a8142e4bc7b1b665826df4f. May 8 00:48:07.315391 containerd[1468]: time="2025-05-08T00:48:07.315354649Z" level=info msg="StartContainer for \"cf67db9dc90c03d5eba17dbd2c383414507730e62a8142e4bc7b1b665826df4f\" returns successfully" May 8 00:48:07.325408 systemd[1]: cri-containerd-cf67db9dc90c03d5eba17dbd2c383414507730e62a8142e4bc7b1b665826df4f.scope: Deactivated successfully. May 8 00:48:07.845741 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf67db9dc90c03d5eba17dbd2c383414507730e62a8142e4bc7b1b665826df4f-rootfs.mount: Deactivated successfully. May 8 00:48:07.902310 kubelet[2520]: I0508 00:48:07.902282 2520 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:48:07.902833 kubelet[2520]: E0508 00:48:07.902617 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:07.902833 kubelet[2520]: E0508 00:48:07.902732 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:07.909447 containerd[1468]: time="2025-05-08T00:48:07.906966180Z" level=info msg="shim disconnected" id=cf67db9dc90c03d5eba17dbd2c383414507730e62a8142e4bc7b1b665826df4f namespace=k8s.io May 8 00:48:07.909447 containerd[1468]: time="2025-05-08T00:48:07.909428045Z" level=warning msg="cleaning up after shim disconnected" id=cf67db9dc90c03d5eba17dbd2c383414507730e62a8142e4bc7b1b665826df4f namespace=k8s.io May 8 00:48:07.909447 containerd[1468]: time="2025-05-08T00:48:07.909446970Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:48:07.917105 kubelet[2520]: I0508 00:48:07.916661 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7c765c4754-6vzj5" podStartSLOduration=3.035290376 podStartE2EDuration="5.916634071s" podCreationTimestamp="2025-05-08 00:48:02 +0000 UTC" firstStartedPulling="2025-05-08 00:48:02.959200873 +0000 UTC m=+12.187303813" lastFinishedPulling="2025-05-08 00:48:05.840544568 +0000 UTC m=+15.068647508" observedRunningTime="2025-05-08 00:48:06.911770085 +0000 UTC m=+16.139873025" watchObservedRunningTime="2025-05-08 00:48:07.916634071 +0000 UTC m=+17.144737011" May 8 00:48:08.847057 kubelet[2520]: E0508 00:48:08.847003 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6svb9" podUID="e1a02ecd-8139-4fc8-add6-59265c14dd8e" May 8 00:48:08.908571 kubelet[2520]: E0508 00:48:08.908546 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" May 8 00:48:08.909084 containerd[1468]: time="2025-05-08T00:48:08.909054009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 8 00:48:10.847353 kubelet[2520]: E0508 00:48:10.847297 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6svb9" podUID="e1a02ecd-8139-4fc8-add6-59265c14dd8e" May 8 00:48:12.848776 kubelet[2520]: E0508 00:48:12.848716 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6svb9" podUID="e1a02ecd-8139-4fc8-add6-59265c14dd8e" May 8 00:48:13.432694 containerd[1468]: time="2025-05-08T00:48:13.432640810Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:48:13.433485 containerd[1468]: time="2025-05-08T00:48:13.433444181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 8 00:48:13.434790 containerd[1468]: time="2025-05-08T00:48:13.434735789Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:48:13.437133 containerd[1468]: time="2025-05-08T00:48:13.437064649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:48:13.437642 containerd[1468]: time="2025-05-08T00:48:13.437609854Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 4.528521249s" May 8 00:48:13.437642 containerd[1468]: time="2025-05-08T00:48:13.437638378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 8 00:48:13.462602 containerd[1468]: time="2025-05-08T00:48:13.462515306Z" level=info msg="CreateContainer within sandbox \"4d05e4000dbac89deff886b7192de39c4a665110435fbe102b33d274d47355eb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 8 00:48:13.477747 containerd[1468]: time="2025-05-08T00:48:13.477711170Z" level=info msg="CreateContainer within sandbox \"4d05e4000dbac89deff886b7192de39c4a665110435fbe102b33d274d47355eb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2ed3d8c0fcbcf944dac0b78846c8830a0ea75c205f2a69870cb78e3d97ba8f87\"" May 8 00:48:13.478193 containerd[1468]: time="2025-05-08T00:48:13.478163580Z" level=info msg="StartContainer for \"2ed3d8c0fcbcf944dac0b78846c8830a0ea75c205f2a69870cb78e3d97ba8f87\"" May 8 00:48:13.508467 systemd[1]: run-containerd-runc-k8s.io-2ed3d8c0fcbcf944dac0b78846c8830a0ea75c205f2a69870cb78e3d97ba8f87-runc.MqjZt2.mount: Deactivated successfully. 
May 8 00:48:13.528665 systemd[1]: Started cri-containerd-2ed3d8c0fcbcf944dac0b78846c8830a0ea75c205f2a69870cb78e3d97ba8f87.scope - libcontainer container 2ed3d8c0fcbcf944dac0b78846c8830a0ea75c205f2a69870cb78e3d97ba8f87. May 8 00:48:13.562162 containerd[1468]: time="2025-05-08T00:48:13.562108419Z" level=info msg="StartContainer for \"2ed3d8c0fcbcf944dac0b78846c8830a0ea75c205f2a69870cb78e3d97ba8f87\" returns successfully" May 8 00:48:14.295307 kubelet[2520]: E0508 00:48:14.295263 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:14.849131 kubelet[2520]: E0508 00:48:14.849070 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6svb9" podUID="e1a02ecd-8139-4fc8-add6-59265c14dd8e" May 8 00:48:15.295615 kubelet[2520]: E0508 00:48:15.295575 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:15.894078 systemd[1]: cri-containerd-2ed3d8c0fcbcf944dac0b78846c8830a0ea75c205f2a69870cb78e3d97ba8f87.scope: Deactivated successfully. May 8 00:48:15.915222 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ed3d8c0fcbcf944dac0b78846c8830a0ea75c205f2a69870cb78e3d97ba8f87-rootfs.mount: Deactivated successfully. May 8 00:48:15.972199 kubelet[2520]: I0508 00:48:15.972150 2520 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 8 00:48:16.036043 containerd[1468]: time="2025-05-08T00:48:16.035981559Z" level=info msg="shim disconnected" id=2ed3d8c0fcbcf944dac0b78846c8830a0ea75c205f2a69870cb78e3d97ba8f87 namespace=k8s.io May 8 00:48:16.036043 containerd[1468]: time="2025-05-08T00:48:16.036033226Z" level=warning msg="cleaning up after shim disconnected" id=2ed3d8c0fcbcf944dac0b78846c8830a0ea75c205f2a69870cb78e3d97ba8f87 namespace=k8s.io May 8 00:48:16.036043 containerd[1468]: time="2025-05-08T00:48:16.036041571Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:48:16.216487 systemd[1]: Created slice kubepods-burstable-podd5677d4f_cfc9_48a7_bd1a_0da37dd788a8.slice - libcontainer container kubepods-burstable-podd5677d4f_cfc9_48a7_bd1a_0da37dd788a8.slice. May 8 00:48:16.235795 systemd[1]: Created slice kubepods-besteffort-pod714dc4ce_e252_47a3_96ad_e69d699c235e.slice - libcontainer container kubepods-besteffort-pod714dc4ce_e252_47a3_96ad_e69d699c235e.slice. May 8 00:48:16.241404 systemd[1]: Created slice kubepods-besteffort-pod8a7556ed_ec9c_47f1_a41d_af868f71d780.slice - libcontainer container kubepods-besteffort-pod8a7556ed_ec9c_47f1_a41d_af868f71d780.slice. May 8 00:48:16.246963 systemd[1]: Created slice kubepods-burstable-podccc040eb_4bcd_483b_a0e3_5bbd655bd91d.slice - libcontainer container kubepods-burstable-podccc040eb_4bcd_483b_a0e3_5bbd655bd91d.slice. May 8 00:48:16.251776 systemd[1]: Created slice kubepods-besteffort-pod22faceee_0d0f_4896_b166_6798291089f0.slice - libcontainer container kubepods-besteffort-pod22faceee_0d0f_4896_b166_6798291089f0.slice. 
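The kubepods-*-pod<uid>.slice units created here follow a mechanical naming pattern when the kubelet uses the systemd cgroup driver: the pod's QoS class picks the parent slice segment and the dashes in the pod UID are replaced with underscores. A small sketch of that mapping for burstable and besteffort pods (the function name is illustrative, not the kubelet's):

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName sketches how the slice names in the log are derived from a
    // pod's QoS class ("burstable", "besteffort") and its UID.
    func podSliceName(qosClass, podUID string) string {
        escaped := strings.ReplaceAll(podUID, "-", "_")
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
    }

    func main() {
        fmt.Println(podSliceName("burstable", "d5677d4f-cfc9-48a7-bd1a-0da37dd788a8"))
        // kubepods-burstable-podd5677d4f_cfc9_48a7_bd1a_0da37dd788a8.slice
    }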
May 8 00:48:16.298347 kubelet[2520]: E0508 00:48:16.298310 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:16.300904 containerd[1468]: time="2025-05-08T00:48:16.300871947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 8 00:48:16.331698 kubelet[2520]: I0508 00:48:16.331645 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5677d4f-cfc9-48a7-bd1a-0da37dd788a8-config-volume\") pod \"coredns-6f6b679f8f-p2xjw\" (UID: \"d5677d4f-cfc9-48a7-bd1a-0da37dd788a8\") " pod="kube-system/coredns-6f6b679f8f-p2xjw" May 8 00:48:16.331812 kubelet[2520]: I0508 00:48:16.331699 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs4tm\" (UniqueName: \"kubernetes.io/projected/8a7556ed-ec9c-47f1-a41d-af868f71d780-kube-api-access-xs4tm\") pod \"calico-kube-controllers-6b9f64494b-ffr58\" (UID: \"8a7556ed-ec9c-47f1-a41d-af868f71d780\") " pod="calico-system/calico-kube-controllers-6b9f64494b-ffr58" May 8 00:48:16.331812 kubelet[2520]: I0508 00:48:16.331722 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9bcp\" (UniqueName: \"kubernetes.io/projected/d5677d4f-cfc9-48a7-bd1a-0da37dd788a8-kube-api-access-w9bcp\") pod \"coredns-6f6b679f8f-p2xjw\" (UID: \"d5677d4f-cfc9-48a7-bd1a-0da37dd788a8\") " pod="kube-system/coredns-6f6b679f8f-p2xjw" May 8 00:48:16.331812 kubelet[2520]: I0508 00:48:16.331741 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txgwn\" (UniqueName: \"kubernetes.io/projected/ccc040eb-4bcd-483b-a0e3-5bbd655bd91d-kube-api-access-txgwn\") pod \"coredns-6f6b679f8f-vxpm5\" (UID: \"ccc040eb-4bcd-483b-a0e3-5bbd655bd91d\") " pod="kube-system/coredns-6f6b679f8f-vxpm5" May 8 00:48:16.331812 kubelet[2520]: I0508 00:48:16.331760 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a7556ed-ec9c-47f1-a41d-af868f71d780-tigera-ca-bundle\") pod \"calico-kube-controllers-6b9f64494b-ffr58\" (UID: \"8a7556ed-ec9c-47f1-a41d-af868f71d780\") " pod="calico-system/calico-kube-controllers-6b9f64494b-ffr58" May 8 00:48:16.331812 kubelet[2520]: I0508 00:48:16.331784 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn8cl\" (UniqueName: \"kubernetes.io/projected/714dc4ce-e252-47a3-96ad-e69d699c235e-kube-api-access-xn8cl\") pod \"calico-apiserver-55bbbd787d-nnsrx\" (UID: \"714dc4ce-e252-47a3-96ad-e69d699c235e\") " pod="calico-apiserver/calico-apiserver-55bbbd787d-nnsrx" May 8 00:48:16.332008 kubelet[2520]: I0508 00:48:16.331818 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccc040eb-4bcd-483b-a0e3-5bbd655bd91d-config-volume\") pod \"coredns-6f6b679f8f-vxpm5\" (UID: \"ccc040eb-4bcd-483b-a0e3-5bbd655bd91d\") " pod="kube-system/coredns-6f6b679f8f-vxpm5" May 8 00:48:16.332008 kubelet[2520]: I0508 00:48:16.331841 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/22faceee-0d0f-4896-b166-6798291089f0-calico-apiserver-certs\") pod \"calico-apiserver-55bbbd787d-txq5g\" (UID: \"22faceee-0d0f-4896-b166-6798291089f0\") " pod="calico-apiserver/calico-apiserver-55bbbd787d-txq5g" May 8 00:48:16.332008 kubelet[2520]: I0508 00:48:16.331864 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r2bb\" (UniqueName: \"kubernetes.io/projected/22faceee-0d0f-4896-b166-6798291089f0-kube-api-access-8r2bb\") pod \"calico-apiserver-55bbbd787d-txq5g\" (UID: \"22faceee-0d0f-4896-b166-6798291089f0\") " pod="calico-apiserver/calico-apiserver-55bbbd787d-txq5g" May 8 00:48:16.332008 kubelet[2520]: I0508 00:48:16.331883 2520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/714dc4ce-e252-47a3-96ad-e69d699c235e-calico-apiserver-certs\") pod \"calico-apiserver-55bbbd787d-nnsrx\" (UID: \"714dc4ce-e252-47a3-96ad-e69d699c235e\") " pod="calico-apiserver/calico-apiserver-55bbbd787d-nnsrx" May 8 00:48:16.540560 containerd[1468]: time="2025-05-08T00:48:16.540438074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbbd787d-nnsrx,Uid:714dc4ce-e252-47a3-96ad-e69d699c235e,Namespace:calico-apiserver,Attempt:0,}" May 8 00:48:16.546138 containerd[1468]: time="2025-05-08T00:48:16.546096520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b9f64494b-ffr58,Uid:8a7556ed-ec9c-47f1-a41d-af868f71d780,Namespace:calico-system,Attempt:0,}" May 8 00:48:16.549295 kubelet[2520]: E0508 00:48:16.549250 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:16.550558 containerd[1468]: time="2025-05-08T00:48:16.549764264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vxpm5,Uid:ccc040eb-4bcd-483b-a0e3-5bbd655bd91d,Namespace:kube-system,Attempt:0,}" May 8 00:48:16.554881 containerd[1468]: time="2025-05-08T00:48:16.554792645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbbd787d-txq5g,Uid:22faceee-0d0f-4896-b166-6798291089f0,Namespace:calico-apiserver,Attempt:0,}" May 8 00:48:16.820630 kubelet[2520]: E0508 00:48:16.820487 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:16.821050 containerd[1468]: time="2025-05-08T00:48:16.820995860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-p2xjw,Uid:d5677d4f-cfc9-48a7-bd1a-0da37dd788a8,Namespace:kube-system,Attempt:0,}" May 8 00:48:16.853387 systemd[1]: Created slice kubepods-besteffort-pode1a02ecd_8139_4fc8_add6_59265c14dd8e.slice - libcontainer container kubepods-besteffort-pode1a02ecd_8139_4fc8_add6_59265c14dd8e.slice. 
May 8 00:48:16.855866 containerd[1468]: time="2025-05-08T00:48:16.855824803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6svb9,Uid:e1a02ecd-8139-4fc8-add6-59265c14dd8e,Namespace:calico-system,Attempt:0,}" May 8 00:48:17.718434 containerd[1468]: time="2025-05-08T00:48:17.718319011Z" level=error msg="Failed to destroy network for sandbox \"b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:48:17.718841 containerd[1468]: time="2025-05-08T00:48:17.718668989Z" level=error msg="encountered an error cleaning up failed sandbox \"b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:48:17.718841 containerd[1468]: time="2025-05-08T00:48:17.718713763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbbd787d-nnsrx,Uid:714dc4ce-e252-47a3-96ad-e69d699c235e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:48:17.718975 kubelet[2520]: E0508 00:48:17.718932 2520 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:48:17.719214 kubelet[2520]: E0508 00:48:17.719011 2520 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55bbbd787d-nnsrx" May 8 00:48:17.719214 kubelet[2520]: E0508 00:48:17.719031 2520 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55bbbd787d-nnsrx" May 8 00:48:17.719214 kubelet[2520]: E0508 00:48:17.719077 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55bbbd787d-nnsrx_calico-apiserver(714dc4ce-e252-47a3-96ad-e69d699c235e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55bbbd787d-nnsrx_calico-apiserver(714dc4ce-e252-47a3-96ad-e69d699c235e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55bbbd787d-nnsrx" podUID="714dc4ce-e252-47a3-96ad-e69d699c235e" May 8 00:48:17.802424 containerd[1468]: time="2025-05-08T00:48:17.802366113Z" level=error msg="Failed to destroy network for sandbox \"f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:48:17.802825 containerd[1468]: time="2025-05-08T00:48:17.802795479Z" level=error msg="encountered an error cleaning up failed sandbox \"f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:48:17.802867 containerd[1468]: time="2025-05-08T00:48:17.802850563Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b9f64494b-ffr58,Uid:8a7556ed-ec9c-47f1-a41d-af868f71d780,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:48:17.803159 kubelet[2520]: E0508 00:48:17.803106 2520 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:48:17.803209 kubelet[2520]: E0508 00:48:17.803177 2520 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b9f64494b-ffr58" May 8 00:48:17.803209 kubelet[2520]: E0508 00:48:17.803200 2520 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b9f64494b-ffr58" May 8 00:48:17.803274 kubelet[2520]: E0508 00:48:17.803246 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b9f64494b-ffr58_calico-system(8a7556ed-ec9c-47f1-a41d-af868f71d780)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-6b9f64494b-ffr58_calico-system(8a7556ed-ec9c-47f1-a41d-af868f71d780)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b9f64494b-ffr58" podUID="8a7556ed-ec9c-47f1-a41d-af868f71d780" May 8 00:48:17.917553 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765-shm.mount: Deactivated successfully. May 8 00:48:17.917658 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a-shm.mount: Deactivated successfully. May 8 00:48:17.971312 containerd[1468]: time="2025-05-08T00:48:17.971190086Z" level=error msg="Failed to destroy network for sandbox \"af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:48:17.972156 containerd[1468]: time="2025-05-08T00:48:17.971625776Z" level=error msg="encountered an error cleaning up failed sandbox \"af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:48:17.972156 containerd[1468]: time="2025-05-08T00:48:17.971682111Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vxpm5,Uid:ccc040eb-4bcd-483b-a0e3-5bbd655bd91d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:48:17.972376 kubelet[2520]: E0508 00:48:17.971923 2520 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:48:17.972376 kubelet[2520]: E0508 00:48:17.971983 2520 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-vxpm5" May 8 00:48:17.972376 kubelet[2520]: E0508 00:48:17.972005 2520 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-vxpm5" May 8 00:48:17.972576 kubelet[2520]: E0508 00:48:17.972042 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-vxpm5_kube-system(ccc040eb-4bcd-483b-a0e3-5bbd655bd91d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-vxpm5_kube-system(ccc040eb-4bcd-483b-a0e3-5bbd655bd91d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-vxpm5" podUID="ccc040eb-4bcd-483b-a0e3-5bbd655bd91d" May 8 00:48:17.973727 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace-shm.mount: Deactivated successfully. May 8 00:48:18.092898 systemd[1]: Started sshd@7-10.0.0.152:22-10.0.0.1:53800.service - OpenSSH per-connection server daemon (10.0.0.1:53800). May 8 00:48:18.102536 containerd[1468]: time="2025-05-08T00:48:18.102431027Z" level=error msg="Failed to destroy network for sandbox \"2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:48:18.104788 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805-shm.mount: Deactivated successfully. 
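Every failed RunPodSandbox in this stretch fails for the reason the error text spells out: the Calico CNI plugin stats /var/lib/calico/nodename, and that file only exists once the calico/node container (whose image pull started above) is running and has mounted /var/lib/calico/. A simplified stand-in for that readiness check, not the plugin's actual code:

    package main

    import (
        "fmt"
        "os"
    )

    // calicoNodeReady sketches the precondition behind the sandbox errors:
    // the CNI plugin needs /var/lib/calico/nodename, which calico/node writes
    // once it is up and has the host path mounted.
    func calicoNodeReady() error {
        const nodenameFile = "/var/lib/calico/nodename"
        if _, err := os.Stat(nodenameFile); err != nil {
            return fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        return nil
    }

    func main() {
        if err := calicoNodeReady(); err != nil {
            fmt.Println(err) // e.g. stat /var/lib/calico/nodename: no such file or directory: check that ...
        }
    }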
May 8 00:48:18.104970 containerd[1468]: time="2025-05-08T00:48:18.104884628Z" level=error msg="encountered an error cleaning up failed sandbox \"2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:48:18.104970 containerd[1468]: time="2025-05-08T00:48:18.104952896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbbd787d-txq5g,Uid:22faceee-0d0f-4896-b166-6798291089f0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:48:18.105225 kubelet[2520]: E0508 00:48:18.105192 2520 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:48:18.105275 kubelet[2520]: E0508 00:48:18.105256 2520 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55bbbd787d-txq5g" May 8 00:48:18.105312 kubelet[2520]: E0508 00:48:18.105277 2520 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55bbbd787d-txq5g" May 8 00:48:18.105822 kubelet[2520]: E0508 00:48:18.105324 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55bbbd787d-txq5g_calico-apiserver(22faceee-0d0f-4896-b166-6798291089f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55bbbd787d-txq5g_calico-apiserver(22faceee-0d0f-4896-b166-6798291089f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55bbbd787d-txq5g" podUID="22faceee-0d0f-4896-b166-6798291089f0" May 8 00:48:18.152988 sshd[3427]: Accepted publickey for core from 10.0.0.1 port 53800 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:48:18.154332 sshd[3427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 
00:48:18.159012 systemd-logind[1450]: New session 8 of user core. May 8 00:48:18.165681 systemd[1]: Started session-8.scope - Session 8 of User core. May 8 00:48:18.188753 containerd[1468]: time="2025-05-08T00:48:18.188692551Z" level=error msg="Failed to destroy network for sandbox \"31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:48:18.189147 containerd[1468]: time="2025-05-08T00:48:18.189098063Z" level=error msg="encountered an error cleaning up failed sandbox \"31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:48:18.189197 containerd[1468]: time="2025-05-08T00:48:18.189167012Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-p2xjw,Uid:d5677d4f-cfc9-48a7-bd1a-0da37dd788a8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:48:18.189495 kubelet[2520]: E0508 00:48:18.189436 2520 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:48:18.189650 kubelet[2520]: E0508 00:48:18.189518 2520 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-p2xjw" May 8 00:48:18.189650 kubelet[2520]: E0508 00:48:18.189551 2520 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-p2xjw" May 8 00:48:18.189650 kubelet[2520]: E0508 00:48:18.189594 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-p2xjw_kube-system(d5677d4f-cfc9-48a7-bd1a-0da37dd788a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-p2xjw_kube-system(d5677d4f-cfc9-48a7-bd1a-0da37dd788a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-p2xjw" podUID="d5677d4f-cfc9-48a7-bd1a-0da37dd788a8" May 8 00:48:18.201961 containerd[1468]: time="2025-05-08T00:48:18.201918904Z" level=error msg="Failed to destroy network for sandbox \"2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:48:18.202341 containerd[1468]: time="2025-05-08T00:48:18.202309399Z" level=error msg="encountered an error cleaning up failed sandbox \"2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:48:18.202399 containerd[1468]: time="2025-05-08T00:48:18.202371184Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6svb9,Uid:e1a02ecd-8139-4fc8-add6-59265c14dd8e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:48:18.202668 kubelet[2520]: E0508 00:48:18.202618 2520 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:48:18.202786 kubelet[2520]: E0508 00:48:18.202685 2520 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6svb9" May 8 00:48:18.202786 kubelet[2520]: E0508 00:48:18.202707 2520 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6svb9" May 8 00:48:18.202786 kubelet[2520]: E0508 00:48:18.202758 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6svb9_calico-system(e1a02ecd-8139-4fc8-add6-59265c14dd8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6svb9_calico-system(e1a02ecd-8139-4fc8-add6-59265c14dd8e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6svb9" podUID="e1a02ecd-8139-4fc8-add6-59265c14dd8e" May 8 00:48:18.278729 sshd[3427]: pam_unix(sshd:session): session closed for user core May 8 00:48:18.282920 systemd[1]: sshd@7-10.0.0.152:22-10.0.0.1:53800.service: Deactivated successfully. May 8 00:48:18.284861 systemd[1]: session-8.scope: Deactivated successfully. May 8 00:48:18.285647 systemd-logind[1450]: Session 8 logged out. Waiting for processes to exit. May 8 00:48:18.286500 systemd-logind[1450]: Removed session 8. May 8 00:48:18.302649 kubelet[2520]: I0508 00:48:18.302620 2520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" May 8 00:48:18.303241 containerd[1468]: time="2025-05-08T00:48:18.303143575Z" level=info msg="StopPodSandbox for \"b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a\"" May 8 00:48:18.303369 containerd[1468]: time="2025-05-08T00:48:18.303320588Z" level=info msg="Ensure that sandbox b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a in task-service has been cleanup successfully" May 8 00:48:18.303727 kubelet[2520]: I0508 00:48:18.303677 2520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" May 8 00:48:18.304505 containerd[1468]: time="2025-05-08T00:48:18.304285541Z" level=info msg="StopPodSandbox for \"2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf\"" May 8 00:48:18.304505 containerd[1468]: time="2025-05-08T00:48:18.304493192Z" level=info msg="Ensure that sandbox 2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf in task-service has been cleanup successfully" May 8 00:48:18.305330 kubelet[2520]: I0508 00:48:18.305315 2520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" May 8 00:48:18.305893 containerd[1468]: time="2025-05-08T00:48:18.305869317Z" level=info msg="StopPodSandbox for \"2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805\"" May 8 00:48:18.306296 containerd[1468]: time="2025-05-08T00:48:18.306092346Z" level=info msg="Ensure that sandbox 2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805 in task-service has been cleanup successfully" May 8 00:48:18.306823 kubelet[2520]: I0508 00:48:18.306804 2520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" May 8 00:48:18.308003 containerd[1468]: time="2025-05-08T00:48:18.307693195Z" level=info msg="StopPodSandbox for \"31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b\"" May 8 00:48:18.308003 containerd[1468]: time="2025-05-08T00:48:18.307819121Z" level=info msg="Ensure that sandbox 31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b in task-service has been cleanup successfully" May 8 00:48:18.308370 kubelet[2520]: I0508 00:48:18.308346 2520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" May 8 00:48:18.310031 kubelet[2520]: I0508 00:48:18.309992 2520 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" May 8 00:48:18.310098 containerd[1468]: time="2025-05-08T00:48:18.310061154Z" level=info msg="StopPodSandbox for \"af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace\"" May 8 00:48:18.310265 containerd[1468]: time="2025-05-08T00:48:18.310240721Z" level=info msg="Ensure that sandbox af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace in task-service has been cleanup successfully" May 8 00:48:18.310540 containerd[1468]: time="2025-05-08T00:48:18.310496963Z" level=info msg="StopPodSandbox for \"f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765\"" May 8 00:48:18.310711 containerd[1468]: time="2025-05-08T00:48:18.310675208Z" level=info msg="Ensure that sandbox f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765 in task-service has been cleanup successfully" May 8 00:48:18.352874 containerd[1468]: time="2025-05-08T00:48:18.352817086Z" level=error msg="StopPodSandbox for \"2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf\" failed" error="failed to destroy network for sandbox \"2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:48:18.353219 kubelet[2520]: E0508 00:48:18.353089 2520 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" May 8 00:48:18.353219 kubelet[2520]: E0508 00:48:18.353154 2520 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf"} May 8 00:48:18.353323 containerd[1468]: time="2025-05-08T00:48:18.353102893Z" level=error msg="StopPodSandbox for \"2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805\" failed" error="failed to destroy network for sandbox \"2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:48:18.353352 kubelet[2520]: E0508 00:48:18.353222 2520 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e1a02ecd-8139-4fc8-add6-59265c14dd8e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:48:18.353352 kubelet[2520]: E0508 00:48:18.353247 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e1a02ecd-8139-4fc8-add6-59265c14dd8e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6svb9" podUID="e1a02ecd-8139-4fc8-add6-59265c14dd8e" May 8 00:48:18.354687 kubelet[2520]: E0508 00:48:18.354659 2520 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" May 8 00:48:18.354742 kubelet[2520]: E0508 00:48:18.354687 2520 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805"} May 8 00:48:18.354742 kubelet[2520]: E0508 00:48:18.354714 2520 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"22faceee-0d0f-4896-b166-6798291089f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:48:18.354742 kubelet[2520]: E0508 00:48:18.354732 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"22faceee-0d0f-4896-b166-6798291089f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55bbbd787d-txq5g" podUID="22faceee-0d0f-4896-b166-6798291089f0" May 8 00:48:18.358976 containerd[1468]: time="2025-05-08T00:48:18.358884107Z" level=error msg="StopPodSandbox for \"31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b\" failed" error="failed to destroy network for sandbox \"31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:48:18.359086 kubelet[2520]: E0508 00:48:18.359058 2520 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" May 8 00:48:18.359139 kubelet[2520]: E0508 00:48:18.359090 2520 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b"} May 8 00:48:18.359139 kubelet[2520]: E0508 00:48:18.359113 2520 
kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d5677d4f-cfc9-48a7-bd1a-0da37dd788a8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:48:18.359139 kubelet[2520]: E0508 00:48:18.359130 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d5677d4f-cfc9-48a7-bd1a-0da37dd788a8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-p2xjw" podUID="d5677d4f-cfc9-48a7-bd1a-0da37dd788a8" May 8 00:48:18.360570 containerd[1468]: time="2025-05-08T00:48:18.360110211Z" level=error msg="StopPodSandbox for \"b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a\" failed" error="failed to destroy network for sandbox \"b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:48:18.360610 kubelet[2520]: E0508 00:48:18.360293 2520 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" May 8 00:48:18.360610 kubelet[2520]: E0508 00:48:18.360427 2520 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a"} May 8 00:48:18.360610 kubelet[2520]: E0508 00:48:18.360449 2520 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"714dc4ce-e252-47a3-96ad-e69d699c235e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:48:18.360610 kubelet[2520]: E0508 00:48:18.360469 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"714dc4ce-e252-47a3-96ad-e69d699c235e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55bbbd787d-nnsrx" 
podUID="714dc4ce-e252-47a3-96ad-e69d699c235e" May 8 00:48:18.363487 containerd[1468]: time="2025-05-08T00:48:18.363428997Z" level=error msg="StopPodSandbox for \"af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace\" failed" error="failed to destroy network for sandbox \"af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:48:18.363647 kubelet[2520]: E0508 00:48:18.363625 2520 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" May 8 00:48:18.363692 kubelet[2520]: E0508 00:48:18.363650 2520 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace"} May 8 00:48:18.363692 kubelet[2520]: E0508 00:48:18.363668 2520 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ccc040eb-4bcd-483b-a0e3-5bbd655bd91d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:48:18.363692 kubelet[2520]: E0508 00:48:18.363685 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ccc040eb-4bcd-483b-a0e3-5bbd655bd91d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-vxpm5" podUID="ccc040eb-4bcd-483b-a0e3-5bbd655bd91d" May 8 00:48:18.366992 containerd[1468]: time="2025-05-08T00:48:18.366895752Z" level=error msg="StopPodSandbox for \"f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765\" failed" error="failed to destroy network for sandbox \"f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:48:18.367066 kubelet[2520]: E0508 00:48:18.367044 2520 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" May 8 00:48:18.367096 kubelet[2520]: E0508 
00:48:18.367071 2520 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765"} May 8 00:48:18.367096 kubelet[2520]: E0508 00:48:18.367090 2520 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8a7556ed-ec9c-47f1-a41d-af868f71d780\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:48:18.367157 kubelet[2520]: E0508 00:48:18.367107 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8a7556ed-ec9c-47f1-a41d-af868f71d780\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b9f64494b-ffr58" podUID="8a7556ed-ec9c-47f1-a41d-af868f71d780" May 8 00:48:18.915628 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf-shm.mount: Deactivated successfully. May 8 00:48:18.915755 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b-shm.mount: Deactivated successfully. May 8 00:48:20.819231 kubelet[2520]: I0508 00:48:20.819193 2520 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:48:20.820184 kubelet[2520]: E0508 00:48:20.819580 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:21.316463 kubelet[2520]: E0508 00:48:21.316425 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:22.378512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1210004058.mount: Deactivated successfully. May 8 00:48:23.289023 systemd[1]: Started sshd@8-10.0.0.152:22-10.0.0.1:53802.service - OpenSSH per-connection server daemon (10.0.0.1:53802). May 8 00:48:23.698769 sshd[3663]: Accepted publickey for core from 10.0.0.1 port 53802 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:48:23.700316 sshd[3663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:48:23.704320 systemd-logind[1450]: New session 9 of user core. May 8 00:48:23.714673 containerd[1468]: time="2025-05-08T00:48:23.714612127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:48:23.715650 systemd[1]: Started session-9.scope - Session 9 of User core. 
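[Annotation] Every RunPodSandbox and StopPodSandbox attempt above fails on the same precondition: the Calico CNI plugin stats /var/lib/calico/nodename before doing any network setup or teardown, and that file only appears once the calico/node container is running with /var/lib/calico mounted from the host. A minimal Go sketch of that check — the helper names are illustrative, not Calico's actual code; only the path and the error wording are taken from the log:

```go
// nodenamecheck: a minimal sketch of the precondition the Calico CNI plugin
// enforces before every ADD/DEL, as seen in the errors above. The file is
// expected to be written by the calico/node container once it is running
// with /var/lib/calico mounted.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

// readNodename mirrors the behaviour in the log: a missing file produces the
// same hint the plugin reports; otherwise the node name is returned.
func readNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := readNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err) // the state the sandboxes above are stuck in
		os.Exit(1)
	}
	fmt.Println("calico node name:", name) // once present, ADD/DEL can proceed
}
```

Once the calico-node container created in the entries above is started, the file can be written and the retried sandbox setups further down complete.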
May 8 00:48:23.727816 containerd[1468]: time="2025-05-08T00:48:23.727759560Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 8 00:48:23.736208 containerd[1468]: time="2025-05-08T00:48:23.736162321Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:48:23.748985 containerd[1468]: time="2025-05-08T00:48:23.748947163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:48:23.749574 containerd[1468]: time="2025-05-08T00:48:23.749545356Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 7.448610502s" May 8 00:48:23.749611 containerd[1468]: time="2025-05-08T00:48:23.749576445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 8 00:48:23.760274 containerd[1468]: time="2025-05-08T00:48:23.760222460Z" level=info msg="CreateContainer within sandbox \"4d05e4000dbac89deff886b7192de39c4a665110435fbe102b33d274d47355eb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 8 00:48:23.846838 sshd[3663]: pam_unix(sshd:session): session closed for user core May 8 00:48:23.851180 systemd[1]: sshd@8-10.0.0.152:22-10.0.0.1:53802.service: Deactivated successfully. May 8 00:48:23.853243 systemd[1]: session-9.scope: Deactivated successfully. May 8 00:48:23.854063 systemd-logind[1450]: Session 9 logged out. Waiting for processes to exit. May 8 00:48:23.855014 systemd-logind[1450]: Removed session 9. May 8 00:48:23.929662 containerd[1468]: time="2025-05-08T00:48:23.929598796Z" level=info msg="CreateContainer within sandbox \"4d05e4000dbac89deff886b7192de39c4a665110435fbe102b33d274d47355eb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2f453e6f20cf2d577df3759ad1744c46b29041995e13eedbd3de6492322387d3\"" May 8 00:48:23.930369 containerd[1468]: time="2025-05-08T00:48:23.930316263Z" level=info msg="StartContainer for \"2f453e6f20cf2d577df3759ad1744c46b29041995e13eedbd3de6492322387d3\"" May 8 00:48:24.005660 systemd[1]: Started cri-containerd-2f453e6f20cf2d577df3759ad1744c46b29041995e13eedbd3de6492322387d3.scope - libcontainer container 2f453e6f20cf2d577df3759ad1744c46b29041995e13eedbd3de6492322387d3. May 8 00:48:24.380654 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 8 00:48:24.380811 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
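[Annotation] The pull that just completed reports both the bytes read and the active pull time, which pins down the average transfer rate. A back-of-the-envelope sketch using the two values copied from the entries above; the ~19 MB/s figure is only as precise as those two logged numbers:

```go
// pullrate: average transfer rate of the calico/node image pull recorded
// above (~144 MB read over ~7.45 s of active pulling).
package main

import "fmt"

func main() {
	const (
		bytesRead   = 144068748   // "bytes read=144068748" from the log
		pullSeconds = 7.448610502 // "in 7.448610502s" from the PullImage entry
	)
	rate := float64(bytesRead) / pullSeconds
	fmt.Printf("average pull rate: %.1f MB/s (%.1f MiB/s)\n",
		rate/1e6, rate/(1<<20))
	// Prints roughly 19.3 MB/s (18.4 MiB/s) for the values logged above.
}
```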
May 8 00:48:24.383948 containerd[1468]: time="2025-05-08T00:48:24.383893266Z" level=info msg="StartContainer for \"2f453e6f20cf2d577df3759ad1744c46b29041995e13eedbd3de6492322387d3\" returns successfully" May 8 00:48:25.389615 kubelet[2520]: E0508 00:48:25.389569 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:25.442972 kubelet[2520]: I0508 00:48:25.442685 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-shwsq" podStartSLOduration=2.667713755 podStartE2EDuration="23.442661048s" podCreationTimestamp="2025-05-08 00:48:02 +0000 UTC" firstStartedPulling="2025-05-08 00:48:02.975193602 +0000 UTC m=+12.203296543" lastFinishedPulling="2025-05-08 00:48:23.750140896 +0000 UTC m=+32.978243836" observedRunningTime="2025-05-08 00:48:25.442373188 +0000 UTC m=+34.670476128" watchObservedRunningTime="2025-05-08 00:48:25.442661048 +0000 UTC m=+34.670763988" May 8 00:48:26.133564 kernel: bpftool[3878]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 8 00:48:26.363606 systemd-networkd[1395]: vxlan.calico: Link UP May 8 00:48:26.363614 systemd-networkd[1395]: vxlan.calico: Gained carrier May 8 00:48:26.392717 kubelet[2520]: I0508 00:48:26.392230 2520 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:48:26.393583 kubelet[2520]: E0508 00:48:26.392799 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:27.843709 systemd-networkd[1395]: vxlan.calico: Gained IPv6LL May 8 00:48:28.861299 systemd[1]: Started sshd@9-10.0.0.152:22-10.0.0.1:44288.service - OpenSSH per-connection server daemon (10.0.0.1:44288). May 8 00:48:28.903200 sshd[4002]: Accepted publickey for core from 10.0.0.1 port 44288 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:48:28.904980 sshd[4002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:48:28.908768 systemd-logind[1450]: New session 10 of user core. May 8 00:48:28.918639 systemd[1]: Started session-10.scope - Session 10 of User core. May 8 00:48:29.048417 sshd[4002]: pam_unix(sshd:session): session closed for user core May 8 00:48:29.052903 systemd[1]: sshd@9-10.0.0.152:22-10.0.0.1:44288.service: Deactivated successfully. May 8 00:48:29.055445 systemd[1]: session-10.scope: Deactivated successfully. May 8 00:48:29.056174 systemd-logind[1450]: Session 10 logged out. Waiting for processes to exit. May 8 00:48:29.057357 systemd-logind[1450]: Removed session 10. May 8 00:48:29.847389 containerd[1468]: time="2025-05-08T00:48:29.847330027Z" level=info msg="StopPodSandbox for \"2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805\"" May 8 00:48:29.847923 containerd[1468]: time="2025-05-08T00:48:29.847329987Z" level=info msg="StopPodSandbox for \"f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765\"" May 8 00:48:29.964533 containerd[1468]: 2025-05-08 00:48:29.896 [INFO][4049] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" May 8 00:48:29.964533 containerd[1468]: 2025-05-08 00:48:29.897 [INFO][4049] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" iface="eth0" netns="/var/run/netns/cni-91d4ff96-8245-5421-5236-2cd5cc9ad39f" May 8 00:48:29.964533 containerd[1468]: 2025-05-08 00:48:29.897 [INFO][4049] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" iface="eth0" netns="/var/run/netns/cni-91d4ff96-8245-5421-5236-2cd5cc9ad39f" May 8 00:48:29.964533 containerd[1468]: 2025-05-08 00:48:29.898 [INFO][4049] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" iface="eth0" netns="/var/run/netns/cni-91d4ff96-8245-5421-5236-2cd5cc9ad39f" May 8 00:48:29.964533 containerd[1468]: 2025-05-08 00:48:29.898 [INFO][4049] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" May 8 00:48:29.964533 containerd[1468]: 2025-05-08 00:48:29.898 [INFO][4049] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" May 8 00:48:29.964533 containerd[1468]: 2025-05-08 00:48:29.951 [INFO][4065] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" HandleID="k8s-pod-network.f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" Workload="localhost-k8s-calico--kube--controllers--6b9f64494b--ffr58-eth0" May 8 00:48:29.964533 containerd[1468]: 2025-05-08 00:48:29.952 [INFO][4065] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:48:29.964533 containerd[1468]: 2025-05-08 00:48:29.952 [INFO][4065] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:48:29.964533 containerd[1468]: 2025-05-08 00:48:29.958 [WARNING][4065] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" HandleID="k8s-pod-network.f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" Workload="localhost-k8s-calico--kube--controllers--6b9f64494b--ffr58-eth0" May 8 00:48:29.964533 containerd[1468]: 2025-05-08 00:48:29.958 [INFO][4065] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" HandleID="k8s-pod-network.f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" Workload="localhost-k8s-calico--kube--controllers--6b9f64494b--ffr58-eth0" May 8 00:48:29.964533 containerd[1468]: 2025-05-08 00:48:29.960 [INFO][4065] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:48:29.964533 containerd[1468]: 2025-05-08 00:48:29.962 [INFO][4049] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" May 8 00:48:29.965645 containerd[1468]: time="2025-05-08T00:48:29.965262480Z" level=info msg="TearDown network for sandbox \"f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765\" successfully" May 8 00:48:29.965645 containerd[1468]: time="2025-05-08T00:48:29.965323165Z" level=info msg="StopPodSandbox for \"f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765\" returns successfully" May 8 00:48:29.967484 systemd[1]: run-netns-cni\x2d91d4ff96\x2d8245\x2d5421\x2d5236\x2d2cd5cc9ad39f.mount: Deactivated successfully. 
May 8 00:48:29.968793 containerd[1468]: time="2025-05-08T00:48:29.968754887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b9f64494b-ffr58,Uid:8a7556ed-ec9c-47f1-a41d-af868f71d780,Namespace:calico-system,Attempt:1,}" May 8 00:48:29.970717 containerd[1468]: 2025-05-08 00:48:29.896 [INFO][4048] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" May 8 00:48:29.970717 containerd[1468]: 2025-05-08 00:48:29.897 [INFO][4048] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" iface="eth0" netns="/var/run/netns/cni-deab8bbf-acfe-f3ae-7a54-ce86394edf5e" May 8 00:48:29.970717 containerd[1468]: 2025-05-08 00:48:29.897 [INFO][4048] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" iface="eth0" netns="/var/run/netns/cni-deab8bbf-acfe-f3ae-7a54-ce86394edf5e" May 8 00:48:29.970717 containerd[1468]: 2025-05-08 00:48:29.898 [INFO][4048] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" iface="eth0" netns="/var/run/netns/cni-deab8bbf-acfe-f3ae-7a54-ce86394edf5e" May 8 00:48:29.970717 containerd[1468]: 2025-05-08 00:48:29.898 [INFO][4048] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" May 8 00:48:29.970717 containerd[1468]: 2025-05-08 00:48:29.898 [INFO][4048] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" May 8 00:48:29.970717 containerd[1468]: 2025-05-08 00:48:29.952 [INFO][4066] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" HandleID="k8s-pod-network.2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" Workload="localhost-k8s-calico--apiserver--55bbbd787d--txq5g-eth0" May 8 00:48:29.970717 containerd[1468]: 2025-05-08 00:48:29.952 [INFO][4066] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:48:29.970717 containerd[1468]: 2025-05-08 00:48:29.960 [INFO][4066] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:48:29.970717 containerd[1468]: 2025-05-08 00:48:29.964 [WARNING][4066] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" HandleID="k8s-pod-network.2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" Workload="localhost-k8s-calico--apiserver--55bbbd787d--txq5g-eth0" May 8 00:48:29.970717 containerd[1468]: 2025-05-08 00:48:29.964 [INFO][4066] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" HandleID="k8s-pod-network.2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" Workload="localhost-k8s-calico--apiserver--55bbbd787d--txq5g-eth0" May 8 00:48:29.970717 containerd[1468]: 2025-05-08 00:48:29.965 [INFO][4066] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:48:29.970717 containerd[1468]: 2025-05-08 00:48:29.968 [INFO][4048] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" May 8 00:48:29.971129 containerd[1468]: time="2025-05-08T00:48:29.971090822Z" level=info msg="TearDown network for sandbox \"2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805\" successfully" May 8 00:48:29.971129 containerd[1468]: time="2025-05-08T00:48:29.971110068Z" level=info msg="StopPodSandbox for \"2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805\" returns successfully" May 8 00:48:29.971530 containerd[1468]: time="2025-05-08T00:48:29.971503437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbbd787d-txq5g,Uid:22faceee-0d0f-4896-b166-6798291089f0,Namespace:calico-apiserver,Attempt:1,}" May 8 00:48:29.973107 systemd[1]: run-netns-cni\x2ddeab8bbf\x2dacfe\x2df3ae\x2d7a54\x2dce86394edf5e.mount: Deactivated successfully. May 8 00:48:30.103450 systemd-networkd[1395]: calid3219577cfa: Link UP May 8 00:48:30.104344 systemd-networkd[1395]: calid3219577cfa: Gained carrier May 8 00:48:30.115941 containerd[1468]: 2025-05-08 00:48:30.037 [INFO][4079] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6b9f64494b--ffr58-eth0 calico-kube-controllers-6b9f64494b- calico-system 8a7556ed-ec9c-47f1-a41d-af868f71d780 842 0 2025-05-08 00:48:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6b9f64494b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6b9f64494b-ffr58 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid3219577cfa [] []}} ContainerID="1cbc31196e2137d9b5d139eb02df8fee8aa4587205eb1caeea612d4cc3b09508" Namespace="calico-system" Pod="calico-kube-controllers-6b9f64494b-ffr58" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b9f64494b--ffr58-" May 8 00:48:30.115941 containerd[1468]: 2025-05-08 00:48:30.037 [INFO][4079] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1cbc31196e2137d9b5d139eb02df8fee8aa4587205eb1caeea612d4cc3b09508" Namespace="calico-system" Pod="calico-kube-controllers-6b9f64494b-ffr58" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b9f64494b--ffr58-eth0" May 8 00:48:30.115941 containerd[1468]: 2025-05-08 00:48:30.067 [INFO][4107] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1cbc31196e2137d9b5d139eb02df8fee8aa4587205eb1caeea612d4cc3b09508" HandleID="k8s-pod-network.1cbc31196e2137d9b5d139eb02df8fee8aa4587205eb1caeea612d4cc3b09508" Workload="localhost-k8s-calico--kube--controllers--6b9f64494b--ffr58-eth0" May 8 00:48:30.115941 containerd[1468]: 2025-05-08 00:48:30.073 [INFO][4107] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1cbc31196e2137d9b5d139eb02df8fee8aa4587205eb1caeea612d4cc3b09508" HandleID="k8s-pod-network.1cbc31196e2137d9b5d139eb02df8fee8aa4587205eb1caeea612d4cc3b09508" Workload="localhost-k8s-calico--kube--controllers--6b9f64494b--ffr58-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002880a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6b9f64494b-ffr58", "timestamp":"2025-05-08 00:48:30.067314524 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:48:30.115941 containerd[1468]: 2025-05-08 00:48:30.073 [INFO][4107] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:48:30.115941 containerd[1468]: 2025-05-08 00:48:30.073 [INFO][4107] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:48:30.115941 containerd[1468]: 2025-05-08 00:48:30.074 [INFO][4107] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:48:30.115941 containerd[1468]: 2025-05-08 00:48:30.077 [INFO][4107] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1cbc31196e2137d9b5d139eb02df8fee8aa4587205eb1caeea612d4cc3b09508" host="localhost" May 8 00:48:30.115941 containerd[1468]: 2025-05-08 00:48:30.081 [INFO][4107] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:48:30.115941 containerd[1468]: 2025-05-08 00:48:30.085 [INFO][4107] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:48:30.115941 containerd[1468]: 2025-05-08 00:48:30.086 [INFO][4107] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:48:30.115941 containerd[1468]: 2025-05-08 00:48:30.087 [INFO][4107] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:48:30.115941 containerd[1468]: 2025-05-08 00:48:30.087 [INFO][4107] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1cbc31196e2137d9b5d139eb02df8fee8aa4587205eb1caeea612d4cc3b09508" host="localhost" May 8 00:48:30.115941 containerd[1468]: 2025-05-08 00:48:30.088 [INFO][4107] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1cbc31196e2137d9b5d139eb02df8fee8aa4587205eb1caeea612d4cc3b09508 May 8 00:48:30.115941 containerd[1468]: 2025-05-08 00:48:30.093 [INFO][4107] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1cbc31196e2137d9b5d139eb02df8fee8aa4587205eb1caeea612d4cc3b09508" host="localhost" May 8 00:48:30.115941 containerd[1468]: 2025-05-08 00:48:30.096 [INFO][4107] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.1cbc31196e2137d9b5d139eb02df8fee8aa4587205eb1caeea612d4cc3b09508" host="localhost" May 8 00:48:30.115941 containerd[1468]: 2025-05-08 00:48:30.097 [INFO][4107] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.1cbc31196e2137d9b5d139eb02df8fee8aa4587205eb1caeea612d4cc3b09508" host="localhost" May 8 00:48:30.115941 containerd[1468]: 2025-05-08 00:48:30.097 [INFO][4107] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:48:30.115941 containerd[1468]: 2025-05-08 00:48:30.097 [INFO][4107] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="1cbc31196e2137d9b5d139eb02df8fee8aa4587205eb1caeea612d4cc3b09508" HandleID="k8s-pod-network.1cbc31196e2137d9b5d139eb02df8fee8aa4587205eb1caeea612d4cc3b09508" Workload="localhost-k8s-calico--kube--controllers--6b9f64494b--ffr58-eth0" May 8 00:48:30.116447 containerd[1468]: 2025-05-08 00:48:30.100 [INFO][4079] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1cbc31196e2137d9b5d139eb02df8fee8aa4587205eb1caeea612d4cc3b09508" Namespace="calico-system" Pod="calico-kube-controllers-6b9f64494b-ffr58" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b9f64494b--ffr58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6b9f64494b--ffr58-eth0", GenerateName:"calico-kube-controllers-6b9f64494b-", Namespace:"calico-system", SelfLink:"", UID:"8a7556ed-ec9c-47f1-a41d-af868f71d780", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b9f64494b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6b9f64494b-ffr58", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid3219577cfa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:48:30.116447 containerd[1468]: 2025-05-08 00:48:30.100 [INFO][4079] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="1cbc31196e2137d9b5d139eb02df8fee8aa4587205eb1caeea612d4cc3b09508" Namespace="calico-system" Pod="calico-kube-controllers-6b9f64494b-ffr58" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b9f64494b--ffr58-eth0" May 8 00:48:30.116447 containerd[1468]: 2025-05-08 00:48:30.100 [INFO][4079] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid3219577cfa ContainerID="1cbc31196e2137d9b5d139eb02df8fee8aa4587205eb1caeea612d4cc3b09508" Namespace="calico-system" Pod="calico-kube-controllers-6b9f64494b-ffr58" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b9f64494b--ffr58-eth0" May 8 00:48:30.116447 containerd[1468]: 2025-05-08 00:48:30.104 [INFO][4079] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1cbc31196e2137d9b5d139eb02df8fee8aa4587205eb1caeea612d4cc3b09508" Namespace="calico-system" Pod="calico-kube-controllers-6b9f64494b-ffr58" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b9f64494b--ffr58-eth0" May 8 00:48:30.116447 containerd[1468]: 2025-05-08 00:48:30.105 [INFO][4079] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="1cbc31196e2137d9b5d139eb02df8fee8aa4587205eb1caeea612d4cc3b09508" Namespace="calico-system" Pod="calico-kube-controllers-6b9f64494b-ffr58" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b9f64494b--ffr58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6b9f64494b--ffr58-eth0", GenerateName:"calico-kube-controllers-6b9f64494b-", Namespace:"calico-system", SelfLink:"", UID:"8a7556ed-ec9c-47f1-a41d-af868f71d780", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b9f64494b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1cbc31196e2137d9b5d139eb02df8fee8aa4587205eb1caeea612d4cc3b09508", Pod:"calico-kube-controllers-6b9f64494b-ffr58", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid3219577cfa", MAC:"9e:11:15:50:81:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:48:30.116447 containerd[1468]: 2025-05-08 00:48:30.113 [INFO][4079] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1cbc31196e2137d9b5d139eb02df8fee8aa4587205eb1caeea612d4cc3b09508" Namespace="calico-system" Pod="calico-kube-controllers-6b9f64494b-ffr58" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b9f64494b--ffr58-eth0" May 8 00:48:30.144343 containerd[1468]: time="2025-05-08T00:48:30.144261835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:48:30.144343 containerd[1468]: time="2025-05-08T00:48:30.144313833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:48:30.144343 containerd[1468]: time="2025-05-08T00:48:30.144327979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:48:30.144495 containerd[1468]: time="2025-05-08T00:48:30.144418469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:48:30.166659 systemd[1]: Started cri-containerd-1cbc31196e2137d9b5d139eb02df8fee8aa4587205eb1caeea612d4cc3b09508.scope - libcontainer container 1cbc31196e2137d9b5d139eb02df8fee8aa4587205eb1caeea612d4cc3b09508. 
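[Annotation] The IPAM trace above confirms an affinity for the block 192.168.88.128/26 on this host and hands the kube-controllers pod 192.168.88.129 out of it (the apiserver pod further down gets .130). A small sketch of what that block covers, using net/netip; the third address is just an out-of-block counterexample:

```go
// ipamblock: the address space behind the IPAM entries above. Calico claimed
// the block 192.168.88.128/26 for this host and assigns per-pod /32s from it.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")

	// A /26 spans 2^(32-26) = 64 addresses: 192.168.88.128 .. 192.168.88.191.
	size := 1 << (32 - block.Bits())
	fmt.Printf("block %s holds %d addresses starting at %s\n",
		block, size, block.Addr())

	for _, s := range []string{"192.168.88.129", "192.168.88.130", "192.168.88.192"} {
		addr := netip.MustParseAddr(s)
		fmt.Printf("%s in block: %v\n", addr, block.Contains(addr))
	}
	// .129 and .130 fall inside the block; .192 would belong to the next /26.
}
```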
May 8 00:48:30.177082 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:48:30.201584 containerd[1468]: time="2025-05-08T00:48:30.201532567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b9f64494b-ffr58,Uid:8a7556ed-ec9c-47f1-a41d-af868f71d780,Namespace:calico-system,Attempt:1,} returns sandbox id \"1cbc31196e2137d9b5d139eb02df8fee8aa4587205eb1caeea612d4cc3b09508\"" May 8 00:48:30.203428 containerd[1468]: time="2025-05-08T00:48:30.203386708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 8 00:48:30.211461 systemd-networkd[1395]: cali53b19abc5f7: Link UP May 8 00:48:30.211695 systemd-networkd[1395]: cali53b19abc5f7: Gained carrier May 8 00:48:30.223705 containerd[1468]: 2025-05-08 00:48:30.044 [INFO][4090] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--55bbbd787d--txq5g-eth0 calico-apiserver-55bbbd787d- calico-apiserver 22faceee-0d0f-4896-b166-6798291089f0 841 0 2025-05-08 00:48:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55bbbd787d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-55bbbd787d-txq5g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali53b19abc5f7 [] []}} ContainerID="4c8f9741b13c7a25008dd54a1c528f29ad2fb4d038a60e1edfb294617b0cfc80" Namespace="calico-apiserver" Pod="calico-apiserver-55bbbd787d-txq5g" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbbd787d--txq5g-" May 8 00:48:30.223705 containerd[1468]: 2025-05-08 00:48:30.045 [INFO][4090] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4c8f9741b13c7a25008dd54a1c528f29ad2fb4d038a60e1edfb294617b0cfc80" Namespace="calico-apiserver" Pod="calico-apiserver-55bbbd787d-txq5g" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbbd787d--txq5g-eth0" May 8 00:48:30.223705 containerd[1468]: 2025-05-08 00:48:30.073 [INFO][4113] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c8f9741b13c7a25008dd54a1c528f29ad2fb4d038a60e1edfb294617b0cfc80" HandleID="k8s-pod-network.4c8f9741b13c7a25008dd54a1c528f29ad2fb4d038a60e1edfb294617b0cfc80" Workload="localhost-k8s-calico--apiserver--55bbbd787d--txq5g-eth0" May 8 00:48:30.223705 containerd[1468]: 2025-05-08 00:48:30.080 [INFO][4113] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4c8f9741b13c7a25008dd54a1c528f29ad2fb4d038a60e1edfb294617b0cfc80" HandleID="k8s-pod-network.4c8f9741b13c7a25008dd54a1c528f29ad2fb4d038a60e1edfb294617b0cfc80" Workload="localhost-k8s-calico--apiserver--55bbbd787d--txq5g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000435620), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-55bbbd787d-txq5g", "timestamp":"2025-05-08 00:48:30.073266547 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:48:30.223705 containerd[1468]: 2025-05-08 00:48:30.080 [INFO][4113] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 8 00:48:30.223705 containerd[1468]: 2025-05-08 00:48:30.097 [INFO][4113] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:48:30.223705 containerd[1468]: 2025-05-08 00:48:30.097 [INFO][4113] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:48:30.223705 containerd[1468]: 2025-05-08 00:48:30.178 [INFO][4113] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4c8f9741b13c7a25008dd54a1c528f29ad2fb4d038a60e1edfb294617b0cfc80" host="localhost" May 8 00:48:30.223705 containerd[1468]: 2025-05-08 00:48:30.182 [INFO][4113] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:48:30.223705 containerd[1468]: 2025-05-08 00:48:30.185 [INFO][4113] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:48:30.223705 containerd[1468]: 2025-05-08 00:48:30.187 [INFO][4113] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:48:30.223705 containerd[1468]: 2025-05-08 00:48:30.188 [INFO][4113] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:48:30.223705 containerd[1468]: 2025-05-08 00:48:30.189 [INFO][4113] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4c8f9741b13c7a25008dd54a1c528f29ad2fb4d038a60e1edfb294617b0cfc80" host="localhost" May 8 00:48:30.223705 containerd[1468]: 2025-05-08 00:48:30.190 [INFO][4113] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4c8f9741b13c7a25008dd54a1c528f29ad2fb4d038a60e1edfb294617b0cfc80 May 8 00:48:30.223705 containerd[1468]: 2025-05-08 00:48:30.198 [INFO][4113] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4c8f9741b13c7a25008dd54a1c528f29ad2fb4d038a60e1edfb294617b0cfc80" host="localhost" May 8 00:48:30.223705 containerd[1468]: 2025-05-08 00:48:30.204 [INFO][4113] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.4c8f9741b13c7a25008dd54a1c528f29ad2fb4d038a60e1edfb294617b0cfc80" host="localhost" May 8 00:48:30.223705 containerd[1468]: 2025-05-08 00:48:30.204 [INFO][4113] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.4c8f9741b13c7a25008dd54a1c528f29ad2fb4d038a60e1edfb294617b0cfc80" host="localhost" May 8 00:48:30.223705 containerd[1468]: 2025-05-08 00:48:30.204 [INFO][4113] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:48:30.223705 containerd[1468]: 2025-05-08 00:48:30.204 [INFO][4113] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4c8f9741b13c7a25008dd54a1c528f29ad2fb4d038a60e1edfb294617b0cfc80" HandleID="k8s-pod-network.4c8f9741b13c7a25008dd54a1c528f29ad2fb4d038a60e1edfb294617b0cfc80" Workload="localhost-k8s-calico--apiserver--55bbbd787d--txq5g-eth0" May 8 00:48:30.224448 containerd[1468]: 2025-05-08 00:48:30.208 [INFO][4090] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4c8f9741b13c7a25008dd54a1c528f29ad2fb4d038a60e1edfb294617b0cfc80" Namespace="calico-apiserver" Pod="calico-apiserver-55bbbd787d-txq5g" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbbd787d--txq5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55bbbd787d--txq5g-eth0", GenerateName:"calico-apiserver-55bbbd787d-", Namespace:"calico-apiserver", SelfLink:"", UID:"22faceee-0d0f-4896-b166-6798291089f0", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55bbbd787d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-55bbbd787d-txq5g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali53b19abc5f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:48:30.224448 containerd[1468]: 2025-05-08 00:48:30.208 [INFO][4090] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4c8f9741b13c7a25008dd54a1c528f29ad2fb4d038a60e1edfb294617b0cfc80" Namespace="calico-apiserver" Pod="calico-apiserver-55bbbd787d-txq5g" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbbd787d--txq5g-eth0" May 8 00:48:30.224448 containerd[1468]: 2025-05-08 00:48:30.208 [INFO][4090] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali53b19abc5f7 ContainerID="4c8f9741b13c7a25008dd54a1c528f29ad2fb4d038a60e1edfb294617b0cfc80" Namespace="calico-apiserver" Pod="calico-apiserver-55bbbd787d-txq5g" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbbd787d--txq5g-eth0" May 8 00:48:30.224448 containerd[1468]: 2025-05-08 00:48:30.211 [INFO][4090] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c8f9741b13c7a25008dd54a1c528f29ad2fb4d038a60e1edfb294617b0cfc80" Namespace="calico-apiserver" Pod="calico-apiserver-55bbbd787d-txq5g" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbbd787d--txq5g-eth0" May 8 00:48:30.224448 containerd[1468]: 2025-05-08 00:48:30.212 [INFO][4090] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4c8f9741b13c7a25008dd54a1c528f29ad2fb4d038a60e1edfb294617b0cfc80" 
Namespace="calico-apiserver" Pod="calico-apiserver-55bbbd787d-txq5g" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbbd787d--txq5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55bbbd787d--txq5g-eth0", GenerateName:"calico-apiserver-55bbbd787d-", Namespace:"calico-apiserver", SelfLink:"", UID:"22faceee-0d0f-4896-b166-6798291089f0", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55bbbd787d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c8f9741b13c7a25008dd54a1c528f29ad2fb4d038a60e1edfb294617b0cfc80", Pod:"calico-apiserver-55bbbd787d-txq5g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali53b19abc5f7", MAC:"1e:a3:18:31:76:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:48:30.224448 containerd[1468]: 2025-05-08 00:48:30.220 [INFO][4090] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4c8f9741b13c7a25008dd54a1c528f29ad2fb4d038a60e1edfb294617b0cfc80" Namespace="calico-apiserver" Pod="calico-apiserver-55bbbd787d-txq5g" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbbd787d--txq5g-eth0" May 8 00:48:30.246653 containerd[1468]: time="2025-05-08T00:48:30.245564217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:48:30.246653 containerd[1468]: time="2025-05-08T00:48:30.246313534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:48:30.246653 containerd[1468]: time="2025-05-08T00:48:30.246326668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:48:30.246653 containerd[1468]: time="2025-05-08T00:48:30.246477100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:48:30.268653 systemd[1]: Started cri-containerd-4c8f9741b13c7a25008dd54a1c528f29ad2fb4d038a60e1edfb294617b0cfc80.scope - libcontainer container 4c8f9741b13c7a25008dd54a1c528f29ad2fb4d038a60e1edfb294617b0cfc80. 
May 8 00:48:30.281094 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:48:30.303905 containerd[1468]: time="2025-05-08T00:48:30.303795293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbbd787d-txq5g,Uid:22faceee-0d0f-4896-b166-6798291089f0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4c8f9741b13c7a25008dd54a1c528f29ad2fb4d038a60e1edfb294617b0cfc80\"" May 8 00:48:30.847799 containerd[1468]: time="2025-05-08T00:48:30.847739264Z" level=info msg="StopPodSandbox for \"af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace\"" May 8 00:48:30.848180 containerd[1468]: time="2025-05-08T00:48:30.847792744Z" level=info msg="StopPodSandbox for \"b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a\"" May 8 00:48:30.926693 containerd[1468]: 2025-05-08 00:48:30.894 [INFO][4265] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" May 8 00:48:30.926693 containerd[1468]: 2025-05-08 00:48:30.894 [INFO][4265] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" iface="eth0" netns="/var/run/netns/cni-0403ebba-0f55-3b96-b434-70d471064721" May 8 00:48:30.926693 containerd[1468]: 2025-05-08 00:48:30.895 [INFO][4265] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" iface="eth0" netns="/var/run/netns/cni-0403ebba-0f55-3b96-b434-70d471064721" May 8 00:48:30.926693 containerd[1468]: 2025-05-08 00:48:30.895 [INFO][4265] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" iface="eth0" netns="/var/run/netns/cni-0403ebba-0f55-3b96-b434-70d471064721" May 8 00:48:30.926693 containerd[1468]: 2025-05-08 00:48:30.895 [INFO][4265] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" May 8 00:48:30.926693 containerd[1468]: 2025-05-08 00:48:30.895 [INFO][4265] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" May 8 00:48:30.926693 containerd[1468]: 2025-05-08 00:48:30.915 [INFO][4288] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" HandleID="k8s-pod-network.b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" Workload="localhost-k8s-calico--apiserver--55bbbd787d--nnsrx-eth0" May 8 00:48:30.926693 containerd[1468]: 2025-05-08 00:48:30.915 [INFO][4288] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:48:30.926693 containerd[1468]: 2025-05-08 00:48:30.915 [INFO][4288] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:48:30.926693 containerd[1468]: 2025-05-08 00:48:30.921 [WARNING][4288] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" HandleID="k8s-pod-network.b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" Workload="localhost-k8s-calico--apiserver--55bbbd787d--nnsrx-eth0" May 8 00:48:30.926693 containerd[1468]: 2025-05-08 00:48:30.921 [INFO][4288] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" HandleID="k8s-pod-network.b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" Workload="localhost-k8s-calico--apiserver--55bbbd787d--nnsrx-eth0" May 8 00:48:30.926693 containerd[1468]: 2025-05-08 00:48:30.922 [INFO][4288] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:48:30.926693 containerd[1468]: 2025-05-08 00:48:30.924 [INFO][4265] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" May 8 00:48:30.927842 containerd[1468]: time="2025-05-08T00:48:30.926925688Z" level=info msg="TearDown network for sandbox \"b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a\" successfully" May 8 00:48:30.927842 containerd[1468]: time="2025-05-08T00:48:30.926955755Z" level=info msg="StopPodSandbox for \"b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a\" returns successfully" May 8 00:48:30.927842 containerd[1468]: time="2025-05-08T00:48:30.927584806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbbd787d-nnsrx,Uid:714dc4ce-e252-47a3-96ad-e69d699c235e,Namespace:calico-apiserver,Attempt:1,}" May 8 00:48:30.933412 containerd[1468]: 2025-05-08 00:48:30.896 [INFO][4274] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" May 8 00:48:30.933412 containerd[1468]: 2025-05-08 00:48:30.896 [INFO][4274] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" iface="eth0" netns="/var/run/netns/cni-68d74fd1-aafe-b6b1-ac33-bf8d9bd83ed1" May 8 00:48:30.933412 containerd[1468]: 2025-05-08 00:48:30.897 [INFO][4274] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" iface="eth0" netns="/var/run/netns/cni-68d74fd1-aafe-b6b1-ac33-bf8d9bd83ed1" May 8 00:48:30.933412 containerd[1468]: 2025-05-08 00:48:30.897 [INFO][4274] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" iface="eth0" netns="/var/run/netns/cni-68d74fd1-aafe-b6b1-ac33-bf8d9bd83ed1" May 8 00:48:30.933412 containerd[1468]: 2025-05-08 00:48:30.897 [INFO][4274] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" May 8 00:48:30.933412 containerd[1468]: 2025-05-08 00:48:30.897 [INFO][4274] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" May 8 00:48:30.933412 containerd[1468]: 2025-05-08 00:48:30.916 [INFO][4290] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" HandleID="k8s-pod-network.af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" Workload="localhost-k8s-coredns--6f6b679f8f--vxpm5-eth0" May 8 00:48:30.933412 containerd[1468]: 2025-05-08 00:48:30.916 [INFO][4290] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:48:30.933412 containerd[1468]: 2025-05-08 00:48:30.922 [INFO][4290] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:48:30.933412 containerd[1468]: 2025-05-08 00:48:30.927 [WARNING][4290] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" HandleID="k8s-pod-network.af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" Workload="localhost-k8s-coredns--6f6b679f8f--vxpm5-eth0" May 8 00:48:30.933412 containerd[1468]: 2025-05-08 00:48:30.927 [INFO][4290] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" HandleID="k8s-pod-network.af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" Workload="localhost-k8s-coredns--6f6b679f8f--vxpm5-eth0" May 8 00:48:30.933412 containerd[1468]: 2025-05-08 00:48:30.928 [INFO][4290] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:48:30.933412 containerd[1468]: 2025-05-08 00:48:30.931 [INFO][4274] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" May 8 00:48:30.933833 containerd[1468]: time="2025-05-08T00:48:30.933583126Z" level=info msg="TearDown network for sandbox \"af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace\" successfully" May 8 00:48:30.933833 containerd[1468]: time="2025-05-08T00:48:30.933610738Z" level=info msg="StopPodSandbox for \"af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace\" returns successfully" May 8 00:48:30.933945 kubelet[2520]: E0508 00:48:30.933921 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:30.934432 containerd[1468]: time="2025-05-08T00:48:30.934408164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vxpm5,Uid:ccc040eb-4bcd-483b-a0e3-5bbd655bd91d,Namespace:kube-system,Attempt:1,}" May 8 00:48:30.970051 systemd[1]: run-netns-cni\x2d68d74fd1\x2daafe\x2db6b1\x2dac33\x2dbf8d9bd83ed1.mount: Deactivated successfully. May 8 00:48:30.970189 systemd[1]: run-netns-cni\x2d0403ebba\x2d0f55\x2d3b96\x2db434\x2d70d471064721.mount: Deactivated successfully. 
May 8 00:48:31.270399 systemd-networkd[1395]: calie7c1c57f152: Link UP May 8 00:48:31.270658 systemd-networkd[1395]: calie7c1c57f152: Gained carrier May 8 00:48:31.282698 containerd[1468]: 2025-05-08 00:48:31.208 [INFO][4305] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--55bbbd787d--nnsrx-eth0 calico-apiserver-55bbbd787d- calico-apiserver 714dc4ce-e252-47a3-96ad-e69d699c235e 859 0 2025-05-08 00:48:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55bbbd787d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-55bbbd787d-nnsrx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie7c1c57f152 [] []}} ContainerID="342b8a231e55d449edc2c671a3e7295eae9ce6a17fa9e7f4218b320a6aa975c5" Namespace="calico-apiserver" Pod="calico-apiserver-55bbbd787d-nnsrx" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbbd787d--nnsrx-" May 8 00:48:31.282698 containerd[1468]: 2025-05-08 00:48:31.208 [INFO][4305] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="342b8a231e55d449edc2c671a3e7295eae9ce6a17fa9e7f4218b320a6aa975c5" Namespace="calico-apiserver" Pod="calico-apiserver-55bbbd787d-nnsrx" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbbd787d--nnsrx-eth0" May 8 00:48:31.282698 containerd[1468]: 2025-05-08 00:48:31.237 [INFO][4334] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="342b8a231e55d449edc2c671a3e7295eae9ce6a17fa9e7f4218b320a6aa975c5" HandleID="k8s-pod-network.342b8a231e55d449edc2c671a3e7295eae9ce6a17fa9e7f4218b320a6aa975c5" Workload="localhost-k8s-calico--apiserver--55bbbd787d--nnsrx-eth0" May 8 00:48:31.282698 containerd[1468]: 2025-05-08 00:48:31.244 [INFO][4334] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="342b8a231e55d449edc2c671a3e7295eae9ce6a17fa9e7f4218b320a6aa975c5" HandleID="k8s-pod-network.342b8a231e55d449edc2c671a3e7295eae9ce6a17fa9e7f4218b320a6aa975c5" Workload="localhost-k8s-calico--apiserver--55bbbd787d--nnsrx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000132340), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-55bbbd787d-nnsrx", "timestamp":"2025-05-08 00:48:31.237621706 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:48:31.282698 containerd[1468]: 2025-05-08 00:48:31.244 [INFO][4334] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:48:31.282698 containerd[1468]: 2025-05-08 00:48:31.244 [INFO][4334] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:48:31.282698 containerd[1468]: 2025-05-08 00:48:31.244 [INFO][4334] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:48:31.282698 containerd[1468]: 2025-05-08 00:48:31.246 [INFO][4334] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.342b8a231e55d449edc2c671a3e7295eae9ce6a17fa9e7f4218b320a6aa975c5" host="localhost" May 8 00:48:31.282698 containerd[1468]: 2025-05-08 00:48:31.249 [INFO][4334] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:48:31.282698 containerd[1468]: 2025-05-08 00:48:31.253 [INFO][4334] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:48:31.282698 containerd[1468]: 2025-05-08 00:48:31.254 [INFO][4334] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:48:31.282698 containerd[1468]: 2025-05-08 00:48:31.256 [INFO][4334] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:48:31.282698 containerd[1468]: 2025-05-08 00:48:31.256 [INFO][4334] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.342b8a231e55d449edc2c671a3e7295eae9ce6a17fa9e7f4218b320a6aa975c5" host="localhost" May 8 00:48:31.282698 containerd[1468]: 2025-05-08 00:48:31.257 [INFO][4334] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.342b8a231e55d449edc2c671a3e7295eae9ce6a17fa9e7f4218b320a6aa975c5 May 8 00:48:31.282698 containerd[1468]: 2025-05-08 00:48:31.260 [INFO][4334] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.342b8a231e55d449edc2c671a3e7295eae9ce6a17fa9e7f4218b320a6aa975c5" host="localhost" May 8 00:48:31.282698 containerd[1468]: 2025-05-08 00:48:31.265 [INFO][4334] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.342b8a231e55d449edc2c671a3e7295eae9ce6a17fa9e7f4218b320a6aa975c5" host="localhost" May 8 00:48:31.282698 containerd[1468]: 2025-05-08 00:48:31.265 [INFO][4334] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.342b8a231e55d449edc2c671a3e7295eae9ce6a17fa9e7f4218b320a6aa975c5" host="localhost" May 8 00:48:31.282698 containerd[1468]: 2025-05-08 00:48:31.265 [INFO][4334] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
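[Editor's note] The IPAM trace above shows Calico's block-affinity model: node "localhost" holds an affinity for the /26 block 192.168.88.128/26, so each pod scheduled here receives the next free address from that block (.130, .131, .132, ...), and the host-wide IPAM lock serializes the claim-and-write of the block. A minimal stdlib sketch of the containment arithmetic involved:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The node-affine block reported in the log above.
	block := netip.MustParsePrefix("192.168.88.128/26")

	// Addresses handed out to the sandboxes in this section.
	for _, s := range []string{"192.168.88.130", "192.168.88.131", "192.168.88.132", "192.168.88.133"} {
		addr := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(addr))
	}

	// A /26 spans 2^(32-26) = 64 addresses.
	fmt.Println("block size:", 1<<(32-block.Bits()))
}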
May 8 00:48:31.282698 containerd[1468]: 2025-05-08 00:48:31.265 [INFO][4334] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="342b8a231e55d449edc2c671a3e7295eae9ce6a17fa9e7f4218b320a6aa975c5" HandleID="k8s-pod-network.342b8a231e55d449edc2c671a3e7295eae9ce6a17fa9e7f4218b320a6aa975c5" Workload="localhost-k8s-calico--apiserver--55bbbd787d--nnsrx-eth0" May 8 00:48:31.283250 containerd[1468]: 2025-05-08 00:48:31.267 [INFO][4305] cni-plugin/k8s.go 386: Populated endpoint ContainerID="342b8a231e55d449edc2c671a3e7295eae9ce6a17fa9e7f4218b320a6aa975c5" Namespace="calico-apiserver" Pod="calico-apiserver-55bbbd787d-nnsrx" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbbd787d--nnsrx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55bbbd787d--nnsrx-eth0", GenerateName:"calico-apiserver-55bbbd787d-", Namespace:"calico-apiserver", SelfLink:"", UID:"714dc4ce-e252-47a3-96ad-e69d699c235e", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55bbbd787d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-55bbbd787d-nnsrx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie7c1c57f152", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:48:31.283250 containerd[1468]: 2025-05-08 00:48:31.268 [INFO][4305] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="342b8a231e55d449edc2c671a3e7295eae9ce6a17fa9e7f4218b320a6aa975c5" Namespace="calico-apiserver" Pod="calico-apiserver-55bbbd787d-nnsrx" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbbd787d--nnsrx-eth0" May 8 00:48:31.283250 containerd[1468]: 2025-05-08 00:48:31.268 [INFO][4305] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie7c1c57f152 ContainerID="342b8a231e55d449edc2c671a3e7295eae9ce6a17fa9e7f4218b320a6aa975c5" Namespace="calico-apiserver" Pod="calico-apiserver-55bbbd787d-nnsrx" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbbd787d--nnsrx-eth0" May 8 00:48:31.283250 containerd[1468]: 2025-05-08 00:48:31.270 [INFO][4305] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="342b8a231e55d449edc2c671a3e7295eae9ce6a17fa9e7f4218b320a6aa975c5" Namespace="calico-apiserver" Pod="calico-apiserver-55bbbd787d-nnsrx" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbbd787d--nnsrx-eth0" May 8 00:48:31.283250 containerd[1468]: 2025-05-08 00:48:31.270 [INFO][4305] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="342b8a231e55d449edc2c671a3e7295eae9ce6a17fa9e7f4218b320a6aa975c5" 
Namespace="calico-apiserver" Pod="calico-apiserver-55bbbd787d-nnsrx" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbbd787d--nnsrx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55bbbd787d--nnsrx-eth0", GenerateName:"calico-apiserver-55bbbd787d-", Namespace:"calico-apiserver", SelfLink:"", UID:"714dc4ce-e252-47a3-96ad-e69d699c235e", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55bbbd787d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"342b8a231e55d449edc2c671a3e7295eae9ce6a17fa9e7f4218b320a6aa975c5", Pod:"calico-apiserver-55bbbd787d-nnsrx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie7c1c57f152", MAC:"a2:2e:7a:4c:cf:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:48:31.283250 containerd[1468]: 2025-05-08 00:48:31.280 [INFO][4305] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="342b8a231e55d449edc2c671a3e7295eae9ce6a17fa9e7f4218b320a6aa975c5" Namespace="calico-apiserver" Pod="calico-apiserver-55bbbd787d-nnsrx" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbbd787d--nnsrx-eth0" May 8 00:48:31.320116 containerd[1468]: time="2025-05-08T00:48:31.318912223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:48:31.320116 containerd[1468]: time="2025-05-08T00:48:31.318959622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:48:31.320116 containerd[1468]: time="2025-05-08T00:48:31.318970753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:48:31.320116 containerd[1468]: time="2025-05-08T00:48:31.319048368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:48:31.345676 systemd[1]: Started cri-containerd-342b8a231e55d449edc2c671a3e7295eae9ce6a17fa9e7f4218b320a6aa975c5.scope - libcontainer container 342b8a231e55d449edc2c671a3e7295eae9ce6a17fa9e7f4218b320a6aa975c5. 
May 8 00:48:31.358853 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:48:31.385160 containerd[1468]: time="2025-05-08T00:48:31.385089218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbbd787d-nnsrx,Uid:714dc4ce-e252-47a3-96ad-e69d699c235e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"342b8a231e55d449edc2c671a3e7295eae9ce6a17fa9e7f4218b320a6aa975c5\"" May 8 00:48:31.427754 systemd-networkd[1395]: cali53b19abc5f7: Gained IPv6LL May 8 00:48:31.542648 systemd-networkd[1395]: cali5c83f527eb0: Link UP May 8 00:48:31.542897 systemd-networkd[1395]: cali5c83f527eb0: Gained carrier May 8 00:48:31.559009 containerd[1468]: 2025-05-08 00:48:31.215 [INFO][4318] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--vxpm5-eth0 coredns-6f6b679f8f- kube-system ccc040eb-4bcd-483b-a0e3-5bbd655bd91d 860 0 2025-05-08 00:47:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-vxpm5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5c83f527eb0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="83bf053fd4d545536cb4ca3dc729fcf6415851c56a52c6ef315d1b8519d9f059" Namespace="kube-system" Pod="coredns-6f6b679f8f-vxpm5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--vxpm5-" May 8 00:48:31.559009 containerd[1468]: 2025-05-08 00:48:31.215 [INFO][4318] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="83bf053fd4d545536cb4ca3dc729fcf6415851c56a52c6ef315d1b8519d9f059" Namespace="kube-system" Pod="coredns-6f6b679f8f-vxpm5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--vxpm5-eth0" May 8 00:48:31.559009 containerd[1468]: 2025-05-08 00:48:31.239 [INFO][4340] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="83bf053fd4d545536cb4ca3dc729fcf6415851c56a52c6ef315d1b8519d9f059" HandleID="k8s-pod-network.83bf053fd4d545536cb4ca3dc729fcf6415851c56a52c6ef315d1b8519d9f059" Workload="localhost-k8s-coredns--6f6b679f8f--vxpm5-eth0" May 8 00:48:31.559009 containerd[1468]: 2025-05-08 00:48:31.245 [INFO][4340] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="83bf053fd4d545536cb4ca3dc729fcf6415851c56a52c6ef315d1b8519d9f059" HandleID="k8s-pod-network.83bf053fd4d545536cb4ca3dc729fcf6415851c56a52c6ef315d1b8519d9f059" Workload="localhost-k8s-coredns--6f6b679f8f--vxpm5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027dee0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-vxpm5", "timestamp":"2025-05-08 00:48:31.239133083 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:48:31.559009 containerd[1468]: 2025-05-08 00:48:31.245 [INFO][4340] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:48:31.559009 containerd[1468]: 2025-05-08 00:48:31.265 [INFO][4340] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:48:31.559009 containerd[1468]: 2025-05-08 00:48:31.265 [INFO][4340] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:48:31.559009 containerd[1468]: 2025-05-08 00:48:31.347 [INFO][4340] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.83bf053fd4d545536cb4ca3dc729fcf6415851c56a52c6ef315d1b8519d9f059" host="localhost" May 8 00:48:31.559009 containerd[1468]: 2025-05-08 00:48:31.351 [INFO][4340] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:48:31.559009 containerd[1468]: 2025-05-08 00:48:31.355 [INFO][4340] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:48:31.559009 containerd[1468]: 2025-05-08 00:48:31.356 [INFO][4340] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:48:31.559009 containerd[1468]: 2025-05-08 00:48:31.358 [INFO][4340] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:48:31.559009 containerd[1468]: 2025-05-08 00:48:31.358 [INFO][4340] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.83bf053fd4d545536cb4ca3dc729fcf6415851c56a52c6ef315d1b8519d9f059" host="localhost" May 8 00:48:31.559009 containerd[1468]: 2025-05-08 00:48:31.359 [INFO][4340] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.83bf053fd4d545536cb4ca3dc729fcf6415851c56a52c6ef315d1b8519d9f059 May 8 00:48:31.559009 containerd[1468]: 2025-05-08 00:48:31.469 [INFO][4340] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.83bf053fd4d545536cb4ca3dc729fcf6415851c56a52c6ef315d1b8519d9f059" host="localhost" May 8 00:48:31.559009 containerd[1468]: 2025-05-08 00:48:31.537 [INFO][4340] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.83bf053fd4d545536cb4ca3dc729fcf6415851c56a52c6ef315d1b8519d9f059" host="localhost" May 8 00:48:31.559009 containerd[1468]: 2025-05-08 00:48:31.537 [INFO][4340] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.83bf053fd4d545536cb4ca3dc729fcf6415851c56a52c6ef315d1b8519d9f059" host="localhost" May 8 00:48:31.559009 containerd[1468]: 2025-05-08 00:48:31.537 [INFO][4340] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:48:31.559009 containerd[1468]: 2025-05-08 00:48:31.537 [INFO][4340] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="83bf053fd4d545536cb4ca3dc729fcf6415851c56a52c6ef315d1b8519d9f059" HandleID="k8s-pod-network.83bf053fd4d545536cb4ca3dc729fcf6415851c56a52c6ef315d1b8519d9f059" Workload="localhost-k8s-coredns--6f6b679f8f--vxpm5-eth0" May 8 00:48:31.560712 containerd[1468]: 2025-05-08 00:48:31.540 [INFO][4318] cni-plugin/k8s.go 386: Populated endpoint ContainerID="83bf053fd4d545536cb4ca3dc729fcf6415851c56a52c6ef315d1b8519d9f059" Namespace="kube-system" Pod="coredns-6f6b679f8f-vxpm5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--vxpm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--vxpm5-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ccc040eb-4bcd-483b-a0e3-5bbd655bd91d", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 47, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-vxpm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5c83f527eb0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:48:31.560712 containerd[1468]: 2025-05-08 00:48:31.540 [INFO][4318] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="83bf053fd4d545536cb4ca3dc729fcf6415851c56a52c6ef315d1b8519d9f059" Namespace="kube-system" Pod="coredns-6f6b679f8f-vxpm5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--vxpm5-eth0" May 8 00:48:31.560712 containerd[1468]: 2025-05-08 00:48:31.540 [INFO][4318] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c83f527eb0 ContainerID="83bf053fd4d545536cb4ca3dc729fcf6415851c56a52c6ef315d1b8519d9f059" Namespace="kube-system" Pod="coredns-6f6b679f8f-vxpm5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--vxpm5-eth0" May 8 00:48:31.560712 containerd[1468]: 2025-05-08 00:48:31.542 [INFO][4318] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="83bf053fd4d545536cb4ca3dc729fcf6415851c56a52c6ef315d1b8519d9f059" Namespace="kube-system" Pod="coredns-6f6b679f8f-vxpm5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--vxpm5-eth0" May 8 00:48:31.560712 containerd[1468]: 2025-05-08 00:48:31.542 [INFO][4318] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="83bf053fd4d545536cb4ca3dc729fcf6415851c56a52c6ef315d1b8519d9f059" Namespace="kube-system" Pod="coredns-6f6b679f8f-vxpm5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--vxpm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--vxpm5-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ccc040eb-4bcd-483b-a0e3-5bbd655bd91d", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 47, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"83bf053fd4d545536cb4ca3dc729fcf6415851c56a52c6ef315d1b8519d9f059", Pod:"coredns-6f6b679f8f-vxpm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5c83f527eb0", MAC:"16:1a:10:71:ac:3f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:48:31.560712 containerd[1468]: 2025-05-08 00:48:31.555 [INFO][4318] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="83bf053fd4d545536cb4ca3dc729fcf6415851c56a52c6ef315d1b8519d9f059" Namespace="kube-system" Pod="coredns-6f6b679f8f-vxpm5" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--vxpm5-eth0" May 8 00:48:31.585762 containerd[1468]: time="2025-05-08T00:48:31.585637803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:48:31.585762 containerd[1468]: time="2025-05-08T00:48:31.585728363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:48:31.585762 containerd[1468]: time="2025-05-08T00:48:31.585747850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:48:31.585935 containerd[1468]: time="2025-05-08T00:48:31.585856163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:48:31.603661 systemd[1]: Started cri-containerd-83bf053fd4d545536cb4ca3dc729fcf6415851c56a52c6ef315d1b8519d9f059.scope - libcontainer container 83bf053fd4d545536cb4ca3dc729fcf6415851c56a52c6ef315d1b8519d9f059. 
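[Editor's note] Unlike the apiserver endpoints, the coredns WorkloadEndpoint above carries named ports, which the Go struct dump prints in hex: Port:0x35 is 53 (the dns UDP and dns-tcp TCP ports) and Port:0x23c1 is 9153, the conventional CoreDNS Prometheus metrics port. A trivial check of the conversion:

package main

import "fmt"

func main() {
	// Named ports from the coredns WorkloadEndpoint dump, with the hex values
	// printed by the Go struct formatter converted back to decimal.
	ports := map[string]uint16{
		"dns":     0x35,   // 53/UDP
		"dns-tcp": 0x35,   // 53/TCP
		"metrics": 0x23c1, // 9153/TCP, CoreDNS metrics endpoint
	}
	for name, p := range ports {
		fmt.Printf("%-8s %d\n", name, p)
	}
}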
May 8 00:48:31.615777 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:48:31.637957 containerd[1468]: time="2025-05-08T00:48:31.637907382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vxpm5,Uid:ccc040eb-4bcd-483b-a0e3-5bbd655bd91d,Namespace:kube-system,Attempt:1,} returns sandbox id \"83bf053fd4d545536cb4ca3dc729fcf6415851c56a52c6ef315d1b8519d9f059\"" May 8 00:48:31.638686 kubelet[2520]: E0508 00:48:31.638663 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:31.640564 containerd[1468]: time="2025-05-08T00:48:31.640500069Z" level=info msg="CreateContainer within sandbox \"83bf053fd4d545536cb4ca3dc729fcf6415851c56a52c6ef315d1b8519d9f059\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:48:31.670711 containerd[1468]: time="2025-05-08T00:48:31.670636039Z" level=info msg="CreateContainer within sandbox \"83bf053fd4d545536cb4ca3dc729fcf6415851c56a52c6ef315d1b8519d9f059\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"41c8263f7db4ae64b87e58095cd26832d20f3401122d10d832ce29462802cced\"" May 8 00:48:31.671350 containerd[1468]: time="2025-05-08T00:48:31.671292552Z" level=info msg="StartContainer for \"41c8263f7db4ae64b87e58095cd26832d20f3401122d10d832ce29462802cced\"" May 8 00:48:31.702666 systemd[1]: Started cri-containerd-41c8263f7db4ae64b87e58095cd26832d20f3401122d10d832ce29462802cced.scope - libcontainer container 41c8263f7db4ae64b87e58095cd26832d20f3401122d10d832ce29462802cced. May 8 00:48:31.744438 containerd[1468]: time="2025-05-08T00:48:31.743179205Z" level=info msg="StartContainer for \"41c8263f7db4ae64b87e58095cd26832d20f3401122d10d832ce29462802cced\" returns successfully" May 8 00:48:31.848735 containerd[1468]: time="2025-05-08T00:48:31.848311377Z" level=info msg="StopPodSandbox for \"31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b\"" May 8 00:48:32.098815 containerd[1468]: 2025-05-08 00:48:32.050 [INFO][4519] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" May 8 00:48:32.098815 containerd[1468]: 2025-05-08 00:48:32.050 [INFO][4519] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" iface="eth0" netns="/var/run/netns/cni-805b0ed9-ddda-ec1c-a7cd-1e7a4a6ce795" May 8 00:48:32.098815 containerd[1468]: 2025-05-08 00:48:32.050 [INFO][4519] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" iface="eth0" netns="/var/run/netns/cni-805b0ed9-ddda-ec1c-a7cd-1e7a4a6ce795" May 8 00:48:32.098815 containerd[1468]: 2025-05-08 00:48:32.051 [INFO][4519] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" iface="eth0" netns="/var/run/netns/cni-805b0ed9-ddda-ec1c-a7cd-1e7a4a6ce795" May 8 00:48:32.098815 containerd[1468]: 2025-05-08 00:48:32.051 [INFO][4519] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" May 8 00:48:32.098815 containerd[1468]: 2025-05-08 00:48:32.051 [INFO][4519] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" May 8 00:48:32.098815 containerd[1468]: 2025-05-08 00:48:32.086 [INFO][4528] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" HandleID="k8s-pod-network.31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" Workload="localhost-k8s-coredns--6f6b679f8f--p2xjw-eth0" May 8 00:48:32.098815 containerd[1468]: 2025-05-08 00:48:32.086 [INFO][4528] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:48:32.098815 containerd[1468]: 2025-05-08 00:48:32.086 [INFO][4528] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:48:32.098815 containerd[1468]: 2025-05-08 00:48:32.092 [WARNING][4528] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" HandleID="k8s-pod-network.31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" Workload="localhost-k8s-coredns--6f6b679f8f--p2xjw-eth0" May 8 00:48:32.098815 containerd[1468]: 2025-05-08 00:48:32.092 [INFO][4528] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" HandleID="k8s-pod-network.31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" Workload="localhost-k8s-coredns--6f6b679f8f--p2xjw-eth0" May 8 00:48:32.098815 containerd[1468]: 2025-05-08 00:48:32.093 [INFO][4528] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:48:32.098815 containerd[1468]: 2025-05-08 00:48:32.095 [INFO][4519] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" May 8 00:48:32.101649 containerd[1468]: time="2025-05-08T00:48:32.101607049Z" level=info msg="TearDown network for sandbox \"31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b\" successfully" May 8 00:48:32.101649 containerd[1468]: time="2025-05-08T00:48:32.101642225Z" level=info msg="StopPodSandbox for \"31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b\" returns successfully" May 8 00:48:32.101902 systemd[1]: run-netns-cni\x2d805b0ed9\x2dddda\x2dec1c\x2da7cd\x2d1e7a4a6ce795.mount: Deactivated successfully. 
May 8 00:48:32.102238 kubelet[2520]: E0508 00:48:32.102037 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:32.102511 containerd[1468]: time="2025-05-08T00:48:32.102419814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-p2xjw,Uid:d5677d4f-cfc9-48a7-bd1a-0da37dd788a8,Namespace:kube-system,Attempt:1,}" May 8 00:48:32.131716 systemd-networkd[1395]: calid3219577cfa: Gained IPv6LL May 8 00:48:32.415301 kubelet[2520]: E0508 00:48:32.414232 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:32.449761 kubelet[2520]: I0508 00:48:32.449486 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-vxpm5" podStartSLOduration=36.449463274 podStartE2EDuration="36.449463274s" podCreationTimestamp="2025-05-08 00:47:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:48:32.431720149 +0000 UTC m=+41.659823089" watchObservedRunningTime="2025-05-08 00:48:32.449463274 +0000 UTC m=+41.677566214" May 8 00:48:32.534367 systemd-networkd[1395]: calibc46683f00f: Link UP May 8 00:48:32.535160 systemd-networkd[1395]: calibc46683f00f: Gained carrier May 8 00:48:32.555178 containerd[1468]: 2025-05-08 00:48:32.454 [INFO][4543] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--p2xjw-eth0 coredns-6f6b679f8f- kube-system d5677d4f-cfc9-48a7-bd1a-0da37dd788a8 882 0 2025-05-08 00:47:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-p2xjw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibc46683f00f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8b797b10feb00dfbd92d9918477547ad86ca24d9a267deb20503ded5f5e903e8" Namespace="kube-system" Pod="coredns-6f6b679f8f-p2xjw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--p2xjw-" May 8 00:48:32.555178 containerd[1468]: 2025-05-08 00:48:32.454 [INFO][4543] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8b797b10feb00dfbd92d9918477547ad86ca24d9a267deb20503ded5f5e903e8" Namespace="kube-system" Pod="coredns-6f6b679f8f-p2xjw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--p2xjw-eth0" May 8 00:48:32.555178 containerd[1468]: 2025-05-08 00:48:32.490 [INFO][4560] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b797b10feb00dfbd92d9918477547ad86ca24d9a267deb20503ded5f5e903e8" HandleID="k8s-pod-network.8b797b10feb00dfbd92d9918477547ad86ca24d9a267deb20503ded5f5e903e8" Workload="localhost-k8s-coredns--6f6b679f8f--p2xjw-eth0" May 8 00:48:32.555178 containerd[1468]: 2025-05-08 00:48:32.499 [INFO][4560] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8b797b10feb00dfbd92d9918477547ad86ca24d9a267deb20503ded5f5e903e8" HandleID="k8s-pod-network.8b797b10feb00dfbd92d9918477547ad86ca24d9a267deb20503ded5f5e903e8" Workload="localhost-k8s-coredns--6f6b679f8f--p2xjw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003756d0), 
Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-p2xjw", "timestamp":"2025-05-08 00:48:32.490699147 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:48:32.555178 containerd[1468]: 2025-05-08 00:48:32.500 [INFO][4560] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:48:32.555178 containerd[1468]: 2025-05-08 00:48:32.500 [INFO][4560] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:48:32.555178 containerd[1468]: 2025-05-08 00:48:32.500 [INFO][4560] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:48:32.555178 containerd[1468]: 2025-05-08 00:48:32.502 [INFO][4560] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8b797b10feb00dfbd92d9918477547ad86ca24d9a267deb20503ded5f5e903e8" host="localhost" May 8 00:48:32.555178 containerd[1468]: 2025-05-08 00:48:32.506 [INFO][4560] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:48:32.555178 containerd[1468]: 2025-05-08 00:48:32.510 [INFO][4560] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:48:32.555178 containerd[1468]: 2025-05-08 00:48:32.512 [INFO][4560] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:48:32.555178 containerd[1468]: 2025-05-08 00:48:32.514 [INFO][4560] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:48:32.555178 containerd[1468]: 2025-05-08 00:48:32.514 [INFO][4560] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8b797b10feb00dfbd92d9918477547ad86ca24d9a267deb20503ded5f5e903e8" host="localhost" May 8 00:48:32.555178 containerd[1468]: 2025-05-08 00:48:32.516 [INFO][4560] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8b797b10feb00dfbd92d9918477547ad86ca24d9a267deb20503ded5f5e903e8 May 8 00:48:32.555178 containerd[1468]: 2025-05-08 00:48:32.521 [INFO][4560] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8b797b10feb00dfbd92d9918477547ad86ca24d9a267deb20503ded5f5e903e8" host="localhost" May 8 00:48:32.555178 containerd[1468]: 2025-05-08 00:48:32.528 [INFO][4560] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.8b797b10feb00dfbd92d9918477547ad86ca24d9a267deb20503ded5f5e903e8" host="localhost" May 8 00:48:32.555178 containerd[1468]: 2025-05-08 00:48:32.528 [INFO][4560] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8b797b10feb00dfbd92d9918477547ad86ca24d9a267deb20503ded5f5e903e8" host="localhost" May 8 00:48:32.555178 containerd[1468]: 2025-05-08 00:48:32.528 [INFO][4560] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:48:32.555178 containerd[1468]: 2025-05-08 00:48:32.528 [INFO][4560] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8b797b10feb00dfbd92d9918477547ad86ca24d9a267deb20503ded5f5e903e8" HandleID="k8s-pod-network.8b797b10feb00dfbd92d9918477547ad86ca24d9a267deb20503ded5f5e903e8" Workload="localhost-k8s-coredns--6f6b679f8f--p2xjw-eth0" May 8 00:48:32.556049 containerd[1468]: 2025-05-08 00:48:32.531 [INFO][4543] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8b797b10feb00dfbd92d9918477547ad86ca24d9a267deb20503ded5f5e903e8" Namespace="kube-system" Pod="coredns-6f6b679f8f-p2xjw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--p2xjw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--p2xjw-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d5677d4f-cfc9-48a7-bd1a-0da37dd788a8", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 47, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-p2xjw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibc46683f00f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:48:32.556049 containerd[1468]: 2025-05-08 00:48:32.531 [INFO][4543] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8b797b10feb00dfbd92d9918477547ad86ca24d9a267deb20503ded5f5e903e8" Namespace="kube-system" Pod="coredns-6f6b679f8f-p2xjw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--p2xjw-eth0" May 8 00:48:32.556049 containerd[1468]: 2025-05-08 00:48:32.531 [INFO][4543] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc46683f00f ContainerID="8b797b10feb00dfbd92d9918477547ad86ca24d9a267deb20503ded5f5e903e8" Namespace="kube-system" Pod="coredns-6f6b679f8f-p2xjw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--p2xjw-eth0" May 8 00:48:32.556049 containerd[1468]: 2025-05-08 00:48:32.536 [INFO][4543] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8b797b10feb00dfbd92d9918477547ad86ca24d9a267deb20503ded5f5e903e8" Namespace="kube-system" Pod="coredns-6f6b679f8f-p2xjw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--p2xjw-eth0" May 8 00:48:32.556049 containerd[1468]: 2025-05-08 00:48:32.536 [INFO][4543] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8b797b10feb00dfbd92d9918477547ad86ca24d9a267deb20503ded5f5e903e8" Namespace="kube-system" Pod="coredns-6f6b679f8f-p2xjw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--p2xjw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--p2xjw-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d5677d4f-cfc9-48a7-bd1a-0da37dd788a8", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 47, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b797b10feb00dfbd92d9918477547ad86ca24d9a267deb20503ded5f5e903e8", Pod:"coredns-6f6b679f8f-p2xjw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibc46683f00f", MAC:"0e:0a:1b:92:5d:c1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:48:32.556049 containerd[1468]: 2025-05-08 00:48:32.550 [INFO][4543] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8b797b10feb00dfbd92d9918477547ad86ca24d9a267deb20503ded5f5e903e8" Namespace="kube-system" Pod="coredns-6f6b679f8f-p2xjw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--p2xjw-eth0" May 8 00:48:32.665198 containerd[1468]: time="2025-05-08T00:48:32.664934401Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:48:32.667157 containerd[1468]: time="2025-05-08T00:48:32.665848777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:48:32.667157 containerd[1468]: time="2025-05-08T00:48:32.665880337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:48:32.667157 containerd[1468]: time="2025-05-08T00:48:32.665983019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:48:32.693785 systemd[1]: Started cri-containerd-8b797b10feb00dfbd92d9918477547ad86ca24d9a267deb20503ded5f5e903e8.scope - libcontainer container 8b797b10feb00dfbd92d9918477547ad86ca24d9a267deb20503ded5f5e903e8. 
May 8 00:48:32.706441 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:48:32.733021 containerd[1468]: time="2025-05-08T00:48:32.732977624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-p2xjw,Uid:d5677d4f-cfc9-48a7-bd1a-0da37dd788a8,Namespace:kube-system,Attempt:1,} returns sandbox id \"8b797b10feb00dfbd92d9918477547ad86ca24d9a267deb20503ded5f5e903e8\"" May 8 00:48:32.733966 kubelet[2520]: E0508 00:48:32.733943 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:32.735928 containerd[1468]: time="2025-05-08T00:48:32.735897054Z" level=info msg="CreateContainer within sandbox \"8b797b10feb00dfbd92d9918477547ad86ca24d9a267deb20503ded5f5e903e8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:48:32.771682 systemd-networkd[1395]: calie7c1c57f152: Gained IPv6LL May 8 00:48:32.839239 containerd[1468]: time="2025-05-08T00:48:32.839176398Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:48:32.848114 containerd[1468]: time="2025-05-08T00:48:32.848035928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 8 00:48:32.850409 containerd[1468]: time="2025-05-08T00:48:32.849753472Z" level=info msg="StopPodSandbox for \"2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf\"" May 8 00:48:32.850409 containerd[1468]: time="2025-05-08T00:48:32.850195642Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:48:32.858936 containerd[1468]: time="2025-05-08T00:48:32.858891936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:48:32.859075 containerd[1468]: time="2025-05-08T00:48:32.858997784Z" level=info msg="CreateContainer within sandbox \"8b797b10feb00dfbd92d9918477547ad86ca24d9a267deb20503ded5f5e903e8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bc774036014d4fea8a0d72730c2161e253f80af34159be8fb68d80cc1756eb26\"" May 8 00:48:32.859555 containerd[1468]: time="2025-05-08T00:48:32.859329397Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 2.65590583s" May 8 00:48:32.859555 containerd[1468]: time="2025-05-08T00:48:32.859362479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 8 00:48:32.859695 containerd[1468]: time="2025-05-08T00:48:32.859663023Z" level=info msg="StartContainer for \"bc774036014d4fea8a0d72730c2161e253f80af34159be8fb68d80cc1756eb26\"" May 8 00:48:32.862828 containerd[1468]: time="2025-05-08T00:48:32.862512191Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 00:48:32.876956 containerd[1468]: time="2025-05-08T00:48:32.876912542Z" level=info msg="CreateContainer within sandbox \"1cbc31196e2137d9b5d139eb02df8fee8aa4587205eb1caeea612d4cc3b09508\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 8 00:48:32.909567 systemd[1]: Started cri-containerd-bc774036014d4fea8a0d72730c2161e253f80af34159be8fb68d80cc1756eb26.scope - libcontainer container bc774036014d4fea8a0d72730c2161e253f80af34159be8fb68d80cc1756eb26. May 8 00:48:32.912487 containerd[1468]: time="2025-05-08T00:48:32.911571750Z" level=info msg="CreateContainer within sandbox \"1cbc31196e2137d9b5d139eb02df8fee8aa4587205eb1caeea612d4cc3b09508\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1c47a22d85710b08768735632d7ad24ab0861b55cf1bc6b44d64ff22e77487d1\"" May 8 00:48:32.913172 containerd[1468]: time="2025-05-08T00:48:32.913131117Z" level=info msg="StartContainer for \"1c47a22d85710b08768735632d7ad24ab0861b55cf1bc6b44d64ff22e77487d1\"" May 8 00:48:32.965721 systemd[1]: Started cri-containerd-1c47a22d85710b08768735632d7ad24ab0861b55cf1bc6b44d64ff22e77487d1.scope - libcontainer container 1c47a22d85710b08768735632d7ad24ab0861b55cf1bc6b44d64ff22e77487d1. May 8 00:48:32.977645 containerd[1468]: time="2025-05-08T00:48:32.977358117Z" level=info msg="StartContainer for \"bc774036014d4fea8a0d72730c2161e253f80af34159be8fb68d80cc1756eb26\" returns successfully" May 8 00:48:32.982300 containerd[1468]: 2025-05-08 00:48:32.909 [INFO][4644] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" May 8 00:48:32.982300 containerd[1468]: 2025-05-08 00:48:32.909 [INFO][4644] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" iface="eth0" netns="/var/run/netns/cni-4eddaa43-c878-eed1-88fb-698afb66fa6f" May 8 00:48:32.982300 containerd[1468]: 2025-05-08 00:48:32.909 [INFO][4644] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" iface="eth0" netns="/var/run/netns/cni-4eddaa43-c878-eed1-88fb-698afb66fa6f" May 8 00:48:32.982300 containerd[1468]: 2025-05-08 00:48:32.911 [INFO][4644] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" iface="eth0" netns="/var/run/netns/cni-4eddaa43-c878-eed1-88fb-698afb66fa6f" May 8 00:48:32.982300 containerd[1468]: 2025-05-08 00:48:32.911 [INFO][4644] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" May 8 00:48:32.982300 containerd[1468]: 2025-05-08 00:48:32.911 [INFO][4644] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" May 8 00:48:32.982300 containerd[1468]: 2025-05-08 00:48:32.961 [INFO][4677] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" HandleID="k8s-pod-network.2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" Workload="localhost-k8s-csi--node--driver--6svb9-eth0" May 8 00:48:32.982300 containerd[1468]: 2025-05-08 00:48:32.962 [INFO][4677] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 8 00:48:32.982300 containerd[1468]: 2025-05-08 00:48:32.962 [INFO][4677] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:48:32.982300 containerd[1468]: 2025-05-08 00:48:32.972 [WARNING][4677] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" HandleID="k8s-pod-network.2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" Workload="localhost-k8s-csi--node--driver--6svb9-eth0" May 8 00:48:32.982300 containerd[1468]: 2025-05-08 00:48:32.972 [INFO][4677] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" HandleID="k8s-pod-network.2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" Workload="localhost-k8s-csi--node--driver--6svb9-eth0" May 8 00:48:32.982300 containerd[1468]: 2025-05-08 00:48:32.973 [INFO][4677] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:48:32.982300 containerd[1468]: 2025-05-08 00:48:32.978 [INFO][4644] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" May 8 00:48:32.984207 containerd[1468]: time="2025-05-08T00:48:32.983710891Z" level=info msg="TearDown network for sandbox \"2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf\" successfully" May 8 00:48:32.984207 containerd[1468]: time="2025-05-08T00:48:32.983754172Z" level=info msg="StopPodSandbox for \"2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf\" returns successfully" May 8 00:48:32.986214 containerd[1468]: time="2025-05-08T00:48:32.986179606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6svb9,Uid:e1a02ecd-8139-4fc8-add6-59265c14dd8e,Namespace:calico-system,Attempt:1,}" May 8 00:48:32.987158 systemd[1]: run-netns-cni\x2d4eddaa43\x2dc878\x2deed1\x2d88fb\x2d698afb66fa6f.mount: Deactivated successfully. 
May 8 00:48:33.027788 systemd-networkd[1395]: cali5c83f527eb0: Gained IPv6LL May 8 00:48:33.061376 containerd[1468]: time="2025-05-08T00:48:33.061305117Z" level=info msg="StartContainer for \"1c47a22d85710b08768735632d7ad24ab0861b55cf1bc6b44d64ff22e77487d1\" returns successfully" May 8 00:48:33.116890 systemd-networkd[1395]: cali80e86734f90: Link UP May 8 00:48:33.117537 systemd-networkd[1395]: cali80e86734f90: Gained carrier May 8 00:48:33.151102 containerd[1468]: 2025-05-08 00:48:33.047 [INFO][4730] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--6svb9-eth0 csi-node-driver- calico-system e1a02ecd-8139-4fc8-add6-59265c14dd8e 906 0 2025-05-08 00:48:02 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-6svb9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali80e86734f90 [] []}} ContainerID="492ec094a165bf40a7b18879dfa53ecfc01d5f88ad6270efbac0caa87063f5f1" Namespace="calico-system" Pod="csi-node-driver-6svb9" WorkloadEndpoint="localhost-k8s-csi--node--driver--6svb9-" May 8 00:48:33.151102 containerd[1468]: 2025-05-08 00:48:33.048 [INFO][4730] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="492ec094a165bf40a7b18879dfa53ecfc01d5f88ad6270efbac0caa87063f5f1" Namespace="calico-system" Pod="csi-node-driver-6svb9" WorkloadEndpoint="localhost-k8s-csi--node--driver--6svb9-eth0" May 8 00:48:33.151102 containerd[1468]: 2025-05-08 00:48:33.075 [INFO][4751] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="492ec094a165bf40a7b18879dfa53ecfc01d5f88ad6270efbac0caa87063f5f1" HandleID="k8s-pod-network.492ec094a165bf40a7b18879dfa53ecfc01d5f88ad6270efbac0caa87063f5f1" Workload="localhost-k8s-csi--node--driver--6svb9-eth0" May 8 00:48:33.151102 containerd[1468]: 2025-05-08 00:48:33.083 [INFO][4751] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="492ec094a165bf40a7b18879dfa53ecfc01d5f88ad6270efbac0caa87063f5f1" HandleID="k8s-pod-network.492ec094a165bf40a7b18879dfa53ecfc01d5f88ad6270efbac0caa87063f5f1" Workload="localhost-k8s-csi--node--driver--6svb9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027c290), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-6svb9", "timestamp":"2025-05-08 00:48:33.075839678 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:48:33.151102 containerd[1468]: 2025-05-08 00:48:33.083 [INFO][4751] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:48:33.151102 containerd[1468]: 2025-05-08 00:48:33.083 [INFO][4751] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:48:33.151102 containerd[1468]: 2025-05-08 00:48:33.083 [INFO][4751] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:48:33.151102 containerd[1468]: 2025-05-08 00:48:33.085 [INFO][4751] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.492ec094a165bf40a7b18879dfa53ecfc01d5f88ad6270efbac0caa87063f5f1" host="localhost" May 8 00:48:33.151102 containerd[1468]: 2025-05-08 00:48:33.089 [INFO][4751] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:48:33.151102 containerd[1468]: 2025-05-08 00:48:33.094 [INFO][4751] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:48:33.151102 containerd[1468]: 2025-05-08 00:48:33.096 [INFO][4751] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:48:33.151102 containerd[1468]: 2025-05-08 00:48:33.098 [INFO][4751] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:48:33.151102 containerd[1468]: 2025-05-08 00:48:33.098 [INFO][4751] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.492ec094a165bf40a7b18879dfa53ecfc01d5f88ad6270efbac0caa87063f5f1" host="localhost" May 8 00:48:33.151102 containerd[1468]: 2025-05-08 00:48:33.100 [INFO][4751] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.492ec094a165bf40a7b18879dfa53ecfc01d5f88ad6270efbac0caa87063f5f1 May 8 00:48:33.151102 containerd[1468]: 2025-05-08 00:48:33.103 [INFO][4751] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.492ec094a165bf40a7b18879dfa53ecfc01d5f88ad6270efbac0caa87063f5f1" host="localhost" May 8 00:48:33.151102 containerd[1468]: 2025-05-08 00:48:33.109 [INFO][4751] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.492ec094a165bf40a7b18879dfa53ecfc01d5f88ad6270efbac0caa87063f5f1" host="localhost" May 8 00:48:33.151102 containerd[1468]: 2025-05-08 00:48:33.109 [INFO][4751] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.492ec094a165bf40a7b18879dfa53ecfc01d5f88ad6270efbac0caa87063f5f1" host="localhost" May 8 00:48:33.151102 containerd[1468]: 2025-05-08 00:48:33.109 [INFO][4751] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:48:33.151102 containerd[1468]: 2025-05-08 00:48:33.109 [INFO][4751] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="492ec094a165bf40a7b18879dfa53ecfc01d5f88ad6270efbac0caa87063f5f1" HandleID="k8s-pod-network.492ec094a165bf40a7b18879dfa53ecfc01d5f88ad6270efbac0caa87063f5f1" Workload="localhost-k8s-csi--node--driver--6svb9-eth0" May 8 00:48:33.151745 containerd[1468]: 2025-05-08 00:48:33.113 [INFO][4730] cni-plugin/k8s.go 386: Populated endpoint ContainerID="492ec094a165bf40a7b18879dfa53ecfc01d5f88ad6270efbac0caa87063f5f1" Namespace="calico-system" Pod="csi-node-driver-6svb9" WorkloadEndpoint="localhost-k8s-csi--node--driver--6svb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6svb9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e1a02ecd-8139-4fc8-add6-59265c14dd8e", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-6svb9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali80e86734f90", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:48:33.151745 containerd[1468]: 2025-05-08 00:48:33.113 [INFO][4730] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="492ec094a165bf40a7b18879dfa53ecfc01d5f88ad6270efbac0caa87063f5f1" Namespace="calico-system" Pod="csi-node-driver-6svb9" WorkloadEndpoint="localhost-k8s-csi--node--driver--6svb9-eth0" May 8 00:48:33.151745 containerd[1468]: 2025-05-08 00:48:33.113 [INFO][4730] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali80e86734f90 ContainerID="492ec094a165bf40a7b18879dfa53ecfc01d5f88ad6270efbac0caa87063f5f1" Namespace="calico-system" Pod="csi-node-driver-6svb9" WorkloadEndpoint="localhost-k8s-csi--node--driver--6svb9-eth0" May 8 00:48:33.151745 containerd[1468]: 2025-05-08 00:48:33.117 [INFO][4730] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="492ec094a165bf40a7b18879dfa53ecfc01d5f88ad6270efbac0caa87063f5f1" Namespace="calico-system" Pod="csi-node-driver-6svb9" WorkloadEndpoint="localhost-k8s-csi--node--driver--6svb9-eth0" May 8 00:48:33.151745 containerd[1468]: 2025-05-08 00:48:33.118 [INFO][4730] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="492ec094a165bf40a7b18879dfa53ecfc01d5f88ad6270efbac0caa87063f5f1" Namespace="calico-system" Pod="csi-node-driver-6svb9" WorkloadEndpoint="localhost-k8s-csi--node--driver--6svb9-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6svb9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e1a02ecd-8139-4fc8-add6-59265c14dd8e", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"492ec094a165bf40a7b18879dfa53ecfc01d5f88ad6270efbac0caa87063f5f1", Pod:"csi-node-driver-6svb9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali80e86734f90", MAC:"c2:ad:5c:da:ac:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:48:33.151745 containerd[1468]: 2025-05-08 00:48:33.145 [INFO][4730] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="492ec094a165bf40a7b18879dfa53ecfc01d5f88ad6270efbac0caa87063f5f1" Namespace="calico-system" Pod="csi-node-driver-6svb9" WorkloadEndpoint="localhost-k8s-csi--node--driver--6svb9-eth0" May 8 00:48:33.184730 containerd[1468]: time="2025-05-08T00:48:33.184418701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:48:33.184730 containerd[1468]: time="2025-05-08T00:48:33.184483182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:48:33.184730 containerd[1468]: time="2025-05-08T00:48:33.184496828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:48:33.184730 containerd[1468]: time="2025-05-08T00:48:33.184623505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:48:33.213666 systemd[1]: Started cri-containerd-492ec094a165bf40a7b18879dfa53ecfc01d5f88ad6270efbac0caa87063f5f1.scope - libcontainer container 492ec094a165bf40a7b18879dfa53ecfc01d5f88ad6270efbac0caa87063f5f1. 
May 8 00:48:33.226055 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:48:33.238115 containerd[1468]: time="2025-05-08T00:48:33.238058743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6svb9,Uid:e1a02ecd-8139-4fc8-add6-59265c14dd8e,Namespace:calico-system,Attempt:1,} returns sandbox id \"492ec094a165bf40a7b18879dfa53ecfc01d5f88ad6270efbac0caa87063f5f1\"" May 8 00:48:33.419314 kubelet[2520]: E0508 00:48:33.419210 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:33.421258 kubelet[2520]: E0508 00:48:33.421240 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:33.444613 kubelet[2520]: I0508 00:48:33.444119 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-p2xjw" podStartSLOduration=37.444094412 podStartE2EDuration="37.444094412s" podCreationTimestamp="2025-05-08 00:47:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:48:33.432851569 +0000 UTC m=+42.660954509" watchObservedRunningTime="2025-05-08 00:48:33.444094412 +0000 UTC m=+42.672197352" May 8 00:48:33.444613 kubelet[2520]: I0508 00:48:33.444239 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6b9f64494b-ffr58" podStartSLOduration=28.786364728 podStartE2EDuration="31.444236098s" podCreationTimestamp="2025-05-08 00:48:02 +0000 UTC" firstStartedPulling="2025-05-08 00:48:30.202974885 +0000 UTC m=+39.431077825" lastFinishedPulling="2025-05-08 00:48:32.860846254 +0000 UTC m=+42.088949195" observedRunningTime="2025-05-08 00:48:33.443269714 +0000 UTC m=+42.671372654" watchObservedRunningTime="2025-05-08 00:48:33.444236098 +0000 UTC m=+42.672339038" May 8 00:48:34.062713 systemd[1]: Started sshd@10-10.0.0.152:22-10.0.0.1:44290.service - OpenSSH per-connection server daemon (10.0.0.1:44290). May 8 00:48:34.105974 sshd[4846]: Accepted publickey for core from 10.0.0.1 port 44290 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:48:34.107750 sshd[4846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:48:34.111885 systemd-logind[1450]: New session 11 of user core. May 8 00:48:34.122639 systemd[1]: Started session-11.scope - Session 11 of User core. May 8 00:48:34.425987 kubelet[2520]: E0508 00:48:34.425945 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:34.427121 kubelet[2520]: E0508 00:48:34.426125 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:34.428046 sshd[4846]: pam_unix(sshd:session): session closed for user core May 8 00:48:34.437243 systemd[1]: sshd@10-10.0.0.152:22-10.0.0.1:44290.service: Deactivated successfully. May 8 00:48:34.437351 systemd-networkd[1395]: calibc46683f00f: Gained IPv6LL May 8 00:48:34.440720 systemd[1]: session-11.scope: Deactivated successfully. 
May 8 00:48:34.445277 systemd-logind[1450]: Session 11 logged out. Waiting for processes to exit. May 8 00:48:34.458335 systemd[1]: Started sshd@11-10.0.0.152:22-10.0.0.1:44296.service - OpenSSH per-connection server daemon (10.0.0.1:44296). May 8 00:48:34.459687 systemd-logind[1450]: Removed session 11. May 8 00:48:34.493467 sshd[4866]: Accepted publickey for core from 10.0.0.1 port 44296 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:48:34.495481 sshd[4866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:48:34.504347 systemd-logind[1450]: New session 12 of user core. May 8 00:48:34.507685 systemd[1]: Started session-12.scope - Session 12 of User core. May 8 00:48:34.696320 sshd[4866]: pam_unix(sshd:session): session closed for user core May 8 00:48:34.704277 systemd[1]: sshd@11-10.0.0.152:22-10.0.0.1:44296.service: Deactivated successfully. May 8 00:48:34.707498 systemd[1]: session-12.scope: Deactivated successfully. May 8 00:48:34.708909 systemd-logind[1450]: Session 12 logged out. Waiting for processes to exit. May 8 00:48:34.715786 systemd[1]: Started sshd@12-10.0.0.152:22-10.0.0.1:44300.service - OpenSSH per-connection server daemon (10.0.0.1:44300). May 8 00:48:34.717636 systemd-logind[1450]: Removed session 12. May 8 00:48:34.756505 sshd[4879]: Accepted publickey for core from 10.0.0.1 port 44300 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:48:34.758351 sshd[4879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:48:34.764213 systemd-logind[1450]: New session 13 of user core. May 8 00:48:34.769727 systemd[1]: Started session-13.scope - Session 13 of User core. May 8 00:48:35.139707 systemd-networkd[1395]: cali80e86734f90: Gained IPv6LL May 8 00:48:35.234874 sshd[4879]: pam_unix(sshd:session): session closed for user core May 8 00:48:35.238918 systemd[1]: sshd@12-10.0.0.152:22-10.0.0.1:44300.service: Deactivated successfully. May 8 00:48:35.242617 systemd[1]: session-13.scope: Deactivated successfully. May 8 00:48:35.244348 systemd-logind[1450]: Session 13 logged out. Waiting for processes to exit. May 8 00:48:35.245425 systemd-logind[1450]: Removed session 13. 
May 8 00:48:35.264032 containerd[1468]: time="2025-05-08T00:48:35.263992096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:48:35.266960 containerd[1468]: time="2025-05-08T00:48:35.266398040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 8 00:48:35.268137 containerd[1468]: time="2025-05-08T00:48:35.268088153Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:48:35.270158 containerd[1468]: time="2025-05-08T00:48:35.270095370Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:48:35.270707 containerd[1468]: time="2025-05-08T00:48:35.270678645Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 2.408114396s" May 8 00:48:35.270773 containerd[1468]: time="2025-05-08T00:48:35.270709512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 8 00:48:35.272264 containerd[1468]: time="2025-05-08T00:48:35.272232531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 00:48:35.273079 containerd[1468]: time="2025-05-08T00:48:35.273055917Z" level=info msg="CreateContainer within sandbox \"4c8f9741b13c7a25008dd54a1c528f29ad2fb4d038a60e1edfb294617b0cfc80\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:48:35.287734 containerd[1468]: time="2025-05-08T00:48:35.287681195Z" level=info msg="CreateContainer within sandbox \"4c8f9741b13c7a25008dd54a1c528f29ad2fb4d038a60e1edfb294617b0cfc80\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0457238d18bdcbacd4149aab6b3694b460719c29d3448a9cc4d39a01d8a3fd3f\"" May 8 00:48:35.288694 containerd[1468]: time="2025-05-08T00:48:35.288656656Z" level=info msg="StartContainer for \"0457238d18bdcbacd4149aab6b3694b460719c29d3448a9cc4d39a01d8a3fd3f\"" May 8 00:48:35.321766 systemd[1]: Started cri-containerd-0457238d18bdcbacd4149aab6b3694b460719c29d3448a9cc4d39a01d8a3fd3f.scope - libcontainer container 0457238d18bdcbacd4149aab6b3694b460719c29d3448a9cc4d39a01d8a3fd3f. 
May 8 00:48:35.367778 containerd[1468]: time="2025-05-08T00:48:35.367722210Z" level=info msg="StartContainer for \"0457238d18bdcbacd4149aab6b3694b460719c29d3448a9cc4d39a01d8a3fd3f\" returns successfully" May 8 00:48:35.430296 kubelet[2520]: E0508 00:48:35.429701 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:35.723702 containerd[1468]: time="2025-05-08T00:48:35.723563979Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:48:35.724976 containerd[1468]: time="2025-05-08T00:48:35.724356126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 8 00:48:35.726739 containerd[1468]: time="2025-05-08T00:48:35.726713330Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 454.448929ms" May 8 00:48:35.726796 containerd[1468]: time="2025-05-08T00:48:35.726742235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 8 00:48:35.728383 containerd[1468]: time="2025-05-08T00:48:35.727786043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 8 00:48:35.728620 containerd[1468]: time="2025-05-08T00:48:35.728565256Z" level=info msg="CreateContainer within sandbox \"342b8a231e55d449edc2c671a3e7295eae9ce6a17fa9e7f4218b320a6aa975c5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:48:35.743875 containerd[1468]: time="2025-05-08T00:48:35.743835215Z" level=info msg="CreateContainer within sandbox \"342b8a231e55d449edc2c671a3e7295eae9ce6a17fa9e7f4218b320a6aa975c5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dcd0da041a1fd3934a4cd7be831038141dd47b0a7f60a21fd07b049a13261bfc\"" May 8 00:48:35.744354 containerd[1468]: time="2025-05-08T00:48:35.744330876Z" level=info msg="StartContainer for \"dcd0da041a1fd3934a4cd7be831038141dd47b0a7f60a21fd07b049a13261bfc\"" May 8 00:48:35.785673 systemd[1]: Started cri-containerd-dcd0da041a1fd3934a4cd7be831038141dd47b0a7f60a21fd07b049a13261bfc.scope - libcontainer container dcd0da041a1fd3934a4cd7be831038141dd47b0a7f60a21fd07b049a13261bfc. 
May 8 00:48:35.841276 containerd[1468]: time="2025-05-08T00:48:35.841235219Z" level=info msg="StartContainer for \"dcd0da041a1fd3934a4cd7be831038141dd47b0a7f60a21fd07b049a13261bfc\" returns successfully" May 8 00:48:36.280548 kubelet[2520]: I0508 00:48:36.279875 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55bbbd787d-txq5g" podStartSLOduration=29.312928517 podStartE2EDuration="34.279853647s" podCreationTimestamp="2025-05-08 00:48:02 +0000 UTC" firstStartedPulling="2025-05-08 00:48:30.304926556 +0000 UTC m=+39.533029496" lastFinishedPulling="2025-05-08 00:48:35.271851686 +0000 UTC m=+44.499954626" observedRunningTime="2025-05-08 00:48:35.444619825 +0000 UTC m=+44.672722765" watchObservedRunningTime="2025-05-08 00:48:36.279853647 +0000 UTC m=+45.507956587" May 8 00:48:37.436221 kubelet[2520]: I0508 00:48:37.436161 2520 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:48:38.385154 containerd[1468]: time="2025-05-08T00:48:38.385081705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:48:38.410322 containerd[1468]: time="2025-05-08T00:48:38.410262271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 8 00:48:38.437966 containerd[1468]: time="2025-05-08T00:48:38.437913281Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:48:38.451500 containerd[1468]: time="2025-05-08T00:48:38.451459724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:48:38.452108 containerd[1468]: time="2025-05-08T00:48:38.452075319Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 2.724252517s" May 8 00:48:38.452141 containerd[1468]: time="2025-05-08T00:48:38.452104424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 8 00:48:38.453977 containerd[1468]: time="2025-05-08T00:48:38.453911935Z" level=info msg="CreateContainer within sandbox \"492ec094a165bf40a7b18879dfa53ecfc01d5f88ad6270efbac0caa87063f5f1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 8 00:48:38.723458 containerd[1468]: time="2025-05-08T00:48:38.723396241Z" level=info msg="CreateContainer within sandbox \"492ec094a165bf40a7b18879dfa53ecfc01d5f88ad6270efbac0caa87063f5f1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5e543bde24ae7057a83a05d1e016a8eab370e2fd3bed3d0d18d5de184ce5fca9\"" May 8 00:48:38.724085 containerd[1468]: time="2025-05-08T00:48:38.723944340Z" level=info msg="StartContainer for \"5e543bde24ae7057a83a05d1e016a8eab370e2fd3bed3d0d18d5de184ce5fca9\"" May 8 00:48:38.756700 systemd[1]: Started cri-containerd-5e543bde24ae7057a83a05d1e016a8eab370e2fd3bed3d0d18d5de184ce5fca9.scope - libcontainer container 
5e543bde24ae7057a83a05d1e016a8eab370e2fd3bed3d0d18d5de184ce5fca9. May 8 00:48:38.991079 containerd[1468]: time="2025-05-08T00:48:38.990899229Z" level=info msg="StartContainer for \"5e543bde24ae7057a83a05d1e016a8eab370e2fd3bed3d0d18d5de184ce5fca9\" returns successfully" May 8 00:48:38.992217 containerd[1468]: time="2025-05-08T00:48:38.992179642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 8 00:48:40.026618 systemd[1]: Started sshd@13-10.0.0.152:22-10.0.0.1:36104.service - OpenSSH per-connection server daemon (10.0.0.1:36104). May 8 00:48:40.073080 sshd[5028]: Accepted publickey for core from 10.0.0.1 port 36104 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:48:40.074672 sshd[5028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:48:40.079280 systemd-logind[1450]: New session 14 of user core. May 8 00:48:40.087705 systemd[1]: Started session-14.scope - Session 14 of User core. May 8 00:48:40.208589 sshd[5028]: pam_unix(sshd:session): session closed for user core May 8 00:48:40.212172 systemd[1]: sshd@13-10.0.0.152:22-10.0.0.1:36104.service: Deactivated successfully. May 8 00:48:40.214231 systemd[1]: session-14.scope: Deactivated successfully. May 8 00:48:40.214928 systemd-logind[1450]: Session 14 logged out. Waiting for processes to exit. May 8 00:48:40.215978 systemd-logind[1450]: Removed session 14. May 8 00:48:40.451978 containerd[1468]: time="2025-05-08T00:48:40.451924785Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:48:40.452739 containerd[1468]: time="2025-05-08T00:48:40.452692977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 8 00:48:40.453838 containerd[1468]: time="2025-05-08T00:48:40.453809031Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:48:40.455846 containerd[1468]: time="2025-05-08T00:48:40.455818913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:48:40.456496 containerd[1468]: time="2025-05-08T00:48:40.456465887Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 1.464256529s" May 8 00:48:40.456533 containerd[1468]: time="2025-05-08T00:48:40.456494320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 8 00:48:40.458542 containerd[1468]: time="2025-05-08T00:48:40.458501507Z" level=info msg="CreateContainer within sandbox \"492ec094a165bf40a7b18879dfa53ecfc01d5f88ad6270efbac0caa87063f5f1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 8 00:48:40.479266 containerd[1468]: time="2025-05-08T00:48:40.479216329Z" level=info msg="CreateContainer 
within sandbox \"492ec094a165bf40a7b18879dfa53ecfc01d5f88ad6270efbac0caa87063f5f1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4337ba16dff3c9563b4912eca4000be22a9f6ea99e42c2ac5859a41651bf962d\"" May 8 00:48:40.479760 containerd[1468]: time="2025-05-08T00:48:40.479710035Z" level=info msg="StartContainer for \"4337ba16dff3c9563b4912eca4000be22a9f6ea99e42c2ac5859a41651bf962d\"" May 8 00:48:40.539659 systemd[1]: Started cri-containerd-4337ba16dff3c9563b4912eca4000be22a9f6ea99e42c2ac5859a41651bf962d.scope - libcontainer container 4337ba16dff3c9563b4912eca4000be22a9f6ea99e42c2ac5859a41651bf962d. May 8 00:48:40.569812 containerd[1468]: time="2025-05-08T00:48:40.569710294Z" level=info msg="StartContainer for \"4337ba16dff3c9563b4912eca4000be22a9f6ea99e42c2ac5859a41651bf962d\" returns successfully" May 8 00:48:40.913263 kubelet[2520]: I0508 00:48:40.913216 2520 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 8 00:48:40.913263 kubelet[2520]: I0508 00:48:40.913255 2520 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 8 00:48:41.462481 kubelet[2520]: I0508 00:48:41.462051 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6svb9" podStartSLOduration=32.247889291 podStartE2EDuration="39.462034114s" podCreationTimestamp="2025-05-08 00:48:02 +0000 UTC" firstStartedPulling="2025-05-08 00:48:33.243025035 +0000 UTC m=+42.471127975" lastFinishedPulling="2025-05-08 00:48:40.457169858 +0000 UTC m=+49.685272798" observedRunningTime="2025-05-08 00:48:41.461655654 +0000 UTC m=+50.689758594" watchObservedRunningTime="2025-05-08 00:48:41.462034114 +0000 UTC m=+50.690137054" May 8 00:48:41.462481 kubelet[2520]: I0508 00:48:41.462210 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55bbbd787d-nnsrx" podStartSLOduration=35.121167583 podStartE2EDuration="39.462187942s" podCreationTimestamp="2025-05-08 00:48:02 +0000 UTC" firstStartedPulling="2025-05-08 00:48:31.386468597 +0000 UTC m=+40.614571537" lastFinishedPulling="2025-05-08 00:48:35.727488956 +0000 UTC m=+44.955591896" observedRunningTime="2025-05-08 00:48:36.443696984 +0000 UTC m=+45.671799924" watchObservedRunningTime="2025-05-08 00:48:41.462187942 +0000 UTC m=+50.690290882" May 8 00:48:45.220003 systemd[1]: Started sshd@14-10.0.0.152:22-10.0.0.1:36108.service - OpenSSH per-connection server daemon (10.0.0.1:36108). May 8 00:48:45.262720 sshd[5090]: Accepted publickey for core from 10.0.0.1 port 36108 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:48:45.264236 sshd[5090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:48:45.268000 systemd-logind[1450]: New session 15 of user core. May 8 00:48:45.277644 systemd[1]: Started session-15.scope - Session 15 of User core. May 8 00:48:45.391092 sshd[5090]: pam_unix(sshd:session): session closed for user core May 8 00:48:45.396011 systemd[1]: sshd@14-10.0.0.152:22-10.0.0.1:36108.service: Deactivated successfully. May 8 00:48:45.397954 systemd[1]: session-15.scope: Deactivated successfully. May 8 00:48:45.398655 systemd-logind[1450]: Session 15 logged out. Waiting for processes to exit. May 8 00:48:45.399606 systemd-logind[1450]: Removed session 15. 
May 8 00:48:48.776651 kubelet[2520]: I0508 00:48:48.776097 2520 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:48:50.404404 systemd[1]: Started sshd@15-10.0.0.152:22-10.0.0.1:50916.service - OpenSSH per-connection server daemon (10.0.0.1:50916). May 8 00:48:50.442369 sshd[5112]: Accepted publickey for core from 10.0.0.1 port 50916 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:48:50.443840 sshd[5112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:48:50.447673 systemd-logind[1450]: New session 16 of user core. May 8 00:48:50.460660 systemd[1]: Started session-16.scope - Session 16 of User core. May 8 00:48:50.609965 sshd[5112]: pam_unix(sshd:session): session closed for user core May 8 00:48:50.614118 systemd[1]: sshd@15-10.0.0.152:22-10.0.0.1:50916.service: Deactivated successfully. May 8 00:48:50.615894 systemd[1]: session-16.scope: Deactivated successfully. May 8 00:48:50.616428 systemd-logind[1450]: Session 16 logged out. Waiting for processes to exit. May 8 00:48:50.617234 systemd-logind[1450]: Removed session 16. May 8 00:48:50.844495 containerd[1468]: time="2025-05-08T00:48:50.844373048Z" level=info msg="StopPodSandbox for \"af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace\"" May 8 00:48:50.907254 containerd[1468]: 2025-05-08 00:48:50.874 [WARNING][5140] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--vxpm5-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ccc040eb-4bcd-483b-a0e3-5bbd655bd91d", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 47, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"83bf053fd4d545536cb4ca3dc729fcf6415851c56a52c6ef315d1b8519d9f059", Pod:"coredns-6f6b679f8f-vxpm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5c83f527eb0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:48:50.907254 containerd[1468]: 2025-05-08 00:48:50.874 [INFO][5140] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" May 8 00:48:50.907254 containerd[1468]: 2025-05-08 00:48:50.875 [INFO][5140] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" iface="eth0" netns="" May 8 00:48:50.907254 containerd[1468]: 2025-05-08 00:48:50.875 [INFO][5140] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" May 8 00:48:50.907254 containerd[1468]: 2025-05-08 00:48:50.875 [INFO][5140] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" May 8 00:48:50.907254 containerd[1468]: 2025-05-08 00:48:50.894 [INFO][5150] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" HandleID="k8s-pod-network.af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" Workload="localhost-k8s-coredns--6f6b679f8f--vxpm5-eth0" May 8 00:48:50.907254 containerd[1468]: 2025-05-08 00:48:50.895 [INFO][5150] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:48:50.907254 containerd[1468]: 2025-05-08 00:48:50.895 [INFO][5150] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:48:50.907254 containerd[1468]: 2025-05-08 00:48:50.900 [WARNING][5150] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" HandleID="k8s-pod-network.af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" Workload="localhost-k8s-coredns--6f6b679f8f--vxpm5-eth0" May 8 00:48:50.907254 containerd[1468]: 2025-05-08 00:48:50.900 [INFO][5150] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" HandleID="k8s-pod-network.af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" Workload="localhost-k8s-coredns--6f6b679f8f--vxpm5-eth0" May 8 00:48:50.907254 containerd[1468]: 2025-05-08 00:48:50.901 [INFO][5150] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:48:50.907254 containerd[1468]: 2025-05-08 00:48:50.903 [INFO][5140] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" May 8 00:48:50.907793 containerd[1468]: time="2025-05-08T00:48:50.907277992Z" level=info msg="TearDown network for sandbox \"af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace\" successfully" May 8 00:48:50.907793 containerd[1468]: time="2025-05-08T00:48:50.907298261Z" level=info msg="StopPodSandbox for \"af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace\" returns successfully" May 8 00:48:50.908290 containerd[1468]: time="2025-05-08T00:48:50.908271819Z" level=info msg="RemovePodSandbox for \"af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace\"" May 8 00:48:50.910353 containerd[1468]: time="2025-05-08T00:48:50.910333597Z" level=info msg="Forcibly stopping sandbox \"af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace\"" May 8 00:48:50.966220 containerd[1468]: 2025-05-08 00:48:50.940 [WARNING][5172] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--vxpm5-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ccc040eb-4bcd-483b-a0e3-5bbd655bd91d", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 47, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"83bf053fd4d545536cb4ca3dc729fcf6415851c56a52c6ef315d1b8519d9f059", Pod:"coredns-6f6b679f8f-vxpm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5c83f527eb0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:48:50.966220 containerd[1468]: 2025-05-08 00:48:50.940 [INFO][5172] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" May 8 00:48:50.966220 containerd[1468]: 2025-05-08 00:48:50.940 [INFO][5172] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" iface="eth0" netns="" May 8 00:48:50.966220 containerd[1468]: 2025-05-08 00:48:50.940 [INFO][5172] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" May 8 00:48:50.966220 containerd[1468]: 2025-05-08 00:48:50.940 [INFO][5172] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" May 8 00:48:50.966220 containerd[1468]: 2025-05-08 00:48:50.956 [INFO][5181] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" HandleID="k8s-pod-network.af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" Workload="localhost-k8s-coredns--6f6b679f8f--vxpm5-eth0" May 8 00:48:50.966220 containerd[1468]: 2025-05-08 00:48:50.957 [INFO][5181] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:48:50.966220 containerd[1468]: 2025-05-08 00:48:50.957 [INFO][5181] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:48:50.966220 containerd[1468]: 2025-05-08 00:48:50.961 [WARNING][5181] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" HandleID="k8s-pod-network.af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" Workload="localhost-k8s-coredns--6f6b679f8f--vxpm5-eth0" May 8 00:48:50.966220 containerd[1468]: 2025-05-08 00:48:50.961 [INFO][5181] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" HandleID="k8s-pod-network.af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" Workload="localhost-k8s-coredns--6f6b679f8f--vxpm5-eth0" May 8 00:48:50.966220 containerd[1468]: 2025-05-08 00:48:50.962 [INFO][5181] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:48:50.966220 containerd[1468]: 2025-05-08 00:48:50.964 [INFO][5172] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace" May 8 00:48:50.966663 containerd[1468]: time="2025-05-08T00:48:50.966268534Z" level=info msg="TearDown network for sandbox \"af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace\" successfully" May 8 00:48:50.993538 containerd[1468]: time="2025-05-08T00:48:50.993491533Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:48:50.993596 containerd[1468]: time="2025-05-08T00:48:50.993568061Z" level=info msg="RemovePodSandbox \"af44e6635d32a6dec3fafaef84f5ceacfea5db24ff74a69bf2ec955d4590dace\" returns successfully" May 8 00:48:50.994121 containerd[1468]: time="2025-05-08T00:48:50.994097857Z" level=info msg="StopPodSandbox for \"b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a\"" May 8 00:48:51.055687 containerd[1468]: 2025-05-08 00:48:51.029 [WARNING][5203] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55bbbd787d--nnsrx-eth0", GenerateName:"calico-apiserver-55bbbd787d-", Namespace:"calico-apiserver", SelfLink:"", UID:"714dc4ce-e252-47a3-96ad-e69d699c235e", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55bbbd787d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"342b8a231e55d449edc2c671a3e7295eae9ce6a17fa9e7f4218b320a6aa975c5", Pod:"calico-apiserver-55bbbd787d-nnsrx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie7c1c57f152", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:48:51.055687 containerd[1468]: 2025-05-08 00:48:51.029 [INFO][5203] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" May 8 00:48:51.055687 containerd[1468]: 2025-05-08 00:48:51.029 [INFO][5203] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" iface="eth0" netns="" May 8 00:48:51.055687 containerd[1468]: 2025-05-08 00:48:51.029 [INFO][5203] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" May 8 00:48:51.055687 containerd[1468]: 2025-05-08 00:48:51.029 [INFO][5203] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" May 8 00:48:51.055687 containerd[1468]: 2025-05-08 00:48:51.046 [INFO][5211] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" HandleID="k8s-pod-network.b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" Workload="localhost-k8s-calico--apiserver--55bbbd787d--nnsrx-eth0" May 8 00:48:51.055687 containerd[1468]: 2025-05-08 00:48:51.046 [INFO][5211] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:48:51.055687 containerd[1468]: 2025-05-08 00:48:51.046 [INFO][5211] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:48:51.055687 containerd[1468]: 2025-05-08 00:48:51.050 [WARNING][5211] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" HandleID="k8s-pod-network.b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" Workload="localhost-k8s-calico--apiserver--55bbbd787d--nnsrx-eth0" May 8 00:48:51.055687 containerd[1468]: 2025-05-08 00:48:51.050 [INFO][5211] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" HandleID="k8s-pod-network.b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" Workload="localhost-k8s-calico--apiserver--55bbbd787d--nnsrx-eth0" May 8 00:48:51.055687 containerd[1468]: 2025-05-08 00:48:51.051 [INFO][5211] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:48:51.055687 containerd[1468]: 2025-05-08 00:48:51.053 [INFO][5203] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" May 8 00:48:51.056263 containerd[1468]: time="2025-05-08T00:48:51.056219514Z" level=info msg="TearDown network for sandbox \"b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a\" successfully" May 8 00:48:51.056263 containerd[1468]: time="2025-05-08T00:48:51.056253720Z" level=info msg="StopPodSandbox for \"b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a\" returns successfully" May 8 00:48:51.056767 containerd[1468]: time="2025-05-08T00:48:51.056742889Z" level=info msg="RemovePodSandbox for \"b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a\"" May 8 00:48:51.056814 containerd[1468]: time="2025-05-08T00:48:51.056776292Z" level=info msg="Forcibly stopping sandbox \"b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a\"" May 8 00:48:51.115838 containerd[1468]: 2025-05-08 00:48:51.087 [WARNING][5233] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55bbbd787d--nnsrx-eth0", GenerateName:"calico-apiserver-55bbbd787d-", Namespace:"calico-apiserver", SelfLink:"", UID:"714dc4ce-e252-47a3-96ad-e69d699c235e", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55bbbd787d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"342b8a231e55d449edc2c671a3e7295eae9ce6a17fa9e7f4218b320a6aa975c5", Pod:"calico-apiserver-55bbbd787d-nnsrx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie7c1c57f152", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:48:51.115838 containerd[1468]: 2025-05-08 00:48:51.088 [INFO][5233] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" May 8 00:48:51.115838 containerd[1468]: 2025-05-08 00:48:51.088 [INFO][5233] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" iface="eth0" netns="" May 8 00:48:51.115838 containerd[1468]: 2025-05-08 00:48:51.088 [INFO][5233] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" May 8 00:48:51.115838 containerd[1468]: 2025-05-08 00:48:51.088 [INFO][5233] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" May 8 00:48:51.115838 containerd[1468]: 2025-05-08 00:48:51.106 [INFO][5241] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" HandleID="k8s-pod-network.b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" Workload="localhost-k8s-calico--apiserver--55bbbd787d--nnsrx-eth0" May 8 00:48:51.115838 containerd[1468]: 2025-05-08 00:48:51.106 [INFO][5241] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:48:51.115838 containerd[1468]: 2025-05-08 00:48:51.106 [INFO][5241] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:48:51.115838 containerd[1468]: 2025-05-08 00:48:51.110 [WARNING][5241] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" HandleID="k8s-pod-network.b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" Workload="localhost-k8s-calico--apiserver--55bbbd787d--nnsrx-eth0" May 8 00:48:51.115838 containerd[1468]: 2025-05-08 00:48:51.110 [INFO][5241] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" HandleID="k8s-pod-network.b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" Workload="localhost-k8s-calico--apiserver--55bbbd787d--nnsrx-eth0" May 8 00:48:51.115838 containerd[1468]: 2025-05-08 00:48:51.111 [INFO][5241] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:48:51.115838 containerd[1468]: 2025-05-08 00:48:51.113 [INFO][5233] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a" May 8 00:48:51.115838 containerd[1468]: time="2025-05-08T00:48:51.115806350Z" level=info msg="TearDown network for sandbox \"b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a\" successfully" May 8 00:48:51.136617 containerd[1468]: time="2025-05-08T00:48:51.136572533Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:48:51.136693 containerd[1468]: time="2025-05-08T00:48:51.136636696Z" level=info msg="RemovePodSandbox \"b33cf36e4eb33cd6ba73a234530221f6ce933b946a9a8a164cc717346e82e71a\" returns successfully" May 8 00:48:51.137081 containerd[1468]: time="2025-05-08T00:48:51.137055690Z" level=info msg="StopPodSandbox for \"2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805\"" May 8 00:48:51.206322 containerd[1468]: 2025-05-08 00:48:51.178 [WARNING][5263] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55bbbd787d--txq5g-eth0", GenerateName:"calico-apiserver-55bbbd787d-", Namespace:"calico-apiserver", SelfLink:"", UID:"22faceee-0d0f-4896-b166-6798291089f0", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55bbbd787d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c8f9741b13c7a25008dd54a1c528f29ad2fb4d038a60e1edfb294617b0cfc80", Pod:"calico-apiserver-55bbbd787d-txq5g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali53b19abc5f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:48:51.206322 containerd[1468]: 2025-05-08 00:48:51.179 [INFO][5263] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" May 8 00:48:51.206322 containerd[1468]: 2025-05-08 00:48:51.179 [INFO][5263] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" iface="eth0" netns="" May 8 00:48:51.206322 containerd[1468]: 2025-05-08 00:48:51.179 [INFO][5263] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" May 8 00:48:51.206322 containerd[1468]: 2025-05-08 00:48:51.179 [INFO][5263] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" May 8 00:48:51.206322 containerd[1468]: 2025-05-08 00:48:51.196 [INFO][5271] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" HandleID="k8s-pod-network.2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" Workload="localhost-k8s-calico--apiserver--55bbbd787d--txq5g-eth0" May 8 00:48:51.206322 containerd[1468]: 2025-05-08 00:48:51.197 [INFO][5271] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:48:51.206322 containerd[1468]: 2025-05-08 00:48:51.197 [INFO][5271] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:48:51.206322 containerd[1468]: 2025-05-08 00:48:51.201 [WARNING][5271] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" HandleID="k8s-pod-network.2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" Workload="localhost-k8s-calico--apiserver--55bbbd787d--txq5g-eth0" May 8 00:48:51.206322 containerd[1468]: 2025-05-08 00:48:51.201 [INFO][5271] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" HandleID="k8s-pod-network.2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" Workload="localhost-k8s-calico--apiserver--55bbbd787d--txq5g-eth0" May 8 00:48:51.206322 containerd[1468]: 2025-05-08 00:48:51.202 [INFO][5271] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:48:51.206322 containerd[1468]: 2025-05-08 00:48:51.204 [INFO][5263] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" May 8 00:48:51.206857 containerd[1468]: time="2025-05-08T00:48:51.206356018Z" level=info msg="TearDown network for sandbox \"2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805\" successfully" May 8 00:48:51.206857 containerd[1468]: time="2025-05-08T00:48:51.206384182Z" level=info msg="StopPodSandbox for \"2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805\" returns successfully" May 8 00:48:51.206857 containerd[1468]: time="2025-05-08T00:48:51.206820328Z" level=info msg="RemovePodSandbox for \"2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805\"" May 8 00:48:51.206857 containerd[1468]: time="2025-05-08T00:48:51.206844715Z" level=info msg="Forcibly stopping sandbox \"2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805\"" May 8 00:48:51.261022 containerd[1468]: 2025-05-08 00:48:51.234 [WARNING][5293] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55bbbd787d--txq5g-eth0", GenerateName:"calico-apiserver-55bbbd787d-", Namespace:"calico-apiserver", SelfLink:"", UID:"22faceee-0d0f-4896-b166-6798291089f0", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55bbbd787d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c8f9741b13c7a25008dd54a1c528f29ad2fb4d038a60e1edfb294617b0cfc80", Pod:"calico-apiserver-55bbbd787d-txq5g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali53b19abc5f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:48:51.261022 containerd[1468]: 2025-05-08 00:48:51.235 [INFO][5293] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" May 8 00:48:51.261022 containerd[1468]: 2025-05-08 00:48:51.235 [INFO][5293] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" iface="eth0" netns="" May 8 00:48:51.261022 containerd[1468]: 2025-05-08 00:48:51.235 [INFO][5293] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" May 8 00:48:51.261022 containerd[1468]: 2025-05-08 00:48:51.235 [INFO][5293] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" May 8 00:48:51.261022 containerd[1468]: 2025-05-08 00:48:51.251 [INFO][5301] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" HandleID="k8s-pod-network.2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" Workload="localhost-k8s-calico--apiserver--55bbbd787d--txq5g-eth0" May 8 00:48:51.261022 containerd[1468]: 2025-05-08 00:48:51.251 [INFO][5301] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:48:51.261022 containerd[1468]: 2025-05-08 00:48:51.252 [INFO][5301] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:48:51.261022 containerd[1468]: 2025-05-08 00:48:51.256 [WARNING][5301] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" HandleID="k8s-pod-network.2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" Workload="localhost-k8s-calico--apiserver--55bbbd787d--txq5g-eth0" May 8 00:48:51.261022 containerd[1468]: 2025-05-08 00:48:51.256 [INFO][5301] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" HandleID="k8s-pod-network.2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" Workload="localhost-k8s-calico--apiserver--55bbbd787d--txq5g-eth0" May 8 00:48:51.261022 containerd[1468]: 2025-05-08 00:48:51.257 [INFO][5301] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:48:51.261022 containerd[1468]: 2025-05-08 00:48:51.258 [INFO][5293] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805" May 8 00:48:51.261468 containerd[1468]: time="2025-05-08T00:48:51.261061531Z" level=info msg="TearDown network for sandbox \"2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805\" successfully" May 8 00:48:51.299634 containerd[1468]: time="2025-05-08T00:48:51.299594623Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:48:51.299714 containerd[1468]: time="2025-05-08T00:48:51.299667462Z" level=info msg="RemovePodSandbox \"2e5316bde0d2b1be3a652a64903b921546073df116c6608b4b1cc5a2485b9805\" returns successfully" May 8 00:48:51.300125 containerd[1468]: time="2025-05-08T00:48:51.300081637Z" level=info msg="StopPodSandbox for \"31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b\"" May 8 00:48:51.356732 containerd[1468]: 2025-05-08 00:48:51.329 [WARNING][5324] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--p2xjw-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d5677d4f-cfc9-48a7-bd1a-0da37dd788a8", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 47, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b797b10feb00dfbd92d9918477547ad86ca24d9a267deb20503ded5f5e903e8", Pod:"coredns-6f6b679f8f-p2xjw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibc46683f00f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:48:51.356732 containerd[1468]: 2025-05-08 00:48:51.329 [INFO][5324] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" May 8 00:48:51.356732 containerd[1468]: 2025-05-08 00:48:51.329 [INFO][5324] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" iface="eth0" netns="" May 8 00:48:51.356732 containerd[1468]: 2025-05-08 00:48:51.329 [INFO][5324] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" May 8 00:48:51.356732 containerd[1468]: 2025-05-08 00:48:51.329 [INFO][5324] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" May 8 00:48:51.356732 containerd[1468]: 2025-05-08 00:48:51.347 [INFO][5332] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" HandleID="k8s-pod-network.31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" Workload="localhost-k8s-coredns--6f6b679f8f--p2xjw-eth0" May 8 00:48:51.356732 containerd[1468]: 2025-05-08 00:48:51.347 [INFO][5332] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:48:51.356732 containerd[1468]: 2025-05-08 00:48:51.347 [INFO][5332] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:48:51.356732 containerd[1468]: 2025-05-08 00:48:51.351 [WARNING][5332] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" HandleID="k8s-pod-network.31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" Workload="localhost-k8s-coredns--6f6b679f8f--p2xjw-eth0" May 8 00:48:51.356732 containerd[1468]: 2025-05-08 00:48:51.351 [INFO][5332] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" HandleID="k8s-pod-network.31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" Workload="localhost-k8s-coredns--6f6b679f8f--p2xjw-eth0" May 8 00:48:51.356732 containerd[1468]: 2025-05-08 00:48:51.352 [INFO][5332] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:48:51.356732 containerd[1468]: 2025-05-08 00:48:51.354 [INFO][5324] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" May 8 00:48:51.357341 containerd[1468]: time="2025-05-08T00:48:51.356759713Z" level=info msg="TearDown network for sandbox \"31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b\" successfully" May 8 00:48:51.357341 containerd[1468]: time="2025-05-08T00:48:51.356796845Z" level=info msg="StopPodSandbox for \"31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b\" returns successfully" May 8 00:48:51.357341 containerd[1468]: time="2025-05-08T00:48:51.357298707Z" level=info msg="RemovePodSandbox for \"31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b\"" May 8 00:48:51.357341 containerd[1468]: time="2025-05-08T00:48:51.357326420Z" level=info msg="Forcibly stopping sandbox \"31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b\"" May 8 00:48:51.414383 containerd[1468]: 2025-05-08 00:48:51.387 [WARNING][5354] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--p2xjw-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d5677d4f-cfc9-48a7-bd1a-0da37dd788a8", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 47, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b797b10feb00dfbd92d9918477547ad86ca24d9a267deb20503ded5f5e903e8", Pod:"coredns-6f6b679f8f-p2xjw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibc46683f00f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:48:51.414383 containerd[1468]: 2025-05-08 00:48:51.387 [INFO][5354] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" May 8 00:48:51.414383 containerd[1468]: 2025-05-08 00:48:51.387 [INFO][5354] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" iface="eth0" netns="" May 8 00:48:51.414383 containerd[1468]: 2025-05-08 00:48:51.387 [INFO][5354] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" May 8 00:48:51.414383 containerd[1468]: 2025-05-08 00:48:51.387 [INFO][5354] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" May 8 00:48:51.414383 containerd[1468]: 2025-05-08 00:48:51.403 [INFO][5362] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" HandleID="k8s-pod-network.31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" Workload="localhost-k8s-coredns--6f6b679f8f--p2xjw-eth0" May 8 00:48:51.414383 containerd[1468]: 2025-05-08 00:48:51.404 [INFO][5362] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:48:51.414383 containerd[1468]: 2025-05-08 00:48:51.404 [INFO][5362] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:48:51.414383 containerd[1468]: 2025-05-08 00:48:51.409 [WARNING][5362] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" HandleID="k8s-pod-network.31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" Workload="localhost-k8s-coredns--6f6b679f8f--p2xjw-eth0" May 8 00:48:51.414383 containerd[1468]: 2025-05-08 00:48:51.409 [INFO][5362] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" HandleID="k8s-pod-network.31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" Workload="localhost-k8s-coredns--6f6b679f8f--p2xjw-eth0" May 8 00:48:51.414383 containerd[1468]: 2025-05-08 00:48:51.410 [INFO][5362] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:48:51.414383 containerd[1468]: 2025-05-08 00:48:51.412 [INFO][5354] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b" May 8 00:48:51.414810 containerd[1468]: time="2025-05-08T00:48:51.414420776Z" level=info msg="TearDown network for sandbox \"31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b\" successfully" May 8 00:48:51.437210 containerd[1468]: time="2025-05-08T00:48:51.437169851Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:48:51.437276 containerd[1468]: time="2025-05-08T00:48:51.437248363Z" level=info msg="RemovePodSandbox \"31af868bf2d474605a7a964cfc535f08f867318dab72a43bc7d3b37f07dfa00b\" returns successfully" May 8 00:48:51.437843 containerd[1468]: time="2025-05-08T00:48:51.437655984Z" level=info msg="StopPodSandbox for \"2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf\"" May 8 00:48:51.496333 containerd[1468]: 2025-05-08 00:48:51.468 [WARNING][5384] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6svb9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e1a02ecd-8139-4fc8-add6-59265c14dd8e", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"492ec094a165bf40a7b18879dfa53ecfc01d5f88ad6270efbac0caa87063f5f1", Pod:"csi-node-driver-6svb9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali80e86734f90", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:48:51.496333 containerd[1468]: 2025-05-08 00:48:51.468 [INFO][5384] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" May 8 00:48:51.496333 containerd[1468]: 2025-05-08 00:48:51.468 [INFO][5384] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" iface="eth0" netns="" May 8 00:48:51.496333 containerd[1468]: 2025-05-08 00:48:51.468 [INFO][5384] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" May 8 00:48:51.496333 containerd[1468]: 2025-05-08 00:48:51.468 [INFO][5384] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" May 8 00:48:51.496333 containerd[1468]: 2025-05-08 00:48:51.486 [INFO][5392] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" HandleID="k8s-pod-network.2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" Workload="localhost-k8s-csi--node--driver--6svb9-eth0" May 8 00:48:51.496333 containerd[1468]: 2025-05-08 00:48:51.486 [INFO][5392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:48:51.496333 containerd[1468]: 2025-05-08 00:48:51.486 [INFO][5392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:48:51.496333 containerd[1468]: 2025-05-08 00:48:51.491 [WARNING][5392] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" HandleID="k8s-pod-network.2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" Workload="localhost-k8s-csi--node--driver--6svb9-eth0" May 8 00:48:51.496333 containerd[1468]: 2025-05-08 00:48:51.491 [INFO][5392] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" HandleID="k8s-pod-network.2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" Workload="localhost-k8s-csi--node--driver--6svb9-eth0" May 8 00:48:51.496333 containerd[1468]: 2025-05-08 00:48:51.492 [INFO][5392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:48:51.496333 containerd[1468]: 2025-05-08 00:48:51.494 [INFO][5384] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" May 8 00:48:51.496743 containerd[1468]: time="2025-05-08T00:48:51.496374765Z" level=info msg="TearDown network for sandbox \"2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf\" successfully" May 8 00:48:51.496743 containerd[1468]: time="2025-05-08T00:48:51.496400885Z" level=info msg="StopPodSandbox for \"2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf\" returns successfully" May 8 00:48:51.496867 containerd[1468]: time="2025-05-08T00:48:51.496847742Z" level=info msg="RemovePodSandbox for \"2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf\"" May 8 00:48:51.496898 containerd[1468]: time="2025-05-08T00:48:51.496875616Z" level=info msg="Forcibly stopping sandbox \"2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf\"" May 8 00:48:51.552832 containerd[1468]: 2025-05-08 00:48:51.526 [WARNING][5414] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6svb9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e1a02ecd-8139-4fc8-add6-59265c14dd8e", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"492ec094a165bf40a7b18879dfa53ecfc01d5f88ad6270efbac0caa87063f5f1", Pod:"csi-node-driver-6svb9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali80e86734f90", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:48:51.552832 containerd[1468]: 2025-05-08 00:48:51.526 [INFO][5414] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" May 8 00:48:51.552832 containerd[1468]: 2025-05-08 00:48:51.526 [INFO][5414] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" iface="eth0" netns="" May 8 00:48:51.552832 containerd[1468]: 2025-05-08 00:48:51.526 [INFO][5414] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" May 8 00:48:51.552832 containerd[1468]: 2025-05-08 00:48:51.526 [INFO][5414] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" May 8 00:48:51.552832 containerd[1468]: 2025-05-08 00:48:51.543 [INFO][5422] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" HandleID="k8s-pod-network.2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" Workload="localhost-k8s-csi--node--driver--6svb9-eth0" May 8 00:48:51.552832 containerd[1468]: 2025-05-08 00:48:51.543 [INFO][5422] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:48:51.552832 containerd[1468]: 2025-05-08 00:48:51.543 [INFO][5422] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:48:51.552832 containerd[1468]: 2025-05-08 00:48:51.547 [WARNING][5422] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" HandleID="k8s-pod-network.2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" Workload="localhost-k8s-csi--node--driver--6svb9-eth0" May 8 00:48:51.552832 containerd[1468]: 2025-05-08 00:48:51.547 [INFO][5422] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" HandleID="k8s-pod-network.2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" Workload="localhost-k8s-csi--node--driver--6svb9-eth0" May 8 00:48:51.552832 containerd[1468]: 2025-05-08 00:48:51.548 [INFO][5422] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:48:51.552832 containerd[1468]: 2025-05-08 00:48:51.550 [INFO][5414] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf" May 8 00:48:51.557571 containerd[1468]: time="2025-05-08T00:48:51.552872767Z" level=info msg="TearDown network for sandbox \"2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf\" successfully" May 8 00:48:51.576116 containerd[1468]: time="2025-05-08T00:48:51.576070724Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:48:51.576116 containerd[1468]: time="2025-05-08T00:48:51.576126190Z" level=info msg="RemovePodSandbox \"2e6a5c4527f045c758d63ee02bc60dad6bb1d51c65e86571948199e80827fdbf\" returns successfully" May 8 00:48:51.576723 containerd[1468]: time="2025-05-08T00:48:51.576674312Z" level=info msg="StopPodSandbox for \"f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765\"" May 8 00:48:51.659424 containerd[1468]: 2025-05-08 00:48:51.629 [WARNING][5444] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6b9f64494b--ffr58-eth0", GenerateName:"calico-kube-controllers-6b9f64494b-", Namespace:"calico-system", SelfLink:"", UID:"8a7556ed-ec9c-47f1-a41d-af868f71d780", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b9f64494b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1cbc31196e2137d9b5d139eb02df8fee8aa4587205eb1caeea612d4cc3b09508", Pod:"calico-kube-controllers-6b9f64494b-ffr58", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid3219577cfa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:48:51.659424 containerd[1468]: 2025-05-08 00:48:51.629 [INFO][5444] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" May 8 00:48:51.659424 containerd[1468]: 2025-05-08 00:48:51.629 [INFO][5444] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" iface="eth0" netns="" May 8 00:48:51.659424 containerd[1468]: 2025-05-08 00:48:51.629 [INFO][5444] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" May 8 00:48:51.659424 containerd[1468]: 2025-05-08 00:48:51.629 [INFO][5444] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" May 8 00:48:51.659424 containerd[1468]: 2025-05-08 00:48:51.648 [INFO][5452] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" HandleID="k8s-pod-network.f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" Workload="localhost-k8s-calico--kube--controllers--6b9f64494b--ffr58-eth0" May 8 00:48:51.659424 containerd[1468]: 2025-05-08 00:48:51.649 [INFO][5452] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:48:51.659424 containerd[1468]: 2025-05-08 00:48:51.649 [INFO][5452] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:48:51.659424 containerd[1468]: 2025-05-08 00:48:51.653 [WARNING][5452] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" HandleID="k8s-pod-network.f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" Workload="localhost-k8s-calico--kube--controllers--6b9f64494b--ffr58-eth0" May 8 00:48:51.659424 containerd[1468]: 2025-05-08 00:48:51.653 [INFO][5452] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" HandleID="k8s-pod-network.f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" Workload="localhost-k8s-calico--kube--controllers--6b9f64494b--ffr58-eth0" May 8 00:48:51.659424 containerd[1468]: 2025-05-08 00:48:51.654 [INFO][5452] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:48:51.659424 containerd[1468]: 2025-05-08 00:48:51.656 [INFO][5444] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" May 8 00:48:51.659935 containerd[1468]: time="2025-05-08T00:48:51.659467120Z" level=info msg="TearDown network for sandbox \"f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765\" successfully" May 8 00:48:51.659935 containerd[1468]: time="2025-05-08T00:48:51.659494622Z" level=info msg="StopPodSandbox for \"f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765\" returns successfully" May 8 00:48:51.660187 containerd[1468]: time="2025-05-08T00:48:51.660150080Z" level=info msg="RemovePodSandbox for \"f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765\"" May 8 00:48:51.660249 containerd[1468]: time="2025-05-08T00:48:51.660203503Z" level=info msg="Forcibly stopping sandbox \"f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765\"" May 8 00:48:51.720546 containerd[1468]: 2025-05-08 00:48:51.691 [WARNING][5474] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6b9f64494b--ffr58-eth0", GenerateName:"calico-kube-controllers-6b9f64494b-", Namespace:"calico-system", SelfLink:"", UID:"8a7556ed-ec9c-47f1-a41d-af868f71d780", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b9f64494b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1cbc31196e2137d9b5d139eb02df8fee8aa4587205eb1caeea612d4cc3b09508", Pod:"calico-kube-controllers-6b9f64494b-ffr58", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid3219577cfa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:48:51.720546 containerd[1468]: 2025-05-08 00:48:51.691 [INFO][5474] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" May 8 00:48:51.720546 containerd[1468]: 2025-05-08 00:48:51.691 [INFO][5474] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" iface="eth0" netns="" May 8 00:48:51.720546 containerd[1468]: 2025-05-08 00:48:51.691 [INFO][5474] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" May 8 00:48:51.720546 containerd[1468]: 2025-05-08 00:48:51.691 [INFO][5474] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" May 8 00:48:51.720546 containerd[1468]: 2025-05-08 00:48:51.710 [INFO][5483] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" HandleID="k8s-pod-network.f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" Workload="localhost-k8s-calico--kube--controllers--6b9f64494b--ffr58-eth0" May 8 00:48:51.720546 containerd[1468]: 2025-05-08 00:48:51.710 [INFO][5483] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:48:51.720546 containerd[1468]: 2025-05-08 00:48:51.710 [INFO][5483] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:48:51.720546 containerd[1468]: 2025-05-08 00:48:51.715 [WARNING][5483] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" HandleID="k8s-pod-network.f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" Workload="localhost-k8s-calico--kube--controllers--6b9f64494b--ffr58-eth0" May 8 00:48:51.720546 containerd[1468]: 2025-05-08 00:48:51.715 [INFO][5483] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" HandleID="k8s-pod-network.f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" Workload="localhost-k8s-calico--kube--controllers--6b9f64494b--ffr58-eth0" May 8 00:48:51.720546 containerd[1468]: 2025-05-08 00:48:51.716 [INFO][5483] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:48:51.720546 containerd[1468]: 2025-05-08 00:48:51.718 [INFO][5474] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765" May 8 00:48:51.720546 containerd[1468]: time="2025-05-08T00:48:51.720486193Z" level=info msg="TearDown network for sandbox \"f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765\" successfully" May 8 00:48:51.763011 containerd[1468]: time="2025-05-08T00:48:51.762963882Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:48:51.763075 containerd[1468]: time="2025-05-08T00:48:51.763021031Z" level=info msg="RemovePodSandbox \"f20332ebae7c4ff648e35f1e2765f7b45669729030da2a83c0cb33a58048b765\" returns successfully" May 8 00:48:55.621427 systemd[1]: Started sshd@16-10.0.0.152:22-10.0.0.1:50918.service - OpenSSH per-connection server daemon (10.0.0.1:50918). May 8 00:48:55.665758 sshd[5493]: Accepted publickey for core from 10.0.0.1 port 50918 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:48:55.667471 sshd[5493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:48:55.671471 systemd-logind[1450]: New session 17 of user core. May 8 00:48:55.679661 systemd[1]: Started session-17.scope - Session 17 of User core. May 8 00:48:55.794108 sshd[5493]: pam_unix(sshd:session): session closed for user core May 8 00:48:55.799106 systemd[1]: sshd@16-10.0.0.152:22-10.0.0.1:50918.service: Deactivated successfully. May 8 00:48:55.801044 systemd[1]: session-17.scope: Deactivated successfully. May 8 00:48:55.801651 systemd-logind[1450]: Session 17 logged out. Waiting for processes to exit. May 8 00:48:55.802478 systemd-logind[1450]: Removed session 17. May 8 00:48:56.408972 kubelet[2520]: E0508 00:48:56.408900 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:49:00.811825 systemd[1]: Started sshd@17-10.0.0.152:22-10.0.0.1:51486.service - OpenSSH per-connection server daemon (10.0.0.1:51486). May 8 00:49:00.862223 sshd[5572]: Accepted publickey for core from 10.0.0.1 port 51486 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:49:00.863985 sshd[5572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:49:00.868441 systemd-logind[1450]: New session 18 of user core. May 8 00:49:00.881658 systemd[1]: Started session-18.scope - Session 18 of User core. 
May 8 00:49:01.008167 sshd[5572]: pam_unix(sshd:session): session closed for user core May 8 00:49:01.014941 systemd[1]: sshd@17-10.0.0.152:22-10.0.0.1:51486.service: Deactivated successfully. May 8 00:49:01.016842 systemd[1]: session-18.scope: Deactivated successfully. May 8 00:49:01.017471 systemd-logind[1450]: Session 18 logged out. Waiting for processes to exit. May 8 00:49:01.018214 systemd-logind[1450]: Removed session 18. May 8 00:49:06.022354 systemd[1]: Started sshd@18-10.0.0.152:22-10.0.0.1:51488.service - OpenSSH per-connection server daemon (10.0.0.1:51488). May 8 00:49:06.061200 sshd[5588]: Accepted publickey for core from 10.0.0.1 port 51488 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:49:06.062975 sshd[5588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:49:06.066915 systemd-logind[1450]: New session 19 of user core. May 8 00:49:06.077649 systemd[1]: Started session-19.scope - Session 19 of User core. May 8 00:49:06.181268 sshd[5588]: pam_unix(sshd:session): session closed for user core May 8 00:49:06.189145 systemd[1]: sshd@18-10.0.0.152:22-10.0.0.1:51488.service: Deactivated successfully. May 8 00:49:06.190799 systemd[1]: session-19.scope: Deactivated successfully. May 8 00:49:06.192283 systemd-logind[1450]: Session 19 logged out. Waiting for processes to exit. May 8 00:49:06.193492 systemd[1]: Started sshd@19-10.0.0.152:22-10.0.0.1:51494.service - OpenSSH per-connection server daemon (10.0.0.1:51494). May 8 00:49:06.194447 systemd-logind[1450]: Removed session 19. May 8 00:49:06.238736 sshd[5602]: Accepted publickey for core from 10.0.0.1 port 51494 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:49:06.240314 sshd[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:49:06.243894 systemd-logind[1450]: New session 20 of user core. May 8 00:49:06.250680 systemd[1]: Started session-20.scope - Session 20 of User core. May 8 00:49:06.514931 sshd[5602]: pam_unix(sshd:session): session closed for user core May 8 00:49:06.526418 systemd[1]: sshd@19-10.0.0.152:22-10.0.0.1:51494.service: Deactivated successfully. May 8 00:49:06.528092 systemd[1]: session-20.scope: Deactivated successfully. May 8 00:49:06.529587 systemd-logind[1450]: Session 20 logged out. Waiting for processes to exit. May 8 00:49:06.535764 systemd[1]: Started sshd@20-10.0.0.152:22-10.0.0.1:51502.service - OpenSSH per-connection server daemon (10.0.0.1:51502). May 8 00:49:06.537139 systemd-logind[1450]: Removed session 20. May 8 00:49:06.572533 sshd[5614]: Accepted publickey for core from 10.0.0.1 port 51502 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:49:06.574283 sshd[5614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:49:06.578707 systemd-logind[1450]: New session 21 of user core. May 8 00:49:06.590664 systemd[1]: Started session-21.scope - Session 21 of User core. May 8 00:49:08.079697 sshd[5614]: pam_unix(sshd:session): session closed for user core May 8 00:49:08.090274 systemd[1]: sshd@20-10.0.0.152:22-10.0.0.1:51502.service: Deactivated successfully. May 8 00:49:08.092870 systemd[1]: session-21.scope: Deactivated successfully. May 8 00:49:08.095960 systemd-logind[1450]: Session 21 logged out. Waiting for processes to exit. May 8 00:49:08.101989 systemd[1]: Started sshd@21-10.0.0.152:22-10.0.0.1:38298.service - OpenSSH per-connection server daemon (10.0.0.1:38298). 
May 8 00:49:08.103352 systemd-logind[1450]: Removed session 21. May 8 00:49:08.155238 sshd[5646]: Accepted publickey for core from 10.0.0.1 port 38298 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:49:08.157019 sshd[5646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:49:08.162063 systemd-logind[1450]: New session 22 of user core. May 8 00:49:08.168701 systemd[1]: Started session-22.scope - Session 22 of User core. May 8 00:49:08.411704 sshd[5646]: pam_unix(sshd:session): session closed for user core May 8 00:49:08.421599 systemd[1]: sshd@21-10.0.0.152:22-10.0.0.1:38298.service: Deactivated successfully. May 8 00:49:08.423437 systemd[1]: session-22.scope: Deactivated successfully. May 8 00:49:08.425052 systemd-logind[1450]: Session 22 logged out. Waiting for processes to exit. May 8 00:49:08.437816 systemd[1]: Started sshd@22-10.0.0.152:22-10.0.0.1:38300.service - OpenSSH per-connection server daemon (10.0.0.1:38300). May 8 00:49:08.439030 systemd-logind[1450]: Removed session 22. May 8 00:49:08.474069 sshd[5658]: Accepted publickey for core from 10.0.0.1 port 38300 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:49:08.475888 sshd[5658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:49:08.480830 systemd-logind[1450]: New session 23 of user core. May 8 00:49:08.490697 systemd[1]: Started session-23.scope - Session 23 of User core. May 8 00:49:08.601309 sshd[5658]: pam_unix(sshd:session): session closed for user core May 8 00:49:08.605312 systemd[1]: sshd@22-10.0.0.152:22-10.0.0.1:38300.service: Deactivated successfully. May 8 00:49:08.607495 systemd[1]: session-23.scope: Deactivated successfully. May 8 00:49:08.608167 systemd-logind[1450]: Session 23 logged out. Waiting for processes to exit. May 8 00:49:08.609020 systemd-logind[1450]: Removed session 23. May 8 00:49:13.613331 systemd[1]: Started sshd@23-10.0.0.152:22-10.0.0.1:38306.service - OpenSSH per-connection server daemon (10.0.0.1:38306). May 8 00:49:13.651303 sshd[5677]: Accepted publickey for core from 10.0.0.1 port 38306 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:49:13.652807 sshd[5677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:49:13.656348 systemd-logind[1450]: New session 24 of user core. May 8 00:49:13.665637 systemd[1]: Started session-24.scope - Session 24 of User core. May 8 00:49:13.769918 sshd[5677]: pam_unix(sshd:session): session closed for user core May 8 00:49:13.773319 systemd[1]: sshd@23-10.0.0.152:22-10.0.0.1:38306.service: Deactivated successfully. May 8 00:49:13.775055 systemd[1]: session-24.scope: Deactivated successfully. May 8 00:49:13.775682 systemd-logind[1450]: Session 24 logged out. Waiting for processes to exit. May 8 00:49:13.776539 systemd-logind[1450]: Removed session 24. May 8 00:49:14.847146 kubelet[2520]: E0508 00:49:14.847115 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:49:15.846716 kubelet[2520]: E0508 00:49:15.846690 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:49:18.781239 systemd[1]: Started sshd@24-10.0.0.152:22-10.0.0.1:44226.service - OpenSSH per-connection server daemon (10.0.0.1:44226). 
May 8 00:49:18.819673 sshd[5693]: Accepted publickey for core from 10.0.0.1 port 44226 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:49:18.821169 sshd[5693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:49:18.825168 systemd-logind[1450]: New session 25 of user core. May 8 00:49:18.831665 systemd[1]: Started session-25.scope - Session 25 of User core. May 8 00:49:18.937863 sshd[5693]: pam_unix(sshd:session): session closed for user core May 8 00:49:18.942017 systemd[1]: sshd@24-10.0.0.152:22-10.0.0.1:44226.service: Deactivated successfully. May 8 00:49:18.943799 systemd[1]: session-25.scope: Deactivated successfully. May 8 00:49:18.944513 systemd-logind[1450]: Session 25 logged out. Waiting for processes to exit. May 8 00:49:18.945549 systemd-logind[1450]: Removed session 25. May 8 00:49:19.847217 kubelet[2520]: E0508 00:49:19.847177 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:49:23.953229 systemd[1]: Started sshd@25-10.0.0.152:22-10.0.0.1:44234.service - OpenSSH per-connection server daemon (10.0.0.1:44234). May 8 00:49:23.996079 sshd[5707]: Accepted publickey for core from 10.0.0.1 port 44234 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:49:23.997620 sshd[5707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:49:24.001660 systemd-logind[1450]: New session 26 of user core. May 8 00:49:24.009670 systemd[1]: Started session-26.scope - Session 26 of User core. May 8 00:49:24.113013 sshd[5707]: pam_unix(sshd:session): session closed for user core May 8 00:49:24.116466 systemd[1]: sshd@25-10.0.0.152:22-10.0.0.1:44234.service: Deactivated successfully. May 8 00:49:24.118439 systemd[1]: session-26.scope: Deactivated successfully. May 8 00:49:24.119029 systemd-logind[1450]: Session 26 logged out. Waiting for processes to exit. May 8 00:49:24.119840 systemd-logind[1450]: Removed session 26. May 8 00:49:24.847803 kubelet[2520]: E0508 00:49:24.847279 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:49:29.126515 systemd[1]: Started sshd@26-10.0.0.152:22-10.0.0.1:43974.service - OpenSSH per-connection server daemon (10.0.0.1:43974). May 8 00:49:29.232007 sshd[5765]: Accepted publickey for core from 10.0.0.1 port 43974 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:49:29.233861 sshd[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:49:29.238728 systemd-logind[1450]: New session 27 of user core. May 8 00:49:29.244651 systemd[1]: Started session-27.scope - Session 27 of User core. May 8 00:49:29.396552 sshd[5765]: pam_unix(sshd:session): session closed for user core May 8 00:49:29.400368 systemd[1]: sshd@26-10.0.0.152:22-10.0.0.1:43974.service: Deactivated successfully. May 8 00:49:29.402290 systemd[1]: session-27.scope: Deactivated successfully. May 8 00:49:29.403041 systemd-logind[1450]: Session 27 logged out. Waiting for processes to exit. May 8 00:49:29.404009 systemd-logind[1450]: Removed session 27.