May 14 23:52:17.998570 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed May 14 22:19:37 -00 2025
May 14 23:52:17.998593 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=23d816f2beca10c7a75ccdd203c170f89f29125f08ff6f3fdf90f8fa61b342cc
May 14 23:52:17.998605 kernel: BIOS-provided physical RAM map:
May 14 23:52:17.998612 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 14 23:52:17.998618 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 14 23:52:17.998624 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 14 23:52:17.998632 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 14 23:52:17.998639 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 14 23:52:17.998645 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 14 23:52:17.998654 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 14 23:52:17.998661 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 14 23:52:17.998667 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 14 23:52:17.998673 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 14 23:52:17.998680 kernel: NX (Execute Disable) protection: active
May 14 23:52:17.998688 kernel: APIC: Static calls initialized
May 14 23:52:17.998698 kernel: SMBIOS 2.8 present.
May 14 23:52:17.998705 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 14 23:52:17.998712 kernel: Hypervisor detected: KVM
May 14 23:52:17.998719 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 14 23:52:17.998726 kernel: kvm-clock: using sched offset of 2336294290 cycles
May 14 23:52:17.998733 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 14 23:52:17.998740 kernel: tsc: Detected 2794.748 MHz processor
May 14 23:52:17.998748 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 14 23:52:17.998755 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 14 23:52:17.998764 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 14 23:52:17.998775 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 14 23:52:17.998782 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 14 23:52:17.998789 kernel: Using GB pages for direct mapping
May 14 23:52:17.998796 kernel: ACPI: Early table checksum verification disabled
May 14 23:52:17.998804 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 14 23:52:17.998811 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:52:17.998818 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:52:17.998825 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:52:17.998832 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 14 23:52:17.998853 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:52:17.998862 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:52:17.998871 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:52:17.998879 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:52:17.998887 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
May 14 23:52:17.998894 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
May 14 23:52:17.998905 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 14 23:52:17.998914 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
May 14 23:52:17.998922 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
May 14 23:52:17.998929 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
May 14 23:52:17.998937 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
May 14 23:52:17.998944 kernel: No NUMA configuration found
May 14 23:52:17.998951 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 14 23:52:17.998959 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
May 14 23:52:17.998968 kernel: Zone ranges:
May 14 23:52:17.998976 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 14 23:52:17.998983 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
May 14 23:52:17.998991 kernel: Normal empty
May 14 23:52:17.998998 kernel: Movable zone start for each node
May 14 23:52:17.999005 kernel: Early memory node ranges
May 14 23:52:17.999013 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 14 23:52:17.999020 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 14 23:52:17.999027 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 14 23:52:17.999037 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 14 23:52:17.999048 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 14 23:52:17.999056 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 14 23:52:17.999063 kernel: ACPI: PM-Timer IO Port: 0x608
May 14 23:52:17.999070 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 14 23:52:17.999078 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 14 23:52:17.999085 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 14 23:52:17.999092 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 14 23:52:17.999100 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 14 23:52:17.999110 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 14 23:52:17.999117 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 14 23:52:17.999124 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 14 23:52:17.999132 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 14 23:52:17.999139 kernel: TSC deadline timer available
May 14 23:52:17.999146 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 14 23:52:17.999154 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 14 23:52:17.999161 kernel: kvm-guest: KVM setup pv remote TLB flush
May 14 23:52:17.999168 kernel: kvm-guest: setup PV sched yield
May 14 23:52:17.999176 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 14 23:52:17.999185 kernel: Booting paravirtualized kernel on KVM
May 14 23:52:17.999193 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 14 23:52:17.999201 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 14 23:52:17.999209 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 14 23:52:17.999216 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 14 23:52:17.999223 kernel: pcpu-alloc: [0] 0 1 2 3
May 14 23:52:17.999230 kernel: kvm-guest: PV spinlocks enabled
May 14 23:52:17.999238 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 14 23:52:17.999248 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=23d816f2beca10c7a75ccdd203c170f89f29125f08ff6f3fdf90f8fa61b342cc
May 14 23:52:17.999260 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 23:52:17.999269 kernel: random: crng init done
May 14 23:52:17.999279 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 14 23:52:17.999288 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 23:52:17.999296 kernel: Fallback order for Node 0: 0
May 14 23:52:17.999304 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
May 14 23:52:17.999311 kernel: Policy zone: DMA32
May 14 23:52:17.999318 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 23:52:17.999329 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 138948K reserved, 0K cma-reserved)
May 14 23:52:17.999336 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 14 23:52:17.999344 kernel: ftrace: allocating 37918 entries in 149 pages
May 14 23:52:17.999351 kernel: ftrace: allocated 149 pages with 4 groups
May 14 23:52:17.999358 kernel: Dynamic Preempt: voluntary
May 14 23:52:17.999366 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 23:52:17.999378 kernel: rcu: RCU event tracing is enabled.
May 14 23:52:17.999386 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 14 23:52:17.999394 kernel: Trampoline variant of Tasks RCU enabled.
May 14 23:52:17.999403 kernel: Rude variant of Tasks RCU enabled.
May 14 23:52:17.999411 kernel: Tracing variant of Tasks RCU enabled.
May 14 23:52:17.999431 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 23:52:17.999438 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 14 23:52:17.999446 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 14 23:52:17.999453 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 14 23:52:17.999460 kernel: Console: colour VGA+ 80x25
May 14 23:52:17.999468 kernel: printk: console [ttyS0] enabled
May 14 23:52:17.999475 kernel: ACPI: Core revision 20230628
May 14 23:52:17.999485 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 14 23:52:17.999493 kernel: APIC: Switch to symmetric I/O mode setup
May 14 23:52:17.999500 kernel: x2apic enabled
May 14 23:52:17.999507 kernel: APIC: Switched APIC routing to: physical x2apic
May 14 23:52:17.999515 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 14 23:52:17.999522 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 14 23:52:17.999530 kernel: kvm-guest: setup PV IPIs
May 14 23:52:17.999547 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 14 23:52:17.999555 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 14 23:52:17.999563 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 14 23:52:17.999570 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 14 23:52:17.999578 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 14 23:52:17.999588 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 14 23:52:17.999596 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 14 23:52:17.999603 kernel: Spectre V2 : Mitigation: Retpolines
May 14 23:52:17.999611 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 14 23:52:17.999621 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 14 23:52:17.999629 kernel: RETBleed: Mitigation: untrained return thunk
May 14 23:52:17.999637 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 14 23:52:17.999644 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 14 23:52:17.999652 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 14 23:52:17.999660 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 14 23:52:17.999668 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 14 23:52:17.999676 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 14 23:52:17.999684 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 14 23:52:17.999694 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 14 23:52:17.999701 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 14 23:52:17.999709 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 14 23:52:17.999717 kernel: Freeing SMP alternatives memory: 32K
May 14 23:52:17.999725 kernel: pid_max: default: 32768 minimum: 301
May 14 23:52:17.999735 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 14 23:52:17.999743 kernel: landlock: Up and running.
May 14 23:52:17.999750 kernel: SELinux: Initializing.
May 14 23:52:17.999758 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 23:52:17.999768 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 23:52:17.999776 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 14 23:52:17.999784 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 23:52:17.999792 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 23:52:17.999800 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 23:52:17.999807 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 14 23:52:17.999815 kernel: ... version: 0
May 14 23:52:17.999827 kernel: ... bit width: 48
May 14 23:52:17.999858 kernel: ... generic registers: 6
May 14 23:52:17.999869 kernel: ... value mask: 0000ffffffffffff
May 14 23:52:17.999884 kernel: ... max period: 00007fffffffffff
May 14 23:52:17.999898 kernel: ... fixed-purpose events: 0
May 14 23:52:17.999912 kernel: ... event mask: 000000000000003f
May 14 23:52:17.999922 kernel: signal: max sigframe size: 1776
May 14 23:52:17.999936 kernel: rcu: Hierarchical SRCU implementation.
May 14 23:52:17.999950 kernel: rcu: Max phase no-delay instances is 400.
May 14 23:52:17.999964 kernel: smp: Bringing up secondary CPUs ...
May 14 23:52:17.999978 kernel: smpboot: x86: Booting SMP configuration:
May 14 23:52:17.999988 kernel: .... node #0, CPUs: #1 #2 #3
May 14 23:52:17.999996 kernel: smp: Brought up 1 node, 4 CPUs
May 14 23:52:18.000003 kernel: smpboot: Max logical packages: 1
May 14 23:52:18.000011 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 14 23:52:18.000020 kernel: devtmpfs: initialized
May 14 23:52:18.000029 kernel: x86/mm: Memory block size: 128MB
May 14 23:52:18.000037 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 23:52:18.000044 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 14 23:52:18.000052 kernel: pinctrl core: initialized pinctrl subsystem
May 14 23:52:18.000062 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 23:52:18.000070 kernel: audit: initializing netlink subsys (disabled)
May 14 23:52:18.000078 kernel: audit: type=2000 audit(1747266736.878:1): state=initialized audit_enabled=0 res=1
May 14 23:52:18.000085 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 23:52:18.000093 kernel: thermal_sys: Registered thermal governor 'user_space'
May 14 23:52:18.000101 kernel: cpuidle: using governor menu
May 14 23:52:18.000109 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 23:52:18.000116 kernel: dca service started, version 1.12.1
May 14 23:52:18.000124 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 14 23:52:18.000134 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 14 23:52:18.000142 kernel: PCI: Using configuration type 1 for base access
May 14 23:52:18.000150 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 14 23:52:18.000157 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 14 23:52:18.000165 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 14 23:52:18.000173 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 23:52:18.000181 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 14 23:52:18.000188 kernel: ACPI: Added _OSI(Module Device)
May 14 23:52:18.000196 kernel: ACPI: Added _OSI(Processor Device)
May 14 23:52:18.000206 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 23:52:18.000214 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 23:52:18.000221 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 23:52:18.000229 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 14 23:52:18.000237 kernel: ACPI: Interpreter enabled
May 14 23:52:18.000244 kernel: ACPI: PM: (supports S0 S3 S5)
May 14 23:52:18.000252 kernel: ACPI: Using IOAPIC for interrupt routing
May 14 23:52:18.000260 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 14 23:52:18.000267 kernel: PCI: Using E820 reservations for host bridge windows
May 14 23:52:18.000277 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 14 23:52:18.000285 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 14 23:52:18.000539 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 14 23:52:18.000674 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 14 23:52:18.000798 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 14 23:52:18.000808 kernel: PCI host bridge to bus 0000:00
May 14 23:52:18.000945 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 14 23:52:18.001079 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 14 23:52:18.001204 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 14 23:52:18.001365 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 14 23:52:18.001517 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 14 23:52:18.001634 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 14 23:52:18.001747 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 14 23:52:18.001908 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 14 23:52:18.002044 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 14 23:52:18.002169 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
May 14 23:52:18.002292 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
May 14 23:52:18.002415 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
May 14 23:52:18.002555 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 14 23:52:18.002740 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 14 23:52:18.002890 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
May 14 23:52:18.003014 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
May 14 23:52:18.003137 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
May 14 23:52:18.003271 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 14 23:52:18.003398 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
May 14 23:52:18.003569 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
May 14 23:52:18.003716 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
May 14 23:52:18.003869 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 14 23:52:18.003998 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
May 14 23:52:18.004122 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
May 14 23:52:18.004247 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
May 14 23:52:18.004371 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
May 14 23:52:18.004527 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 14 23:52:18.004653 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 14 23:52:18.004791 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 14 23:52:18.004928 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
May 14 23:52:18.005055 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
May 14 23:52:18.005187 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 14 23:52:18.005337 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 14 23:52:18.005354 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 14 23:52:18.005371 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 14 23:52:18.005381 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 14 23:52:18.005392 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 14 23:52:18.005404 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 14 23:52:18.005441 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 14 23:52:18.005453 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 14 23:52:18.005464 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 14 23:52:18.005476 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 14 23:52:18.005488 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 14 23:52:18.005504 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 14 23:52:18.005516 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 14 23:52:18.005527 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 14 23:52:18.005539 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 14 23:52:18.005550 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 14 23:52:18.005562 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 14 23:52:18.005574 kernel: iommu: Default domain type: Translated
May 14 23:52:18.005585 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 14 23:52:18.005596 kernel: PCI: Using ACPI for IRQ routing
May 14 23:52:18.005611 kernel: PCI: pci_cache_line_size set to 64 bytes
May 14 23:52:18.005623 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 14 23:52:18.005635 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 14 23:52:18.005791 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 14 23:52:18.005958 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 14 23:52:18.006113 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 14 23:52:18.006128 kernel: vgaarb: loaded
May 14 23:52:18.006138 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 14 23:52:18.006149 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 14 23:52:18.006165 kernel: clocksource: Switched to clocksource kvm-clock
May 14 23:52:18.006177 kernel: VFS: Disk quotas dquot_6.6.0
May 14 23:52:18.006189 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 23:52:18.006200 kernel: pnp: PnP ACPI init
May 14 23:52:18.006375 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 14 23:52:18.006391 kernel: pnp: PnP ACPI: found 6 devices
May 14 23:52:18.006404 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 14 23:52:18.006430 kernel: NET: Registered PF_INET protocol family
May 14 23:52:18.006446 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 14 23:52:18.006457 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 14 23:52:18.006467 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 23:52:18.006478 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 23:52:18.006489 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 14 23:52:18.006500 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 14 23:52:18.006512 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 23:52:18.006524 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 23:52:18.006536 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 23:52:18.006552 kernel: NET: Registered PF_XDP protocol family
May 14 23:52:18.006697 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 14 23:52:18.006840 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 14 23:52:18.006995 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 14 23:52:18.007139 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 14 23:52:18.007282 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 14 23:52:18.007577 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 14 23:52:18.007596 kernel: PCI: CLS 0 bytes, default 64
May 14 23:52:18.007614 kernel: Initialise system trusted keyrings
May 14 23:52:18.007625 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 14 23:52:18.007636 kernel: Key type asymmetric registered
May 14 23:52:18.007647 kernel: Asymmetric key parser 'x509' registered
May 14 23:52:18.007659 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 14 23:52:18.007671 kernel: io scheduler mq-deadline registered
May 14 23:52:18.007683 kernel: io scheduler kyber registered
May 14 23:52:18.007694 kernel: io scheduler bfq registered
May 14 23:52:18.007704 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 14 23:52:18.007720 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 14 23:52:18.007731 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 14 23:52:18.007742 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 14 23:52:18.007754 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 23:52:18.007766 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 14 23:52:18.007777 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 14 23:52:18.007789 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 14 23:52:18.007801 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 14 23:52:18.007982 kernel: rtc_cmos 00:04: RTC can wake from S4
May 14 23:52:18.008135 kernel: rtc_cmos 00:04: registered as rtc0
May 14 23:52:18.008151 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
May 14 23:52:18.008292 kernel: rtc_cmos 00:04: setting system clock to 2025-05-14T23:52:17 UTC (1747266737)
May 14 23:52:18.008451 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 14 23:52:18.008467 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 14 23:52:18.008478 kernel: NET: Registered PF_INET6 protocol family
May 14 23:52:18.008488 kernel: Segment Routing with IPv6
May 14 23:52:18.008497 kernel: In-situ OAM (IOAM) with IPv6
May 14 23:52:18.008510 kernel: NET: Registered PF_PACKET protocol family
May 14 23:52:18.008521 kernel: Key type dns_resolver registered
May 14 23:52:18.008533 kernel: IPI shorthand broadcast: enabled
May 14 23:52:18.008544 kernel: sched_clock: Marking stable (711005380, 109495773)->(853256987, -32755834)
May 14 23:52:18.008556 kernel: registered taskstats version 1
May 14 23:52:18.008568 kernel: Loading compiled-in X.509 certificates
May 14 23:52:18.008579 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: e21d6dc0691a7e1e8bef90d9217bc8c09d6860f3'
May 14 23:52:18.008589 kernel: Key type .fscrypt registered
May 14 23:52:18.008600 kernel: Key type fscrypt-provisioning registered
May 14 23:52:18.008615 kernel: ima: No TPM chip found, activating TPM-bypass!
May 14 23:52:18.008627 kernel: ima: Allocated hash algorithm: sha1
May 14 23:52:18.008639 kernel: ima: No architecture policies found
May 14 23:52:18.008651 kernel: clk: Disabling unused clocks
May 14 23:52:18.008662 kernel: Freeing unused kernel image (initmem) memory: 43484K
May 14 23:52:18.008672 kernel: Write protecting the kernel read-only data: 38912k
May 14 23:52:18.008681 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K
May 14 23:52:18.008692 kernel: Run /init as init process
May 14 23:52:18.008704 kernel: with arguments:
May 14 23:52:18.008719 kernel: /init
May 14 23:52:18.008731 kernel: with environment:
May 14 23:52:18.008742 kernel: HOME=/
May 14 23:52:18.008752 kernel: TERM=linux
May 14 23:52:18.008761 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 14 23:52:18.008771 systemd[1]: Successfully made /usr/ read-only.
May 14 23:52:18.008786 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 23:52:18.008800 systemd[1]: Detected virtualization kvm.
May 14 23:52:18.008816 systemd[1]: Detected architecture x86-64.
May 14 23:52:18.008828 systemd[1]: Running in initrd.
May 14 23:52:18.008840 systemd[1]: No hostname configured, using default hostname.
May 14 23:52:18.008858 systemd[1]: Hostname set to .
May 14 23:52:18.008867 systemd[1]: Initializing machine ID from VM UUID.
May 14 23:52:18.008878 systemd[1]: Queued start job for default target initrd.target.
May 14 23:52:18.008891 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 23:52:18.008904 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 23:52:18.008921 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 14 23:52:18.008948 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 23:52:18.008963 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 14 23:52:18.008977 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 14 23:52:18.008995 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 14 23:52:18.009008 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 14 23:52:18.009021 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 23:52:18.009033 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 23:52:18.009044 systemd[1]: Reached target paths.target - Path Units.
May 14 23:52:18.009056 systemd[1]: Reached target slices.target - Slice Units.
May 14 23:52:18.009068 systemd[1]: Reached target swap.target - Swaps.
May 14 23:52:18.009084 systemd[1]: Reached target timers.target - Timer Units.
May 14 23:52:18.009098 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 14 23:52:18.009113 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 23:52:18.009125 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 14 23:52:18.009138 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 14 23:52:18.009151 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 23:52:18.009164 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 23:52:18.009177 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 23:52:18.009189 systemd[1]: Reached target sockets.target - Socket Units.
May 14 23:52:18.009202 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 14 23:52:18.009219 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 23:52:18.009231 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 14 23:52:18.009244 systemd[1]: Starting systemd-fsck-usr.service...
May 14 23:52:18.009256 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 23:52:18.009269 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 23:52:18.009282 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:52:18.009296 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 14 23:52:18.009309 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 23:52:18.009324 systemd[1]: Finished systemd-fsck-usr.service.
May 14 23:52:18.009367 systemd-journald[194]: Collecting audit messages is disabled.
May 14 23:52:18.009400 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 23:52:18.009478 systemd-journald[194]: Journal started
May 14 23:52:18.009512 systemd-journald[194]: Runtime Journal (/run/log/journal/24bce47bd9384261826b9f9cb9c9dc2e) is 6M, max 48.4M, 42.3M free.
May 14 23:52:18.001478 systemd-modules-load[195]: Inserted module 'overlay'
May 14 23:52:18.043960 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 23:52:18.043987 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 14 23:52:18.044000 kernel: Bridge firewalling registered
May 14 23:52:18.029117 systemd-modules-load[195]: Inserted module 'br_netfilter'
May 14 23:52:18.056859 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 23:52:18.059293 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:52:18.063025 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 23:52:18.074636 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 23:52:18.079831 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 23:52:18.082678 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 23:52:18.085598 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 23:52:18.095563 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 23:52:18.099050 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 23:52:18.102725 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 23:52:18.112640 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 14 23:52:18.113364 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 23:52:18.118431 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 23:52:18.126162 dracut-cmdline[229]: dracut-dracut-053
May 14 23:52:18.129935 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=23d816f2beca10c7a75ccdd203c170f89f29125f08ff6f3fdf90f8fa61b342cc
May 14 23:52:18.165908 systemd-resolved[236]: Positive Trust Anchors:
May 14 23:52:18.165930 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 23:52:18.165963 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 23:52:18.169293 systemd-resolved[236]: Defaulting to hostname 'linux'.
May 14 23:52:18.170729 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 23:52:18.178012 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 23:52:18.225464 kernel: SCSI subsystem initialized
May 14 23:52:18.235466 kernel: Loading iSCSI transport class v2.0-870.
May 14 23:52:18.247468 kernel: iscsi: registered transport (tcp)
May 14 23:52:18.268455 kernel: iscsi: registered transport (qla4xxx)
May 14 23:52:18.268526 kernel: QLogic iSCSI HBA Driver
May 14 23:52:18.320833 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 14 23:52:18.340701 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 14 23:52:18.369591 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 14 23:52:18.369665 kernel: device-mapper: uevent: version 1.0.3
May 14 23:52:18.370868 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 14 23:52:18.417463 kernel: raid6: avx2x4 gen() 29477 MB/s
May 14 23:52:18.436448 kernel: raid6: avx2x2 gen() 30547 MB/s
May 14 23:52:18.453543 kernel: raid6: avx2x1 gen() 25713 MB/s
May 14 23:52:18.453571 kernel: raid6: using algorithm avx2x2 gen() 30547 MB/s
May 14 23:52:18.471554 kernel: raid6: .... xor() 19875 MB/s, rmw enabled
May 14 23:52:18.471578 kernel: raid6: using avx2x2 recovery algorithm
May 14 23:52:18.502454 kernel: xor: automatically using best checksumming function avx
May 14 23:52:18.660476 kernel: Btrfs loaded, zoned=no, fsverity=no
May 14 23:52:18.674364 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 14 23:52:18.693688 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 23:52:18.710610 systemd-udevd[416]: Using default interface naming scheme 'v255'.
May 14 23:52:18.717121 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 23:52:18.727569 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 14 23:52:18.742756 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
May 14 23:52:18.774868 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 23:52:18.802549 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 23:52:18.868733 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 23:52:18.879646 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 14 23:52:18.892253 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 14 23:52:18.894769 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 23:52:18.897255 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 23:52:18.899648 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 23:52:18.904470 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 14 23:52:18.910000 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 14 23:52:18.908619 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 14 23:52:18.918499 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 14 23:52:18.918537 kernel: GPT:9289727 != 19775487
May 14 23:52:18.918552 kernel: GPT:Alternate GPT header not at the end of the disk.
May 14 23:52:18.918566 kernel: GPT:9289727 != 19775487
May 14 23:52:18.918579 kernel: GPT: Use GNU Parted to correct GPT errors.
May 14 23:52:18.918594 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 23:52:18.924526 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 14 23:52:18.927668 kernel: cryptd: max_cpu_qlen set to 1000
May 14 23:52:18.948299 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 23:52:18.960444 kernel: AVX2 version of gcm_enc/dec engaged.
May 14 23:52:18.960473 kernel: AES CTR mode by8 optimization enabled
May 14 23:52:18.960488 kernel: libata version 3.00 loaded.
May 14 23:52:18.948484 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 23:52:18.964195 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 23:52:18.967234 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 23:52:18.969083 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:52:18.974854 kernel: ahci 0000:00:1f.2: version 3.0
May 14 23:52:18.975101 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 14 23:52:18.975118 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 14 23:52:18.974759 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:52:18.980701 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 14 23:52:18.980940 kernel: scsi host0: ahci
May 14 23:52:18.985275 kernel: scsi host1: ahci
May 14 23:52:18.985546 kernel: BTRFS: device fsid 11358d57-dfa4-4197-9524-595753ed5512 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (473)
May 14 23:52:19.030451 kernel: scsi host2: ahci
May 14 23:52:19.031846 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:52:19.040575 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (466)
May 14 23:52:19.040604 kernel: scsi host3: ahci
May 14 23:52:19.042506 kernel: scsi host4: ahci
May 14 23:52:19.044279 kernel: scsi host5: ahci
May 14 23:52:19.044522 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
May 14 23:52:19.044538 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
May 14 23:52:19.044551 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
May 14 23:52:19.044563 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
May 14 23:52:19.044582 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
May 14 23:52:19.044595 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
May 14 23:52:19.065993 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 14 23:52:19.104596 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:52:19.131601 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 14 23:52:19.143098 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 14 23:52:19.145752 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 14 23:52:19.157749 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 23:52:19.216677 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 14 23:52:19.219937 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 23:52:19.237207 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 23:52:19.364475 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 14 23:52:19.364561 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 14 23:52:19.365454 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 14 23:52:19.366458 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 14 23:52:19.367444 kernel: ata3.00: applying bridge limits
May 14 23:52:19.367460 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 14 23:52:19.368446 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 14 23:52:19.369443 kernel: ata3.00: configured for UDMA/100
May 14 23:52:19.369466 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 14 23:52:19.380458 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 14 23:52:19.425576 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 14 23:52:19.425850 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 14 23:52:19.440453 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 14 23:52:19.546093 disk-uuid[569]: Primary Header is updated.
May 14 23:52:19.546093 disk-uuid[569]: Secondary Entries is updated.
May 14 23:52:19.546093 disk-uuid[569]: Secondary Header is updated.
May 14 23:52:19.550470 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 23:52:19.555458 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 23:52:20.557464 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 23:52:20.557680 disk-uuid[581]: The operation has completed successfully.
May 14 23:52:20.592403 systemd[1]: disk-uuid.service: Deactivated successfully.
May 14 23:52:20.592678 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 14 23:52:20.637771 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 14 23:52:20.641598 sh[594]: Success
May 14 23:52:20.655475 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 14 23:52:20.692154 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 14 23:52:20.705287 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 14 23:52:20.708779 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 14 23:52:20.720905 kernel: BTRFS info (device dm-0): first mount of filesystem 11358d57-dfa4-4197-9524-595753ed5512
May 14 23:52:20.720971 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 14 23:52:20.720983 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 14 23:52:20.722166 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 14 23:52:20.723012 kernel: BTRFS info (device dm-0): using free space tree
May 14 23:52:20.729229 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 14 23:52:20.730609 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 14 23:52:20.738541 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 14 23:52:20.739680 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 14 23:52:20.758862 kernel: BTRFS info (device vda6): first mount of filesystem 26320528-a534-4245-a65e-42f09448b5f1
May 14 23:52:20.758919 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 23:52:20.758931 kernel: BTRFS info (device vda6): using free space tree
May 14 23:52:20.762446 kernel: BTRFS info (device vda6): auto enabling async discard
May 14 23:52:20.767457 kernel: BTRFS info (device vda6): last unmount of filesystem 26320528-a534-4245-a65e-42f09448b5f1
May 14 23:52:20.776263 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 14 23:52:20.781617 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 14 23:52:20.923069 ignition[687]: Ignition 2.20.0
May 14 23:52:20.923088 ignition[687]: Stage: fetch-offline
May 14 23:52:20.923128 ignition[687]: no configs at "/usr/lib/ignition/base.d"
May 14 23:52:20.923137 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 23:52:20.923233 ignition[687]: parsed url from cmdline: ""
May 14 23:52:20.923237 ignition[687]: no config URL provided
May 14 23:52:20.923242 ignition[687]: reading system config file "/usr/lib/ignition/user.ign"
May 14 23:52:20.923251 ignition[687]: no config at "/usr/lib/ignition/user.ign"
May 14 23:52:20.923277 ignition[687]: op(1): [started] loading QEMU firmware config module
May 14 23:52:20.923285 ignition[687]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 14 23:52:20.938689 ignition[687]: op(1): [finished] loading QEMU firmware config module
May 14 23:52:20.938727 ignition[687]: QEMU firmware config was not found. Ignoring...
May 14 23:52:20.954488 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 23:52:20.969589 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 23:52:20.985365 ignition[687]: parsing config with SHA512: 7ec6c6005559553871772087252312ec9476b0576840fa3ba0a54e0532c80cb86fd9b09ddcdadb6add28fe106229e2c0324deb3f9543a46383796d5cafca80c5
May 14 23:52:20.994604 unknown[687]: fetched base config from "system"
May 14 23:52:20.994618 unknown[687]: fetched user config from "qemu"
May 14 23:52:20.995894 ignition[687]: fetch-offline: fetch-offline passed
May 14 23:52:20.996829 ignition[687]: Ignition finished successfully
May 14 23:52:21.000796 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 23:52:21.003497 systemd-networkd[779]: lo: Link UP
May 14 23:52:21.003508 systemd-networkd[779]: lo: Gained carrier
May 14 23:52:21.005161 systemd-networkd[779]: Enumeration completed
May 14 23:52:21.005533 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:52:21.005538 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 23:52:21.006465 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 23:52:21.006582 systemd-networkd[779]: eth0: Link UP
May 14 23:52:21.006585 systemd-networkd[779]: eth0: Gained carrier
May 14 23:52:21.006592 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:52:21.014586 systemd[1]: Reached target network.target - Network.
May 14 23:52:21.019434 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 14 23:52:21.034533 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.25/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 14 23:52:21.036110 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 14 23:52:21.108483 ignition[783]: Ignition 2.20.0
May 14 23:52:21.108498 ignition[783]: Stage: kargs
May 14 23:52:21.108665 ignition[783]: no configs at "/usr/lib/ignition/base.d"
May 14 23:52:21.108677 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 23:52:21.112640 ignition[783]: kargs: kargs passed
May 14 23:52:21.112704 ignition[783]: Ignition finished successfully
May 14 23:52:21.117618 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 14 23:52:21.126677 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 14 23:52:21.152099 ignition[792]: Ignition 2.20.0
May 14 23:52:21.152110 ignition[792]: Stage: disks
May 14 23:52:21.152315 ignition[792]: no configs at "/usr/lib/ignition/base.d"
May 14 23:52:21.152327 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 23:52:21.153183 ignition[792]: disks: disks passed
May 14 23:52:21.153228 ignition[792]: Ignition finished successfully
May 14 23:52:21.158945 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 14 23:52:21.161047 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 14 23:52:21.161649 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 14 23:52:21.161983 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 23:52:21.162314 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 23:52:21.168116 systemd[1]: Reached target basic.target - Basic System.
May 14 23:52:21.181559 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 14 23:52:21.196124 systemd-resolved[236]: Detected conflict on linux IN A 10.0.0.25
May 14 23:52:21.196139 systemd-resolved[236]: Hostname conflict, changing published hostname from 'linux' to 'linux9'.
May 14 23:52:21.198813 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 14 23:52:21.209346 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 14 23:52:21.222524 systemd[1]: Mounting sysroot.mount - /sysroot...
May 14 23:52:21.329468 kernel: EXT4-fs (vda9): mounted filesystem 36fdaeac-383d-468b-a0a4-9f47e3957a15 r/w with ordered data mode. Quota mode: none.
May 14 23:52:21.330546 systemd[1]: Mounted sysroot.mount - /sysroot.
May 14 23:52:21.331764 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 14 23:52:21.347508 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 23:52:21.349553 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 14 23:52:21.350407 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 14 23:52:21.350479 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 14 23:52:21.395975 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (811)
May 14 23:52:21.395995 kernel: BTRFS info (device vda6): first mount of filesystem 26320528-a534-4245-a65e-42f09448b5f1
May 14 23:52:21.396007 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 23:52:21.396017 kernel: BTRFS info (device vda6): using free space tree
May 14 23:52:21.350509 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 23:52:21.399444 kernel: BTRFS info (device vda6): auto enabling async discard
May 14 23:52:21.400533 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 23:52:21.427470 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 14 23:52:21.430365 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 14 23:52:21.472150 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
May 14 23:52:21.477697 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
May 14 23:52:21.482194 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
May 14 23:52:21.486231 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
May 14 23:52:21.572162 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 14 23:52:21.580504 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 14 23:52:21.582685 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 14 23:52:21.589451 kernel: BTRFS info (device vda6): last unmount of filesystem 26320528-a534-4245-a65e-42f09448b5f1
May 14 23:52:21.609031 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 14 23:52:21.641314 ignition[928]: INFO : Ignition 2.20.0
May 14 23:52:21.641314 ignition[928]: INFO : Stage: mount
May 14 23:52:21.644785 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 23:52:21.644785 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 23:52:21.644785 ignition[928]: INFO : mount: mount passed
May 14 23:52:21.644785 ignition[928]: INFO : Ignition finished successfully
May 14 23:52:21.644764 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 14 23:52:21.661489 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 14 23:52:21.719799 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 14 23:52:21.736576 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 23:52:21.743455 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (937)
May 14 23:52:21.743517 kernel: BTRFS info (device vda6): first mount of filesystem 26320528-a534-4245-a65e-42f09448b5f1
May 14 23:52:21.777972 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 23:52:21.778733 kernel: BTRFS info (device vda6): using free space tree
May 14 23:52:21.781450 kernel: BTRFS info (device vda6): auto enabling async discard
May 14 23:52:21.782918 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 23:52:21.832682 ignition[954]: INFO : Ignition 2.20.0
May 14 23:52:21.832682 ignition[954]: INFO : Stage: files
May 14 23:52:21.842329 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 23:52:21.842329 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 23:52:21.842329 ignition[954]: DEBUG : files: compiled without relabeling support, skipping
May 14 23:52:21.842329 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 14 23:52:21.842329 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 14 23:52:21.859991 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 14 23:52:21.909977 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 14 23:52:21.909977 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 14 23:52:21.909977 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 14 23:52:21.860848 unknown[954]: wrote ssh authorized keys file for user: core
May 14 23:52:21.916255 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 14 23:52:21.970015 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 14 23:52:22.192252 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 14 23:52:22.192252 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 14 23:52:22.222188 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 14 23:52:22.222188 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 14 23:52:22.222188 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 14 23:52:22.222188 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 23:52:22.222188 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 23:52:22.222188 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 23:52:22.222188 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 23:52:22.222188 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 14 23:52:22.222188 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 14 23:52:22.222188 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 14 23:52:22.222188 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 14 23:52:22.222188 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 14 23:52:22.222188 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 May 14 23:52:22.575939 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 14 23:52:22.703124 systemd-networkd[779]: eth0: Gained IPv6LL May 14 23:52:23.355239 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 14 23:52:23.355239 ignition[954]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 14 23:52:23.360008 ignition[954]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 23:52:23.360008 ignition[954]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 23:52:23.360008 ignition[954]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 14 23:52:23.360008 ignition[954]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 14 23:52:23.360008 ignition[954]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 23:52:23.360008 ignition[954]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 23:52:23.360008 ignition[954]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 14 23:52:23.360008 ignition[954]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 14 23:52:23.378701 ignition[954]: INFO : files: op(f): op(10): [started] 
removing enablement symlink(s) for "coreos-metadata.service" May 14 23:52:23.382737 ignition[954]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 14 23:52:23.384723 ignition[954]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 14 23:52:23.384723 ignition[954]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 14 23:52:23.387762 ignition[954]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 14 23:52:23.389342 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 14 23:52:23.391166 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 14 23:52:23.392847 ignition[954]: INFO : files: files passed May 14 23:52:23.393588 ignition[954]: INFO : Ignition finished successfully May 14 23:52:23.396318 systemd[1]: Finished ignition-files.service - Ignition (files). May 14 23:52:23.404656 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 14 23:52:23.407744 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 14 23:52:23.410643 systemd[1]: ignition-quench.service: Deactivated successfully. May 14 23:52:23.411669 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 14 23:52:23.417458 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory May 14 23:52:23.420290 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 23:52:23.422218 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 14 23:52:23.425026 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 23:52:23.422901 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 23:52:23.425844 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 14 23:52:23.432623 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 14 23:52:23.457569 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 14 23:52:23.457737 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 14 23:52:23.460783 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 14 23:52:23.462561 systemd[1]: Reached target initrd.target - Initrd Default Target. May 14 23:52:23.464811 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 14 23:52:23.475643 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 14 23:52:23.491238 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 23:52:23.504673 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 14 23:52:23.516888 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 14 23:52:23.518333 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 23:52:23.520752 systemd[1]: Stopped target timers.target - Timer Units. 
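
The Ignition files stage logged above finishes by writing its outcome to /sysroot/etc/.ignition-result.json (op(12)), which becomes /etc/.ignition-result.json once the real root is switched to. Purely as an illustrative sketch and not part of this log, and without assuming anything about that file's schema beyond it being JSON, the result could be inspected on the booted host roughly like this:

    #!/usr/bin/env python3
    # Illustrative sketch (not from the log): dump the Ignition result file
    # that op(12) above reports writing. The path is taken from the log; the
    # JSON schema is not assumed, so the file is simply loaded and printed.
    import json
    from pathlib import Path

    RESULT = Path("/etc/.ignition-result.json")

    try:
        print(json.dumps(json.loads(RESULT.read_text()), indent=2, sort_keys=True))
    except FileNotFoundError:
        print(f"{RESULT} not found - Ignition did not run in this root")
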
May 14 23:52:23.522927 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 23:52:23.523101 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 23:52:23.525251 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 14 23:52:23.527106 systemd[1]: Stopped target basic.target - Basic System. May 14 23:52:23.529173 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 14 23:52:23.531494 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 14 23:52:23.533607 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 14 23:52:23.535771 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 14 23:52:23.538035 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 14 23:52:23.540573 systemd[1]: Stopped target sysinit.target - System Initialization. May 14 23:52:23.542715 systemd[1]: Stopped target local-fs.target - Local File Systems. May 14 23:52:23.545061 systemd[1]: Stopped target swap.target - Swaps. May 14 23:52:23.546919 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 23:52:23.547092 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 14 23:52:23.549355 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 14 23:52:23.551095 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 23:52:23.553224 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 14 23:52:23.553795 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 23:52:23.555555 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 23:52:23.555715 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 14 23:52:23.557942 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 14 23:52:23.558081 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 14 23:52:23.560088 systemd[1]: Stopped target paths.target - Path Units. May 14 23:52:23.561830 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 23:52:23.567500 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 23:52:23.568978 systemd[1]: Stopped target slices.target - Slice Units. May 14 23:52:23.570849 systemd[1]: Stopped target sockets.target - Socket Units. May 14 23:52:23.573027 systemd[1]: iscsid.socket: Deactivated successfully. May 14 23:52:23.573166 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 14 23:52:23.575982 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 23:52:23.576100 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 23:52:23.578123 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 14 23:52:23.578291 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 23:52:23.580479 systemd[1]: ignition-files.service: Deactivated successfully. May 14 23:52:23.580627 systemd[1]: Stopped ignition-files.service - Ignition (files). May 14 23:52:23.598755 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 14 23:52:23.600889 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 14 23:52:23.601840 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
May 14 23:52:23.601998 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 14 23:52:23.604085 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 14 23:52:23.604220 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 14 23:52:23.612050 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 14 23:52:23.612170 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 14 23:52:23.620486 ignition[1008]: INFO : Ignition 2.20.0 May 14 23:52:23.620486 ignition[1008]: INFO : Stage: umount May 14 23:52:23.622333 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 23:52:23.622333 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 23:52:23.622333 ignition[1008]: INFO : umount: umount passed May 14 23:52:23.622333 ignition[1008]: INFO : Ignition finished successfully May 14 23:52:23.628006 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 23:52:23.628182 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 14 23:52:23.628883 systemd[1]: Stopped target network.target - Network. May 14 23:52:23.630435 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 23:52:23.630500 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 14 23:52:23.632533 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 23:52:23.632597 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 14 23:52:23.632877 systemd[1]: ignition-setup.service: Deactivated successfully. May 14 23:52:23.632939 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 14 23:52:23.636184 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 14 23:52:23.636238 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 14 23:52:23.640007 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 14 23:52:23.644243 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 14 23:52:23.646165 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 14 23:52:23.655242 systemd[1]: systemd-resolved.service: Deactivated successfully. May 14 23:52:23.655388 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 14 23:52:23.659928 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 14 23:52:23.660181 systemd[1]: systemd-networkd.service: Deactivated successfully. May 14 23:52:23.660339 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 14 23:52:23.664944 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 14 23:52:23.665806 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 14 23:52:23.665878 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 14 23:52:23.677847 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 14 23:52:23.680205 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 14 23:52:23.681360 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 23:52:23.684103 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 23:52:23.684162 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 23:52:23.687279 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
May 14 23:52:23.687337 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 14 23:52:23.690358 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 14 23:52:23.690412 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 23:52:23.693997 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 23:52:23.697500 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 23:52:23.698778 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 14 23:52:23.710987 systemd[1]: systemd-udevd.service: Deactivated successfully. May 14 23:52:23.751667 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 23:52:23.755167 systemd[1]: network-cleanup.service: Deactivated successfully. May 14 23:52:23.756353 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 14 23:52:23.759489 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 14 23:52:23.760624 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 14 23:52:23.763048 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 14 23:52:23.763106 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 14 23:52:23.766589 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 14 23:52:23.767706 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 14 23:52:23.770387 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 14 23:52:23.770474 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 14 23:52:23.773870 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 23:52:23.773947 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 23:52:23.790779 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 14 23:52:23.793225 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 14 23:52:23.793323 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 23:52:23.797033 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 14 23:52:23.797106 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 23:52:23.800892 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 14 23:52:23.800967 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 14 23:52:23.804393 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 23:52:23.804526 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 23:52:23.808637 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 14 23:52:23.810045 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 14 23:52:23.811894 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 14 23:52:23.813063 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 14 23:52:23.888857 systemd[1]: sysroot-boot.service: Deactivated successfully. May 14 23:52:23.889014 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
May 14 23:52:23.892031 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 14 23:52:23.894114 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 14 23:52:23.894203 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 14 23:52:23.911799 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 14 23:52:23.921559 systemd[1]: Switching root. May 14 23:52:23.955863 systemd-journald[194]: Journal stopped May 14 23:52:25.379395 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). May 14 23:52:25.379497 kernel: SELinux: policy capability network_peer_controls=1 May 14 23:52:25.379516 kernel: SELinux: policy capability open_perms=1 May 14 23:52:25.379531 kernel: SELinux: policy capability extended_socket_class=1 May 14 23:52:25.379550 kernel: SELinux: policy capability always_check_network=0 May 14 23:52:25.379565 kernel: SELinux: policy capability cgroup_seclabel=1 May 14 23:52:25.379580 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 14 23:52:25.379596 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 14 23:52:25.379611 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 14 23:52:25.379626 kernel: audit: type=1403 audit(1747266744.501:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 14 23:52:25.379650 systemd[1]: Successfully loaded SELinux policy in 42.258ms. May 14 23:52:25.379676 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.305ms. May 14 23:52:25.379702 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 23:52:25.379721 systemd[1]: Detected virtualization kvm. May 14 23:52:25.379737 systemd[1]: Detected architecture x86-64. May 14 23:52:25.379753 systemd[1]: Detected first boot. May 14 23:52:25.379769 systemd[1]: Initializing machine ID from VM UUID. May 14 23:52:25.379784 zram_generator::config[1055]: No configuration found. May 14 23:52:25.379801 kernel: Guest personality initialized and is inactive May 14 23:52:25.379816 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 14 23:52:25.379831 kernel: Initialized host personality May 14 23:52:25.379849 kernel: NET: Registered PF_VSOCK protocol family May 14 23:52:25.379865 systemd[1]: Populated /etc with preset unit settings. May 14 23:52:25.379881 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 14 23:52:25.379897 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 14 23:52:25.379913 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 14 23:52:25.379929 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 14 23:52:25.379944 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 14 23:52:25.379962 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 14 23:52:25.379978 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 14 23:52:25.379997 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 14 23:52:25.380013 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
May 14 23:52:25.380029 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 14 23:52:25.380046 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 14 23:52:25.380061 systemd[1]: Created slice user.slice - User and Session Slice. May 14 23:52:25.380078 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 23:52:25.380094 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 23:52:25.380110 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 14 23:52:25.380129 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 14 23:52:25.380145 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 14 23:52:25.380169 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 23:52:25.380185 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 14 23:52:25.380207 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 23:52:25.380223 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 14 23:52:25.380246 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 14 23:52:25.380261 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 14 23:52:25.380281 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 14 23:52:25.380297 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 23:52:25.380313 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 23:52:25.380331 systemd[1]: Reached target slices.target - Slice Units. May 14 23:52:25.380347 systemd[1]: Reached target swap.target - Swaps. May 14 23:52:25.380363 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 14 23:52:25.380379 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 14 23:52:25.380394 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 14 23:52:25.380410 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 23:52:25.380454 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 23:52:25.380470 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 23:52:25.380487 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 14 23:52:25.380502 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 14 23:52:25.380519 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 14 23:52:25.380535 systemd[1]: Mounting media.mount - External Media Directory... May 14 23:52:25.380551 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 23:52:25.380567 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 14 23:52:25.380583 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 14 23:52:25.380603 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
May 14 23:52:25.380620 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 14 23:52:25.380636 systemd[1]: Reached target machines.target - Containers. May 14 23:52:25.380652 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 14 23:52:25.380668 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 23:52:25.380684 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 23:52:25.380711 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 14 23:52:25.380727 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 23:52:25.380746 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 23:52:25.380762 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 23:52:25.380778 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 14 23:52:25.380794 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 23:52:25.380811 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 14 23:52:25.380827 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 14 23:52:25.380843 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 14 23:52:25.380858 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 14 23:52:25.380874 systemd[1]: Stopped systemd-fsck-usr.service. May 14 23:52:25.380894 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 23:52:25.380916 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 23:52:25.380953 systemd-journald[1119]: Collecting audit messages is disabled. May 14 23:52:25.380981 kernel: loop: module loaded May 14 23:52:25.380997 kernel: fuse: init (API version 7.39) May 14 23:52:25.381013 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 23:52:25.381028 systemd-journald[1119]: Journal started May 14 23:52:25.381061 systemd-journald[1119]: Runtime Journal (/run/log/journal/24bce47bd9384261826b9f9cb9c9dc2e) is 6M, max 48.4M, 42.3M free. May 14 23:52:25.123688 systemd[1]: Queued start job for default target multi-user.target. May 14 23:52:25.139235 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 14 23:52:25.139841 systemd[1]: systemd-journald.service: Deactivated successfully. May 14 23:52:25.383461 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 14 23:52:25.387454 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 14 23:52:25.393390 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 14 23:52:25.397437 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 23:52:25.399451 systemd[1]: verity-setup.service: Deactivated successfully. May 14 23:52:25.400495 systemd[1]: Stopped verity-setup.service. 
May 14 23:52:25.403442 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 23:52:25.407439 systemd[1]: Started systemd-journald.service - Journal Service. May 14 23:52:25.408715 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 14 23:52:25.410002 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 14 23:52:25.411252 systemd[1]: Mounted media.mount - External Media Directory. May 14 23:52:25.412349 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 14 23:52:25.413583 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 14 23:52:25.414790 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 14 23:52:25.416045 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 23:52:25.417734 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 14 23:52:25.417958 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 14 23:52:25.419406 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 23:52:25.419632 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 23:52:25.421058 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 23:52:25.421284 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 23:52:25.424137 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 14 23:52:25.424432 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 14 23:52:25.447951 kernel: ACPI: bus type drm_connector registered May 14 23:52:25.447722 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 23:52:25.447940 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 23:52:25.449540 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 23:52:25.449790 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 23:52:25.451282 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 23:52:25.452941 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 14 23:52:25.454706 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 14 23:52:25.456323 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 14 23:52:25.470368 systemd[1]: Reached target network-pre.target - Preparation for Network. May 14 23:52:25.477523 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 14 23:52:25.480026 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 14 23:52:25.481265 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 14 23:52:25.481293 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 23:52:25.483432 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 14 23:52:25.486397 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 14 23:52:25.489241 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
May 14 23:52:25.516711 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 23:52:25.518339 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 14 23:52:25.520594 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 14 23:52:25.522067 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 23:52:25.523930 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 14 23:52:25.524482 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 23:52:25.528631 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 23:52:25.533535 systemd-journald[1119]: Time spent on flushing to /var/log/journal/24bce47bd9384261826b9f9cb9c9dc2e is 16.623ms for 965 entries. May 14 23:52:25.533535 systemd-journald[1119]: System Journal (/var/log/journal/24bce47bd9384261826b9f9cb9c9dc2e) is 8M, max 195.6M, 187.6M free. May 14 23:52:25.799158 systemd-journald[1119]: Received client request to flush runtime journal. May 14 23:52:25.799242 kernel: loop0: detected capacity change from 0 to 218376 May 14 23:52:25.799284 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 14 23:52:25.799313 kernel: loop1: detected capacity change from 0 to 138176 May 14 23:52:25.799338 kernel: loop2: detected capacity change from 0 to 147912 May 14 23:52:25.539730 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 14 23:52:25.544505 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 14 23:52:25.548394 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 23:52:25.550067 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 14 23:52:25.551448 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 14 23:52:25.552983 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 14 23:52:25.566569 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 14 23:52:25.579631 udevadm[1175]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 14 23:52:25.613378 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 23:52:25.620400 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. May 14 23:52:25.620413 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. May 14 23:52:25.626408 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 23:52:25.739752 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 14 23:52:25.741696 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 14 23:52:25.749575 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 14 23:52:25.801276 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 14 23:52:25.804126 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
May 14 23:52:25.808464 kernel: loop3: detected capacity change from 0 to 218376 May 14 23:52:25.815604 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 14 23:52:25.867449 kernel: loop4: detected capacity change from 0 to 138176 May 14 23:52:25.876224 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 14 23:52:25.889858 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 23:52:25.895307 kernel: loop5: detected capacity change from 0 to 147912 May 14 23:52:25.892869 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 14 23:52:25.904937 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 14 23:52:25.905768 (sd-merge)[1193]: Merged extensions into '/usr'. May 14 23:52:25.912382 systemd[1]: Reload requested from client PID 1160 ('systemd-sysext') (unit systemd-sysext.service)... May 14 23:52:25.912409 systemd[1]: Reloading... May 14 23:52:25.912789 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. May 14 23:52:25.912818 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. May 14 23:52:25.984455 zram_generator::config[1231]: No configuration found. May 14 23:52:26.097242 ldconfig[1155]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 14 23:52:26.133820 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:52:26.204361 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 14 23:52:26.204703 systemd[1]: Reloading finished in 291 ms. May 14 23:52:26.227053 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 14 23:52:26.228973 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 14 23:52:26.231200 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 23:52:26.248321 systemd[1]: Starting ensure-sysext.service... May 14 23:52:26.250812 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 23:52:26.262978 systemd[1]: Reload requested from client PID 1269 ('systemctl') (unit ensure-sysext.service)... May 14 23:52:26.262996 systemd[1]: Reloading... May 14 23:52:26.276846 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 14 23:52:26.277729 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 14 23:52:26.278897 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 14 23:52:26.279238 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. May 14 23:52:26.279341 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. May 14 23:52:26.284489 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot. May 14 23:52:26.284500 systemd-tmpfiles[1270]: Skipping /boot May 14 23:52:26.304615 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot. May 14 23:52:26.304629 systemd-tmpfiles[1270]: Skipping /boot May 14 23:52:26.316456 zram_generator::config[1299]: No configuration found. 
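
The (sd-merge) entries above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images onto /usr. A minimal sketch, assuming only the /etc/extensions layout already visible earlier in this log (kubernetes.raw symlinked into /opt/extensions), of listing the staged images and where their symlinks point; this is an annotation, not boot output:

    #!/usr/bin/env python3
    # Illustrative sketch: list sysext images staged under /etc/extensions and
    # resolve any symlinks (the log shows kubernetes.raw pointing into
    # /opt/extensions). The directory layout is the only assumption made.
    from pathlib import Path

    ext_dir = Path("/etc/extensions")

    if not ext_dir.is_dir():
        print(f"{ext_dir} does not exist on this system")
    else:
        for image in sorted(ext_dir.glob("*.raw")):
            suffix = f" -> {image.resolve()}" if image.is_symlink() else ""
            print(f"{image.name}{suffix}")
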
May 14 23:52:26.460733 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:52:26.542243 systemd[1]: Reloading finished in 278 ms. May 14 23:52:26.564347 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 14 23:52:26.587720 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 23:52:26.597754 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 23:52:26.600545 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 14 23:52:26.603191 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 14 23:52:26.607953 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 23:52:26.611700 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 23:52:26.614794 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 14 23:52:26.620551 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 23:52:26.620746 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 23:52:26.622184 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 23:52:26.628656 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 23:52:26.633653 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 23:52:26.634916 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 23:52:26.635019 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 23:52:26.637341 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 14 23:52:26.638638 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 23:52:26.640183 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 23:52:26.640429 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 23:52:26.642233 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 23:52:26.647531 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 23:52:26.650927 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 14 23:52:26.652815 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 23:52:26.653026 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 23:52:26.661142 systemd-udevd[1343]: Using default interface naming scheme 'v255'. May 14 23:52:26.666237 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 14 23:52:26.675941 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 14 23:52:26.676345 augenrules[1372]: No rules May 14 23:52:26.676238 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 23:52:26.681583 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 23:52:26.685350 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 23:52:26.688791 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 23:52:26.691649 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 23:52:26.692869 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 23:52:26.692907 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 23:52:26.695586 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 14 23:52:26.696700 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 23:52:26.697609 systemd[1]: Finished ensure-sysext.service. May 14 23:52:26.698959 systemd[1]: audit-rules.service: Deactivated successfully. May 14 23:52:26.699227 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 23:52:26.700910 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 23:52:26.702769 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 23:52:26.702987 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 23:52:26.704547 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 23:52:26.704772 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 23:52:26.706803 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 14 23:52:26.709962 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 23:52:26.710296 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 23:52:26.714099 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 23:52:26.714337 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 23:52:26.721872 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 14 23:52:26.726968 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 14 23:52:26.748708 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 23:52:26.774983 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 23:52:26.775082 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 23:52:26.786693 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 14 23:52:26.788238 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 23:52:26.790239 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
May 14 23:52:26.814489 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1411) May 14 23:52:26.972192 systemd-resolved[1341]: Positive Trust Anchors: May 14 23:52:26.972207 systemd-resolved[1341]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 23:52:26.972239 systemd-resolved[1341]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 23:52:26.978884 systemd-resolved[1341]: Defaulting to hostname 'linux'. May 14 23:52:26.979446 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 14 23:52:26.995442 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 May 14 23:52:27.019197 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 14 23:52:27.020730 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 23:52:27.027341 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 23:52:27.046282 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 14 23:52:27.061438 kernel: ACPI: button: Power Button [PWRF] May 14 23:52:27.062661 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 14 23:52:27.063326 systemd[1]: Reached target time-set.target - System Time Set. May 14 23:52:27.075736 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 23:52:27.081228 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 14 23:52:27.094962 systemd-networkd[1412]: lo: Link UP May 14 23:52:27.094979 systemd-networkd[1412]: lo: Gained carrier May 14 23:52:27.097018 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 14 23:52:27.097269 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 14 23:52:27.097461 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 14 23:52:27.097503 systemd-networkd[1412]: Enumeration completed May 14 23:52:27.097574 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 23:52:27.098563 systemd-networkd[1412]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 23:52:27.098570 systemd-networkd[1412]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 23:52:27.103193 systemd[1]: Reached target network.target - Network. May 14 23:52:27.103680 systemd-networkd[1412]: eth0: Link UP May 14 23:52:27.103692 systemd-networkd[1412]: eth0: Gained carrier May 14 23:52:27.103714 systemd-networkd[1412]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 23:52:27.114987 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
May 14 23:52:27.120062 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 14 23:52:27.126552 systemd-networkd[1412]: eth0: DHCPv4 address 10.0.0.25/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 23:52:27.128319 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection. May 14 23:52:27.128990 systemd-timesyncd[1415]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 14 23:52:27.129050 systemd-timesyncd[1415]: Initial clock synchronization to Wed 2025-05-14 23:52:27.216488 UTC. May 14 23:52:27.132449 kernel: mousedev: PS/2 mouse device common for all mice May 14 23:52:27.137171 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 14 23:52:27.147501 kernel: kvm_amd: TSC scaling supported May 14 23:52:27.147553 kernel: kvm_amd: Nested Virtualization enabled May 14 23:52:27.147567 kernel: kvm_amd: Nested Paging enabled May 14 23:52:27.147579 kernel: kvm_amd: LBR virtualization supported May 14 23:52:27.148583 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 14 23:52:27.148609 kernel: kvm_amd: Virtual GIF supported May 14 23:52:27.194159 kernel: EDAC MC: Ver: 3.0.0 May 14 23:52:27.216543 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 14 23:52:27.232802 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 14 23:52:27.234722 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 23:52:27.289192 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 23:52:27.327000 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 14 23:52:27.328724 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 23:52:27.329952 systemd[1]: Reached target sysinit.target - System Initialization. May 14 23:52:27.331254 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 14 23:52:27.332670 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 14 23:52:27.334237 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 14 23:52:27.335479 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 14 23:52:27.336761 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 14 23:52:27.338102 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 23:52:27.338133 systemd[1]: Reached target paths.target - Path Units. May 14 23:52:27.339076 systemd[1]: Reached target timers.target - Timer Units. May 14 23:52:27.341258 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 14 23:52:27.344714 systemd[1]: Starting docker.socket - Docker Socket for the API... May 14 23:52:27.348681 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 14 23:52:27.350254 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 14 23:52:27.351538 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 14 23:52:27.363405 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
May 14 23:52:27.365115 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 14 23:52:27.367772 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 14 23:52:27.369531 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 14 23:52:27.370789 systemd[1]: Reached target sockets.target - Socket Units. May 14 23:52:27.371835 systemd[1]: Reached target basic.target - Basic System. May 14 23:52:27.372842 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 14 23:52:27.372873 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 14 23:52:27.373890 systemd[1]: Starting containerd.service - containerd container runtime... May 14 23:52:27.376112 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 14 23:52:27.377965 lvm[1448]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 23:52:27.381820 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 14 23:52:27.385622 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 14 23:52:27.386776 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 14 23:52:27.387827 jq[1451]: false May 14 23:52:27.388643 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 14 23:52:27.394521 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 14 23:52:27.396799 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 14 23:52:27.399567 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 14 23:52:27.406900 systemd[1]: Starting systemd-logind.service - User Login Management... May 14 23:52:27.408023 dbus-daemon[1450]: [system] SELinux support is enabled May 14 23:52:27.408901 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 23:52:27.409453 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 14 23:52:27.409599 extend-filesystems[1452]: Found loop3 May 14 23:52:27.410863 extend-filesystems[1452]: Found loop4 May 14 23:52:27.412458 extend-filesystems[1452]: Found loop5 May 14 23:52:27.412458 extend-filesystems[1452]: Found sr0 May 14 23:52:27.412458 extend-filesystems[1452]: Found vda May 14 23:52:27.412458 extend-filesystems[1452]: Found vda1 May 14 23:52:27.412458 extend-filesystems[1452]: Found vda2 May 14 23:52:27.412458 extend-filesystems[1452]: Found vda3 May 14 23:52:27.412458 extend-filesystems[1452]: Found usr May 14 23:52:27.412458 extend-filesystems[1452]: Found vda4 May 14 23:52:27.412458 extend-filesystems[1452]: Found vda6 May 14 23:52:27.412458 extend-filesystems[1452]: Found vda7 May 14 23:52:27.412458 extend-filesystems[1452]: Found vda9 May 14 23:52:27.412458 extend-filesystems[1452]: Checking size of /dev/vda9 May 14 23:52:27.416594 systemd[1]: Starting update-engine.service - Update Engine... May 14 23:52:27.422769 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 14 23:52:27.425439 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
May 14 23:52:27.434470 update_engine[1461]: I20250514 23:52:27.434334 1461 main.cc:92] Flatcar Update Engine starting May 14 23:52:27.434828 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 14 23:52:27.437563 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 23:52:27.437858 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 14 23:52:27.438275 systemd[1]: motdgen.service: Deactivated successfully. May 14 23:52:27.438531 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 14 23:52:27.442434 jq[1467]: true May 14 23:52:27.442163 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 14 23:52:27.442682 update_engine[1461]: I20250514 23:52:27.441337 1461 update_check_scheduler.cc:74] Next update check in 5m45s May 14 23:52:27.443012 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 14 23:52:27.450732 extend-filesystems[1452]: Resized partition /dev/vda9 May 14 23:52:27.456950 (ntainerd)[1476]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 14 23:52:27.459053 extend-filesystems[1479]: resize2fs 1.47.1 (20-May-2024) May 14 23:52:27.465386 jq[1475]: true May 14 23:52:27.466445 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 14 23:52:27.479445 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1393) May 14 23:52:27.492653 tar[1473]: linux-amd64/LICENSE May 14 23:52:27.492653 tar[1473]: linux-amd64/helm May 14 23:52:27.492257 systemd[1]: Started update-engine.service - Update Engine. May 14 23:52:27.493975 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 23:52:27.494214 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 14 23:52:27.495764 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 23:52:27.495784 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 14 23:52:27.500532 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 14 23:52:27.505713 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 14 23:52:27.559657 extend-filesystems[1479]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 14 23:52:27.559657 extend-filesystems[1479]: old_desc_blocks = 1, new_desc_blocks = 1 May 14 23:52:27.559657 extend-filesystems[1479]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 14 23:52:27.558812 systemd-logind[1460]: Watching system buttons on /dev/input/event1 (Power Button) May 14 23:52:27.567283 extend-filesystems[1452]: Resized filesystem in /dev/vda9 May 14 23:52:27.558846 systemd-logind[1460]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 14 23:52:27.559076 systemd-logind[1460]: New seat seat0. May 14 23:52:27.559950 systemd[1]: Started systemd-logind.service - User Login Management. May 14 23:52:27.565553 systemd[1]: extend-filesystems.service: Deactivated successfully. 
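
For scale, the resize2fs output above grows the EXT4 root on /dev/vda9 from 553472 to 1864699 blocks of 4 KiB each; a quick conversion of those two figures, using only the numbers reported in the log:

    # Convert the block counts reported by resize2fs above (4 KiB blocks) to GiB.
    BLOCK_SIZE = 4096
    for label, blocks in (("before", 553_472), ("after", 1_864_699)):
        print(f"{label}: {blocks * BLOCK_SIZE / 2**30:.2f} GiB")
    # Prints roughly: before: 2.11 GiB, after: 7.11 GiB
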
May 14 23:52:27.565884 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 14 23:52:27.578332 sshd_keygen[1470]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 23:52:27.603366 locksmithd[1499]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 23:52:27.610748 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 23:52:27.625611 systemd[1]: Starting issuegen.service - Generate /run/issue... May 14 23:52:27.633606 systemd[1]: issuegen.service: Deactivated successfully. May 14 23:52:27.633915 systemd[1]: Finished issuegen.service - Generate /run/issue. May 14 23:52:27.640763 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 14 23:52:27.663117 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 23:52:27.673794 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 23:52:27.677181 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 14 23:52:27.678784 systemd[1]: Reached target getty.target - Login Prompts. May 14 23:52:27.766216 bash[1504]: Updated "/home/core/.ssh/authorized_keys" May 14 23:52:27.768772 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 14 23:52:27.771693 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 14 23:52:28.001262 containerd[1476]: time="2025-05-14T23:52:28.001162936Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 14 23:52:28.028180 containerd[1476]: time="2025-05-14T23:52:28.028058057Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 14 23:52:28.030085 containerd[1476]: time="2025-05-14T23:52:28.030044264Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 14 23:52:28.030085 containerd[1476]: time="2025-05-14T23:52:28.030070889Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 14 23:52:28.030142 containerd[1476]: time="2025-05-14T23:52:28.030087157Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 14 23:52:28.030326 containerd[1476]: time="2025-05-14T23:52:28.030303992Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 14 23:52:28.030365 containerd[1476]: time="2025-05-14T23:52:28.030324976Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 14 23:52:28.030448 containerd[1476]: time="2025-05-14T23:52:28.030418308Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:52:28.030470 containerd[1476]: time="2025-05-14T23:52:28.030446080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 14 23:52:28.030768 containerd[1476]: time="2025-05-14T23:52:28.030740200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:52:28.030768 containerd[1476]: time="2025-05-14T23:52:28.030757618Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 14 23:52:28.030826 containerd[1476]: time="2025-05-14T23:52:28.030769796Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:52:28.030826 containerd[1476]: time="2025-05-14T23:52:28.030809719Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 14 23:52:28.030943 containerd[1476]: time="2025-05-14T23:52:28.030922805Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 14 23:52:28.031207 containerd[1476]: time="2025-05-14T23:52:28.031185959Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 14 23:52:28.031374 containerd[1476]: time="2025-05-14T23:52:28.031354047Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:52:28.031374 containerd[1476]: time="2025-05-14T23:52:28.031369933Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 14 23:52:28.031531 containerd[1476]: time="2025-05-14T23:52:28.031511155Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 14 23:52:28.031621 containerd[1476]: time="2025-05-14T23:52:28.031603369Z" level=info msg="metadata content store policy set" policy=shared May 14 23:52:28.053142 containerd[1476]: time="2025-05-14T23:52:28.053116795Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 14 23:52:28.053194 containerd[1476]: time="2025-05-14T23:52:28.053172260Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 14 23:52:28.053194 containerd[1476]: time="2025-05-14T23:52:28.053189244Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 14 23:52:28.053270 containerd[1476]: time="2025-05-14T23:52:28.053206570Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 14 23:52:28.053295 containerd[1476]: time="2025-05-14T23:52:28.053275898Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 14 23:52:28.053496 containerd[1476]: time="2025-05-14T23:52:28.053464194Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 14 23:52:28.053768 containerd[1476]: time="2025-05-14T23:52:28.053747676Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 14 23:52:28.053914 containerd[1476]: time="2025-05-14T23:52:28.053893360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 May 14 23:52:28.053949 containerd[1476]: time="2025-05-14T23:52:28.053917829Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 14 23:52:28.053979 containerd[1476]: time="2025-05-14T23:52:28.053947969Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 14 23:52:28.053979 containerd[1476]: time="2025-05-14T23:52:28.053965468Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 14 23:52:28.054035 containerd[1476]: time="2025-05-14T23:52:28.053980256Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 14 23:52:28.054035 containerd[1476]: time="2025-05-14T23:52:28.053994963Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 14 23:52:28.054035 containerd[1476]: time="2025-05-14T23:52:28.054010497Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 14 23:52:28.054111 containerd[1476]: time="2025-05-14T23:52:28.054034310Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 14 23:52:28.054111 containerd[1476]: time="2025-05-14T23:52:28.054051074Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 14 23:52:28.054111 containerd[1476]: time="2025-05-14T23:52:28.054068420Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 14 23:52:28.054111 containerd[1476]: time="2025-05-14T23:52:28.054081979Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 14 23:52:28.054111 containerd[1476]: time="2025-05-14T23:52:28.054111314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 14 23:52:28.054221 containerd[1476]: time="2025-05-14T23:52:28.054128338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 14 23:52:28.054221 containerd[1476]: time="2025-05-14T23:52:28.054146007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 14 23:52:28.054221 containerd[1476]: time="2025-05-14T23:52:28.054161943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 14 23:52:28.054221 containerd[1476]: time="2025-05-14T23:52:28.054179401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 14 23:52:28.054221 containerd[1476]: time="2025-05-14T23:52:28.054196033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 14 23:52:28.054221 containerd[1476]: time="2025-05-14T23:52:28.054211213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 14 23:52:28.054221 containerd[1476]: time="2025-05-14T23:52:28.054225719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 14 23:52:28.054382 containerd[1476]: time="2025-05-14T23:52:28.054241747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 May 14 23:52:28.054382 containerd[1476]: time="2025-05-14T23:52:28.054265863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 14 23:52:28.054382 containerd[1476]: time="2025-05-14T23:52:28.054282636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 14 23:52:28.054382 containerd[1476]: time="2025-05-14T23:52:28.054299278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 14 23:52:28.054382 containerd[1476]: time="2025-05-14T23:52:28.054312605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 14 23:52:28.054382 containerd[1476]: time="2025-05-14T23:52:28.054328370Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 14 23:52:28.054382 containerd[1476]: time="2025-05-14T23:52:28.054351700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 14 23:52:28.054382 containerd[1476]: time="2025-05-14T23:52:28.054371485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 14 23:52:28.054382 containerd[1476]: time="2025-05-14T23:52:28.054385729Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 14 23:52:28.054650 containerd[1476]: time="2025-05-14T23:52:28.054459650Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 14 23:52:28.054650 containerd[1476]: time="2025-05-14T23:52:28.054480079Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 14 23:52:28.054650 containerd[1476]: time="2025-05-14T23:52:28.054491875Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 14 23:52:28.054650 containerd[1476]: time="2025-05-14T23:52:28.054505837Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 14 23:52:28.054650 containerd[1476]: time="2025-05-14T23:52:28.054517654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 14 23:52:28.054650 containerd[1476]: time="2025-05-14T23:52:28.054533428Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 14 23:52:28.054650 containerd[1476]: time="2025-05-14T23:52:28.054548499Z" level=info msg="NRI interface is disabled by configuration." May 14 23:52:28.054650 containerd[1476]: time="2025-05-14T23:52:28.054560960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 14 23:52:28.055051 containerd[1476]: time="2025-05-14T23:52:28.054922523Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 14 23:52:28.055051 containerd[1476]: time="2025-05-14T23:52:28.054990449Z" level=info msg="Connect containerd service" May 14 23:52:28.055051 containerd[1476]: time="2025-05-14T23:52:28.055029152Z" level=info msg="using legacy CRI server" May 14 23:52:28.055051 containerd[1476]: time="2025-05-14T23:52:28.055037845Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 23:52:28.055388 containerd[1476]: time="2025-05-14T23:52:28.055163141Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 14 23:52:28.055835 containerd[1476]: time="2025-05-14T23:52:28.055809083Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 23:52:28.056126 
containerd[1476]: time="2025-05-14T23:52:28.056018151Z" level=info msg="Start subscribing containerd event" May 14 23:52:28.056126 containerd[1476]: time="2025-05-14T23:52:28.056066051Z" level=info msg="Start recovering state" May 14 23:52:28.056188 containerd[1476]: time="2025-05-14T23:52:28.056150942Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 23:52:28.056257 containerd[1476]: time="2025-05-14T23:52:28.056230251Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 23:52:28.056549 containerd[1476]: time="2025-05-14T23:52:28.056270778Z" level=info msg="Start event monitor" May 14 23:52:28.056549 containerd[1476]: time="2025-05-14T23:52:28.056284084Z" level=info msg="Start snapshots syncer" May 14 23:52:28.056549 containerd[1476]: time="2025-05-14T23:52:28.056293171Z" level=info msg="Start cni network conf syncer for default" May 14 23:52:28.056549 containerd[1476]: time="2025-05-14T23:52:28.056303849Z" level=info msg="Start streaming server" May 14 23:52:28.056474 systemd[1]: Started containerd.service - containerd container runtime. May 14 23:52:28.057882 containerd[1476]: time="2025-05-14T23:52:28.057597787Z" level=info msg="containerd successfully booted in 0.057909s" May 14 23:52:28.256224 tar[1473]: linux-amd64/README.md May 14 23:52:28.272956 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 23:52:28.605776 systemd-networkd[1412]: eth0: Gained IPv6LL May 14 23:52:28.609380 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 23:52:28.611607 systemd[1]: Reached target network-online.target - Network is Online. May 14 23:52:28.628708 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 14 23:52:28.631855 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:52:28.634344 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 23:52:28.655681 systemd[1]: coreos-metadata.service: Deactivated successfully. May 14 23:52:28.656148 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 14 23:52:28.657830 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 23:52:28.661249 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 23:52:29.325366 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:52:29.327008 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 23:52:29.328242 systemd[1]: Startup finished in 860ms (kernel) + 6.769s (initrd) + 4.867s (userspace) = 12.497s. May 14 23:52:29.355831 (kubelet)[1564]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:52:30.009824 kubelet[1564]: E0514 23:52:30.009679 1564 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:52:30.014746 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:52:30.014964 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:52:30.015401 systemd[1]: kubelet.service: Consumed 1.221s CPU time, 253.4M memory peak. 
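The kubelet exit above is expected on a node that has not been initialized yet: /var/lib/kubelet/config.yaml is typically written by kubeadm init or kubeadm join, so until that happens the unit fails and systemd keeps rescheduling it (the "Scheduled restart job" entries later in this log). A minimal, hypothetical pre-flight check in Python, mirroring the error message:

    # Hypothetical helper, not part of Flatcar or kubeadm: reports whether the
    # kubeadm-generated kubelet configuration exists yet.
    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

    def kubelet_config_ready() -> bool:
        if KUBELET_CONFIG.is_file():
            return True
        print(f"{KUBELET_CONFIG} not found; run 'kubeadm init' or 'kubeadm join' first")
        return False

    if __name__ == "__main__":
        kubelet_config_ready()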
May 14 23:52:31.500604 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 23:52:31.509743 systemd[1]: Started sshd@0-10.0.0.25:22-10.0.0.1:50586.service - OpenSSH per-connection server daemon (10.0.0.1:50586). May 14 23:52:31.571491 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 50586 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 14 23:52:31.574398 sshd-session[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:52:31.581643 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 23:52:31.594880 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 14 23:52:31.604319 systemd-logind[1460]: New session 1 of user core. May 14 23:52:31.610968 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 23:52:31.623833 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 23:52:31.627096 (systemd)[1581]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 23:52:31.629572 systemd-logind[1460]: New session c1 of user core. May 14 23:52:31.788264 systemd[1581]: Queued start job for default target default.target. May 14 23:52:31.799995 systemd[1581]: Created slice app.slice - User Application Slice. May 14 23:52:31.800027 systemd[1581]: Reached target paths.target - Paths. May 14 23:52:31.800074 systemd[1581]: Reached target timers.target - Timers. May 14 23:52:31.801908 systemd[1581]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 23:52:31.814121 systemd[1581]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 23:52:31.814292 systemd[1581]: Reached target sockets.target - Sockets. May 14 23:52:31.814352 systemd[1581]: Reached target basic.target - Basic System. May 14 23:52:31.814411 systemd[1581]: Reached target default.target - Main User Target. May 14 23:52:31.814475 systemd[1581]: Startup finished in 177ms. May 14 23:52:31.814868 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 23:52:31.816760 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 23:52:31.885814 systemd[1]: Started sshd@1-10.0.0.25:22-10.0.0.1:50588.service - OpenSSH per-connection server daemon (10.0.0.1:50588). May 14 23:52:31.931513 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 50588 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 14 23:52:31.933568 sshd-session[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:52:31.938170 systemd-logind[1460]: New session 2 of user core. May 14 23:52:31.947589 systemd[1]: Started session-2.scope - Session 2 of User core. May 14 23:52:32.001334 sshd[1594]: Connection closed by 10.0.0.1 port 50588 May 14 23:52:32.001818 sshd-session[1592]: pam_unix(sshd:session): session closed for user core May 14 23:52:32.010448 systemd[1]: sshd@1-10.0.0.25:22-10.0.0.1:50588.service: Deactivated successfully. May 14 23:52:32.012300 systemd[1]: session-2.scope: Deactivated successfully. May 14 23:52:32.013988 systemd-logind[1460]: Session 2 logged out. Waiting for processes to exit. May 14 23:52:32.021747 systemd[1]: Started sshd@2-10.0.0.25:22-10.0.0.1:50592.service - OpenSSH per-connection server daemon (10.0.0.1:50592). May 14 23:52:32.022867 systemd-logind[1460]: Removed session 2. 
May 14 23:52:32.066986 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 50592 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 14 23:52:32.068741 sshd-session[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:52:32.073638 systemd-logind[1460]: New session 3 of user core. May 14 23:52:32.086661 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 23:52:32.137759 sshd[1602]: Connection closed by 10.0.0.1 port 50592 May 14 23:52:32.138201 sshd-session[1599]: pam_unix(sshd:session): session closed for user core May 14 23:52:32.148224 systemd[1]: sshd@2-10.0.0.25:22-10.0.0.1:50592.service: Deactivated successfully. May 14 23:52:32.150192 systemd[1]: session-3.scope: Deactivated successfully. May 14 23:52:32.152036 systemd-logind[1460]: Session 3 logged out. Waiting for processes to exit. May 14 23:52:32.161915 systemd[1]: Started sshd@3-10.0.0.25:22-10.0.0.1:50606.service - OpenSSH per-connection server daemon (10.0.0.1:50606). May 14 23:52:32.162987 systemd-logind[1460]: Removed session 3. May 14 23:52:32.206788 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 50606 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 14 23:52:32.208472 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:52:32.213556 systemd-logind[1460]: New session 4 of user core. May 14 23:52:32.223710 systemd[1]: Started session-4.scope - Session 4 of User core. May 14 23:52:32.280611 sshd[1610]: Connection closed by 10.0.0.1 port 50606 May 14 23:52:32.281000 sshd-session[1607]: pam_unix(sshd:session): session closed for user core May 14 23:52:32.295807 systemd[1]: sshd@3-10.0.0.25:22-10.0.0.1:50606.service: Deactivated successfully. May 14 23:52:32.298406 systemd[1]: session-4.scope: Deactivated successfully. May 14 23:52:32.300398 systemd-logind[1460]: Session 4 logged out. Waiting for processes to exit. May 14 23:52:32.313936 systemd[1]: Started sshd@4-10.0.0.25:22-10.0.0.1:50610.service - OpenSSH per-connection server daemon (10.0.0.1:50610). May 14 23:52:32.315302 systemd-logind[1460]: Removed session 4. May 14 23:52:32.354450 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 50610 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 14 23:52:32.356259 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:52:32.361010 systemd-logind[1460]: New session 5 of user core. May 14 23:52:32.370595 systemd[1]: Started session-5.scope - Session 5 of User core. May 14 23:52:32.433063 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 14 23:52:32.433491 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 23:52:32.454816 sudo[1619]: pam_unix(sudo:session): session closed for user root May 14 23:52:32.456826 sshd[1618]: Connection closed by 10.0.0.1 port 50610 May 14 23:52:32.457443 sshd-session[1615]: pam_unix(sshd:session): session closed for user core May 14 23:52:32.469274 systemd[1]: sshd@4-10.0.0.25:22-10.0.0.1:50610.service: Deactivated successfully. May 14 23:52:32.471370 systemd[1]: session-5.scope: Deactivated successfully. May 14 23:52:32.473462 systemd-logind[1460]: Session 5 logged out. Waiting for processes to exit. May 14 23:52:32.488814 systemd[1]: Started sshd@5-10.0.0.25:22-10.0.0.1:50626.service - OpenSSH per-connection server daemon (10.0.0.1:50626). 
May 14 23:52:32.490010 systemd-logind[1460]: Removed session 5. May 14 23:52:32.535374 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 50626 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 14 23:52:32.537272 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:52:32.542632 systemd-logind[1460]: New session 6 of user core. May 14 23:52:32.552670 systemd[1]: Started session-6.scope - Session 6 of User core. May 14 23:52:32.608510 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 14 23:52:32.608909 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 23:52:32.613212 sudo[1629]: pam_unix(sudo:session): session closed for user root May 14 23:52:32.620113 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 14 23:52:32.620481 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 23:52:32.642761 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 23:52:32.677520 augenrules[1651]: No rules May 14 23:52:32.679664 systemd[1]: audit-rules.service: Deactivated successfully. May 14 23:52:32.679960 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 23:52:32.681390 sudo[1628]: pam_unix(sudo:session): session closed for user root May 14 23:52:32.683218 sshd[1627]: Connection closed by 10.0.0.1 port 50626 May 14 23:52:32.683667 sshd-session[1624]: pam_unix(sshd:session): session closed for user core May 14 23:52:32.699176 systemd[1]: sshd@5-10.0.0.25:22-10.0.0.1:50626.service: Deactivated successfully. May 14 23:52:32.701252 systemd[1]: session-6.scope: Deactivated successfully. May 14 23:52:32.704618 systemd-logind[1460]: Session 6 logged out. Waiting for processes to exit. May 14 23:52:32.711750 systemd[1]: Started sshd@6-10.0.0.25:22-10.0.0.1:50638.service - OpenSSH per-connection server daemon (10.0.0.1:50638). May 14 23:52:32.712870 systemd-logind[1460]: Removed session 6. May 14 23:52:32.756295 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 50638 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 14 23:52:32.757906 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:52:32.762561 systemd-logind[1460]: New session 7 of user core. May 14 23:52:32.778581 systemd[1]: Started session-7.scope - Session 7 of User core. May 14 23:52:32.834173 sudo[1663]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 23:52:32.834584 sudo[1663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 23:52:33.757774 systemd[1]: Starting docker.service - Docker Application Container Engine... May 14 23:52:33.757918 (dockerd)[1682]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 23:52:34.302089 dockerd[1682]: time="2025-05-14T23:52:34.301981587Z" level=info msg="Starting up" May 14 23:52:34.844852 dockerd[1682]: time="2025-05-14T23:52:34.844779582Z" level=info msg="Loading containers: start." 
May 14 23:52:35.038461 kernel: Initializing XFRM netlink socket May 14 23:52:35.127011 systemd-networkd[1412]: docker0: Link UP May 14 23:52:35.166505 dockerd[1682]: time="2025-05-14T23:52:35.166450752Z" level=info msg="Loading containers: done." May 14 23:52:35.181715 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3873279169-merged.mount: Deactivated successfully. May 14 23:52:35.184922 dockerd[1682]: time="2025-05-14T23:52:35.184862438Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 23:52:35.185060 dockerd[1682]: time="2025-05-14T23:52:35.185025951Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 May 14 23:52:35.185223 dockerd[1682]: time="2025-05-14T23:52:35.185192491Z" level=info msg="Daemon has completed initialization" May 14 23:52:35.297876 dockerd[1682]: time="2025-05-14T23:52:35.297777261Z" level=info msg="API listen on /run/docker.sock" May 14 23:52:35.298018 systemd[1]: Started docker.service - Docker Application Container Engine. May 14 23:52:36.325708 containerd[1476]: time="2025-05-14T23:52:36.325656232Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 14 23:52:37.746517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1628559962.mount: Deactivated successfully. May 14 23:52:40.265882 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 23:52:40.284678 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:52:40.555651 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:52:40.566199 (kubelet)[1892]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:52:41.051991 kubelet[1892]: E0514 23:52:41.051440 1892 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:52:41.061956 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:52:41.062675 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:52:41.063135 systemd[1]: kubelet.service: Consumed 271ms CPU time, 106.2M memory peak. 
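The kubelet keeps crash-looping for the same missing-config reason, and the gap between each failure and the next "Scheduled restart job" entry is roughly ten seconds, consistent with a RestartSec-style delay in the unit (an assumption; the unit file itself is not shown in this log). A small sketch that pulls the interval straight out of two timestamps copied from the entries above:

    # Illustrative: failure-to-restart interval, timestamps copied from this log
    # (both on May 14, so only the time of day matters for the delta).
    from datetime import datetime

    fmt = "%H:%M:%S.%f"
    failed    = datetime.strptime("23:52:30.014964", fmt)  # kubelet.service: Failed with result 'exit-code'.
    restarted = datetime.strptime("23:52:40.265882", fmt)  # Scheduled restart job, restart counter is at 1.
    print(f"restart delay: {(restarted - failed).total_seconds():.1f} s")  # ~10.3 s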
May 14 23:52:42.974712 containerd[1476]: time="2025-05-14T23:52:42.974634299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:52:42.975759 containerd[1476]: time="2025-05-14T23:52:42.975670311Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879" May 14 23:52:42.977273 containerd[1476]: time="2025-05-14T23:52:42.977231136Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:52:42.981418 containerd[1476]: time="2025-05-14T23:52:42.981340750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:52:42.982655 containerd[1476]: time="2025-05-14T23:52:42.982616491Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 6.656918517s" May 14 23:52:42.982695 containerd[1476]: time="2025-05-14T23:52:42.982659205Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 14 23:52:42.983528 containerd[1476]: time="2025-05-14T23:52:42.983460678Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 14 23:52:45.169912 containerd[1476]: time="2025-05-14T23:52:45.169835604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:52:45.229925 containerd[1476]: time="2025-05-14T23:52:45.229832628Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589" May 14 23:52:45.250772 containerd[1476]: time="2025-05-14T23:52:45.250729877Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:52:45.272447 containerd[1476]: time="2025-05-14T23:52:45.269627685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:52:45.272447 containerd[1476]: time="2025-05-14T23:52:45.271609630Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 2.288116323s" May 14 23:52:45.272447 containerd[1476]: time="2025-05-14T23:52:45.271656524Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 14 23:52:45.272934 
containerd[1476]: time="2025-05-14T23:52:45.272853578Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 14 23:52:50.462170 containerd[1476]: time="2025-05-14T23:52:50.462025661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:52:50.463682 containerd[1476]: time="2025-05-14T23:52:50.463632214Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938" May 14 23:52:50.467768 containerd[1476]: time="2025-05-14T23:52:50.467729132Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:52:50.471006 containerd[1476]: time="2025-05-14T23:52:50.470943872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:52:50.472432 containerd[1476]: time="2025-05-14T23:52:50.472361883Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 5.199471022s" May 14 23:52:50.472494 containerd[1476]: time="2025-05-14T23:52:50.472440384Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 14 23:52:50.473128 containerd[1476]: time="2025-05-14T23:52:50.473095787Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 14 23:52:51.313073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 14 23:52:51.322822 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:52:51.566221 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:52:51.571391 (kubelet)[1966]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:52:51.625401 kubelet[1966]: E0514 23:52:51.625310 1966 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:52:51.630371 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:52:51.630627 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:52:51.631055 systemd[1]: kubelet.service: Consumed 297ms CPU time, 102.7M memory peak. May 14 23:52:56.378696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2449758642.mount: Deactivated successfully. May 14 23:53:01.881317 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 14 23:53:01.891672 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:53:04.171320 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 23:53:04.176472 (kubelet)[1991]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:53:04.314927 kubelet[1991]: E0514 23:53:04.314862 1991 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:53:04.319275 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:53:04.319514 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:53:04.319911 systemd[1]: kubelet.service: Consumed 231ms CPU time, 102.9M memory peak. May 14 23:53:04.952677 containerd[1476]: time="2025-05-14T23:53:04.952597320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:04.954228 containerd[1476]: time="2025-05-14T23:53:04.954159757Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856" May 14 23:53:04.957418 containerd[1476]: time="2025-05-14T23:53:04.957356151Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:04.961618 containerd[1476]: time="2025-05-14T23:53:04.961575953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:04.962198 containerd[1476]: time="2025-05-14T23:53:04.962149988Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 14.489021734s" May 14 23:53:04.962250 containerd[1476]: time="2025-05-14T23:53:04.962195377Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 14 23:53:04.962908 containerd[1476]: time="2025-05-14T23:53:04.962857673Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 14 23:53:06.828591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2817049724.mount: Deactivated successfully. 
May 14 23:53:09.534709 containerd[1476]: time="2025-05-14T23:53:09.534619627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:09.545960 containerd[1476]: time="2025-05-14T23:53:09.545869116Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 14 23:53:09.555614 containerd[1476]: time="2025-05-14T23:53:09.555564396Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:09.575825 containerd[1476]: time="2025-05-14T23:53:09.575745998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:09.577754 containerd[1476]: time="2025-05-14T23:53:09.577691281Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 4.614789033s" May 14 23:53:09.579941 containerd[1476]: time="2025-05-14T23:53:09.578866210Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 14 23:53:09.580505 containerd[1476]: time="2025-05-14T23:53:09.580472545Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 14 23:53:10.454284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2901902116.mount: Deactivated successfully. 
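Each "Pulled image ... in ..." record above pairs an image size with a wall-clock duration, so effective pull throughput is easy to derive; the coredns pull, for example, works out to roughly 3.8 MiB/s (illustrative arithmetic only, figures copied from the log entry above):

    # Illustrative: effective pull rate for the coredns image reported above.
    size_bytes = 18562039     # size "18562039"
    duration_s = 4.614789033  # "in 4.614789033s"
    print(f"coredns pull: {size_bytes / duration_s / 2**20:.2f} MiB/s")  # ~3.84 MiB/s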
May 14 23:53:10.463676 containerd[1476]: time="2025-05-14T23:53:10.463637516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:10.464587 containerd[1476]: time="2025-05-14T23:53:10.464545809Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 14 23:53:10.465761 containerd[1476]: time="2025-05-14T23:53:10.465731112Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:10.468839 containerd[1476]: time="2025-05-14T23:53:10.468806798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:10.469640 containerd[1476]: time="2025-05-14T23:53:10.469614306Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 889.103945ms" May 14 23:53:10.469702 containerd[1476]: time="2025-05-14T23:53:10.469647992Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 14 23:53:10.470375 containerd[1476]: time="2025-05-14T23:53:10.470353832Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 14 23:53:11.129035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3493379381.mount: Deactivated successfully. May 14 23:53:12.844611 update_engine[1461]: I20250514 23:53:12.844493 1461 update_attempter.cc:509] Updating boot flags... 
May 14 23:53:12.951483 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2114) May 14 23:53:13.039708 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2114) May 14 23:53:13.104464 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2114) May 14 23:53:14.010448 containerd[1476]: time="2025-05-14T23:53:14.010221752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:14.013251 containerd[1476]: time="2025-05-14T23:53:14.013204224Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 14 23:53:14.016259 containerd[1476]: time="2025-05-14T23:53:14.016222472Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:14.021226 containerd[1476]: time="2025-05-14T23:53:14.021168027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:14.022940 containerd[1476]: time="2025-05-14T23:53:14.022891853Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.552506552s" May 14 23:53:14.022940 containerd[1476]: time="2025-05-14T23:53:14.022934826Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 14 23:53:14.374547 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 14 23:53:14.383872 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:53:14.566302 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:53:14.572667 (kubelet)[2148]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:53:14.645829 kubelet[2148]: E0514 23:53:14.645469 2148 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:53:14.650972 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:53:14.651194 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:53:14.651628 systemd[1]: kubelet.service: Consumed 252ms CPU time, 104.3M memory peak. May 14 23:53:17.038707 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:53:17.038941 systemd[1]: kubelet.service: Consumed 252ms CPU time, 104.3M memory peak. May 14 23:53:17.050784 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:53:17.082443 systemd[1]: Reload requested from client PID 2171 ('systemctl') (unit session-7.scope)... 
May 14 23:53:17.082461 systemd[1]: Reloading... May 14 23:53:17.182792 zram_generator::config[2224]: No configuration found. May 14 23:53:18.194963 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:53:18.313229 systemd[1]: Reloading finished in 1230 ms. May 14 23:53:18.365197 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:53:18.369046 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:53:18.371199 systemd[1]: kubelet.service: Deactivated successfully. May 14 23:53:18.371514 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:53:18.371573 systemd[1]: kubelet.service: Consumed 159ms CPU time, 91.9M memory peak. May 14 23:53:18.373604 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:53:18.570151 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:53:18.575502 (kubelet)[2265]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 23:53:18.619379 kubelet[2265]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 23:53:18.619379 kubelet[2265]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 14 23:53:18.619379 kubelet[2265]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
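The kubelet start-up above warns that --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir are deprecated flags that should move into the file passed via --config. A quick, hypothetical way to spot such warnings when scanning a saved capture like this one (the "boot.log" path is an example, not taken from this log):

    # Hypothetical log-scanning helper: prints every kubelet flag-deprecation
    # warning found in a saved journal/console capture.
    import re
    import sys

    pattern = re.compile(r"Flag (--\S+) has been deprecated")
    with open(sys.argv[1] if len(sys.argv) > 1 else "boot.log") as f:
        for line in f:
            for flag in pattern.findall(line):
                print("deprecated flag seen:", flag)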
May 14 23:53:18.619847 kubelet[2265]: I0514 23:53:18.619468 2265 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 23:53:18.981224 kubelet[2265]: I0514 23:53:18.981176 2265 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 14 23:53:18.981224 kubelet[2265]: I0514 23:53:18.981206 2265 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 23:53:18.981488 kubelet[2265]: I0514 23:53:18.981466 2265 server.go:954] "Client rotation is on, will bootstrap in background" May 14 23:53:19.039978 kubelet[2265]: E0514 23:53:19.039931 2265 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.25:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" May 14 23:53:19.040246 kubelet[2265]: I0514 23:53:19.040227 2265 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 23:53:19.825166 kubelet[2265]: E0514 23:53:19.825116 2265 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 14 23:53:19.825166 kubelet[2265]: I0514 23:53:19.825154 2265 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 14 23:53:19.831877 kubelet[2265]: I0514 23:53:19.831832 2265 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 23:53:19.834869 kubelet[2265]: I0514 23:53:19.834810 2265 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 23:53:19.835096 kubelet[2265]: I0514 23:53:19.834862 2265 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 23:53:19.835232 kubelet[2265]: I0514 23:53:19.835098 2265 topology_manager.go:138] "Creating topology manager with none policy" May 14 23:53:19.835232 kubelet[2265]: I0514 23:53:19.835111 2265 container_manager_linux.go:304] "Creating device plugin manager" May 14 23:53:19.835347 kubelet[2265]: I0514 23:53:19.835324 2265 state_mem.go:36] "Initialized new in-memory state store" May 14 23:53:19.839395 kubelet[2265]: I0514 23:53:19.839359 2265 kubelet.go:446] "Attempting to sync node with API server" May 14 23:53:19.839395 kubelet[2265]: I0514 23:53:19.839388 2265 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 23:53:19.839505 kubelet[2265]: I0514 23:53:19.839411 2265 kubelet.go:352] "Adding apiserver pod source" May 14 23:53:19.839505 kubelet[2265]: I0514 23:53:19.839439 2265 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 23:53:19.842940 kubelet[2265]: I0514 23:53:19.842906 2265 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 14 23:53:19.843900 kubelet[2265]: W0514 23:53:19.843243 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused May 14 23:53:19.843900 kubelet[2265]: E0514 23:53:19.843302 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" May 14 23:53:19.843900 kubelet[2265]: I0514 23:53:19.843372 2265 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 23:53:19.844942 kubelet[2265]: W0514 23:53:19.844352 2265 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 23:53:19.845698 kubelet[2265]: W0514 23:53:19.845628 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused May 14 23:53:19.845698 kubelet[2265]: E0514 23:53:19.845699 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" May 14 23:53:19.849181 kubelet[2265]: I0514 23:53:19.849138 2265 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 14 23:53:19.849249 kubelet[2265]: I0514 23:53:19.849207 2265 server.go:1287] "Started kubelet" May 14 23:53:19.850305 kubelet[2265]: I0514 23:53:19.850251 2265 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 14 23:53:19.850468 kubelet[2265]: I0514 23:53:19.850401 2265 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 23:53:19.851559 kubelet[2265]: I0514 23:53:19.851540 2265 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 23:53:19.852096 kubelet[2265]: I0514 23:53:19.852075 2265 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 23:53:19.852461 kubelet[2265]: I0514 23:53:19.852443 2265 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 23:53:19.853174 kubelet[2265]: I0514 23:53:19.853113 2265 volume_manager.go:297] "Starting Kubelet Volume Manager" May 14 23:53:19.853284 kubelet[2265]: E0514 23:53:19.853227 2265 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 23:53:19.854011 kubelet[2265]: I0514 23:53:19.853958 2265 server.go:490] "Adding debug handlers to kubelet server" May 14 23:53:19.855613 kubelet[2265]: E0514 23:53:19.854629 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="200ms" May 14 23:53:19.855767 kubelet[2265]: I0514 23:53:19.855743 2265 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 23:53:19.855811 kubelet[2265]: I0514 23:53:19.855803 2265 reconciler.go:26] "Reconciler: start to sync state" May 14 23:53:19.856326 kubelet[2265]: W0514 23:53:19.856264 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial 
tcp 10.0.0.25:6443: connect: connection refused May 14 23:53:19.856438 kubelet[2265]: E0514 23:53:19.856328 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" May 14 23:53:19.856965 kubelet[2265]: I0514 23:53:19.856940 2265 factory.go:221] Registration of the systemd container factory successfully May 14 23:53:19.857101 kubelet[2265]: I0514 23:53:19.857046 2265 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 23:53:19.857144 kubelet[2265]: E0514 23:53:19.855407 2265 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.25:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.25:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f89e5a4d7e2f0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 23:53:19.849169648 +0000 UTC m=+1.269374178,LastTimestamp:2025-05-14 23:53:19.849169648 +0000 UTC m=+1.269374178,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 23:53:19.858561 kubelet[2265]: I0514 23:53:19.858505 2265 factory.go:221] Registration of the containerd container factory successfully May 14 23:53:19.858775 kubelet[2265]: E0514 23:53:19.858734 2265 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 23:53:19.873185 kubelet[2265]: I0514 23:53:19.873121 2265 cpu_manager.go:221] "Starting CPU manager" policy="none" May 14 23:53:19.873185 kubelet[2265]: I0514 23:53:19.873148 2265 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 14 23:53:19.873185 kubelet[2265]: I0514 23:53:19.873204 2265 state_mem.go:36] "Initialized new in-memory state store" May 14 23:53:19.878181 kubelet[2265]: I0514 23:53:19.878101 2265 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 23:53:19.879604 kubelet[2265]: I0514 23:53:19.879576 2265 policy_none.go:49] "None policy: Start" May 14 23:53:19.879647 kubelet[2265]: I0514 23:53:19.879607 2265 memory_manager.go:186] "Starting memorymanager" policy="None" May 14 23:53:19.879647 kubelet[2265]: I0514 23:53:19.879621 2265 state_mem.go:35] "Initializing new in-memory state store" May 14 23:53:19.879769 kubelet[2265]: I0514 23:53:19.879746 2265 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 23:53:19.879798 kubelet[2265]: I0514 23:53:19.879783 2265 status_manager.go:227] "Starting to sync pod status with apiserver" May 14 23:53:19.879823 kubelet[2265]: I0514 23:53:19.879813 2265 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 14 23:53:19.879823 kubelet[2265]: I0514 23:53:19.879823 2265 kubelet.go:2388] "Starting kubelet main sync loop" May 14 23:53:19.879917 kubelet[2265]: E0514 23:53:19.879893 2265 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 23:53:19.881575 kubelet[2265]: W0514 23:53:19.880881 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused May 14 23:53:19.881575 kubelet[2265]: E0514 23:53:19.880948 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" May 14 23:53:19.887997 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 23:53:19.912036 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 23:53:19.916060 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 14 23:53:19.930876 kubelet[2265]: I0514 23:53:19.930827 2265 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 23:53:19.931347 kubelet[2265]: I0514 23:53:19.931317 2265 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 23:53:19.931396 kubelet[2265]: I0514 23:53:19.931337 2265 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 23:53:19.931949 kubelet[2265]: I0514 23:53:19.931683 2265 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 23:53:19.934459 kubelet[2265]: E0514 23:53:19.933592 2265 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 14 23:53:19.934459 kubelet[2265]: E0514 23:53:19.933664 2265 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 14 23:53:19.989681 systemd[1]: Created slice kubepods-burstable-poda5d2960e61a8bb64f697231a5873128a.slice - libcontainer container kubepods-burstable-poda5d2960e61a8bb64f697231a5873128a.slice. May 14 23:53:19.997333 kubelet[2265]: E0514 23:53:19.997291 2265 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 23:53:19.999037 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. May 14 23:53:20.013142 kubelet[2265]: E0514 23:53:20.013093 2265 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 23:53:20.016495 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. 
May 14 23:53:20.018389 kubelet[2265]: E0514 23:53:20.018352 2265 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 23:53:20.033599 kubelet[2265]: I0514 23:53:20.033573 2265 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 14 23:53:20.034070 kubelet[2265]: E0514 23:53:20.034019 2265 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" May 14 23:53:20.055910 kubelet[2265]: E0514 23:53:20.055873 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="400ms" May 14 23:53:20.056940 kubelet[2265]: I0514 23:53:20.056910 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:53:20.057004 kubelet[2265]: I0514 23:53:20.056945 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a5d2960e61a8bb64f697231a5873128a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a5d2960e61a8bb64f697231a5873128a\") " pod="kube-system/kube-apiserver-localhost" May 14 23:53:20.057004 kubelet[2265]: I0514 23:53:20.056970 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a5d2960e61a8bb64f697231a5873128a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a5d2960e61a8bb64f697231a5873128a\") " pod="kube-system/kube-apiserver-localhost" May 14 23:53:20.057004 kubelet[2265]: I0514 23:53:20.056991 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a5d2960e61a8bb64f697231a5873128a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a5d2960e61a8bb64f697231a5873128a\") " pod="kube-system/kube-apiserver-localhost" May 14 23:53:20.057103 kubelet[2265]: I0514 23:53:20.057013 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:53:20.057103 kubelet[2265]: I0514 23:53:20.057032 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:53:20.057103 kubelet[2265]: I0514 23:53:20.057053 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:53:20.057103 kubelet[2265]: I0514 23:53:20.057073 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 14 23:53:20.057103 kubelet[2265]: I0514 23:53:20.057093 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:53:20.236232 kubelet[2265]: I0514 23:53:20.236172 2265 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 14 23:53:20.236695 kubelet[2265]: E0514 23:53:20.236647 2265 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" May 14 23:53:20.299732 containerd[1476]: time="2025-05-14T23:53:20.299672769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a5d2960e61a8bb64f697231a5873128a,Namespace:kube-system,Attempt:0,}" May 14 23:53:20.314620 containerd[1476]: time="2025-05-14T23:53:20.314563447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 14 23:53:20.319921 containerd[1476]: time="2025-05-14T23:53:20.319869030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 14 23:53:20.456832 kubelet[2265]: E0514 23:53:20.456774 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="800ms" May 14 23:53:20.639323 kubelet[2265]: I0514 23:53:20.639281 2265 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 14 23:53:20.640104 kubelet[2265]: E0514 23:53:20.640053 2265 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" May 14 23:53:20.859262 kubelet[2265]: W0514 23:53:20.859199 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused May 14 23:53:20.859262 kubelet[2265]: E0514 23:53:20.859254 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" May 14 
23:53:21.151822 kubelet[2265]: W0514 23:53:21.151775 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused May 14 23:53:21.151949 kubelet[2265]: E0514 23:53:21.151830 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" May 14 23:53:21.235796 kubelet[2265]: E0514 23:53:21.235752 2265 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.25:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" May 14 23:53:21.257715 kubelet[2265]: E0514 23:53:21.257672 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="1.6s" May 14 23:53:21.292362 kubelet[2265]: W0514 23:53:21.292331 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused May 14 23:53:21.292463 kubelet[2265]: E0514 23:53:21.292363 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" May 14 23:53:21.347463 kubelet[2265]: W0514 23:53:21.347376 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused May 14 23:53:21.347553 kubelet[2265]: E0514 23:53:21.347475 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" May 14 23:53:21.441682 kubelet[2265]: I0514 23:53:21.441567 2265 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 14 23:53:21.441977 kubelet[2265]: E0514 23:53:21.441944 2265 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" May 14 23:53:22.624133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2957759805.mount: Deactivated successfully. 
May 14 23:53:22.635085 containerd[1476]: time="2025-05-14T23:53:22.635001649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:53:22.638452 containerd[1476]: time="2025-05-14T23:53:22.638366485Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 14 23:53:22.640923 containerd[1476]: time="2025-05-14T23:53:22.640870711Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:53:22.644770 containerd[1476]: time="2025-05-14T23:53:22.644729265Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:53:22.645731 containerd[1476]: time="2025-05-14T23:53:22.645652972Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 14 23:53:22.646635 containerd[1476]: time="2025-05-14T23:53:22.646601530Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:53:22.647917 containerd[1476]: time="2025-05-14T23:53:22.647852396Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 14 23:53:22.648908 containerd[1476]: time="2025-05-14T23:53:22.648882090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:53:22.651217 containerd[1476]: time="2025-05-14T23:53:22.651174603Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.336485506s" May 14 23:53:22.652094 containerd[1476]: time="2025-05-14T23:53:22.652060554Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.352278368s" May 14 23:53:22.664293 containerd[1476]: time="2025-05-14T23:53:22.664225548Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.344251862s" May 14 23:53:22.802874 containerd[1476]: time="2025-05-14T23:53:22.801515851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:53:22.802874 containerd[1476]: time="2025-05-14T23:53:22.802824926Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:53:22.802874 containerd[1476]: time="2025-05-14T23:53:22.802840027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:53:22.803070 containerd[1476]: time="2025-05-14T23:53:22.802919289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:53:22.803737 containerd[1476]: time="2025-05-14T23:53:22.803117524Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:53:22.803737 containerd[1476]: time="2025-05-14T23:53:22.803177737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:53:22.803737 containerd[1476]: time="2025-05-14T23:53:22.803191205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:53:22.803737 containerd[1476]: time="2025-05-14T23:53:22.803260967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:53:22.806128 containerd[1476]: time="2025-05-14T23:53:22.804762274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:53:22.806128 containerd[1476]: time="2025-05-14T23:53:22.804906148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:53:22.806128 containerd[1476]: time="2025-05-14T23:53:22.804918804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:53:22.806128 containerd[1476]: time="2025-05-14T23:53:22.804991684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:53:22.835716 systemd[1]: Started cri-containerd-44f0fbb00d19d19ff2edea7503c9b64ad97befa4aadf7f2bf5cf20e1dcf49fde.scope - libcontainer container 44f0fbb00d19d19ff2edea7503c9b64ad97befa4aadf7f2bf5cf20e1dcf49fde. May 14 23:53:22.840637 systemd[1]: Started cri-containerd-7fd0003c14063cfb58d8b96e9f326f61923166e1f713690b69d4cfc1c7008440.scope - libcontainer container 7fd0003c14063cfb58d8b96e9f326f61923166e1f713690b69d4cfc1c7008440. May 14 23:53:22.843530 systemd[1]: Started cri-containerd-b34f7098125f4fad475500566d7af053b6e757325840b65647ce355796fe54ad.scope - libcontainer container b34f7098125f4fad475500566d7af053b6e757325840b65647ce355796fe54ad. 
May 14 23:53:22.859537 kubelet[2265]: E0514 23:53:22.859108 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="3.2s" May 14 23:53:22.881284 containerd[1476]: time="2025-05-14T23:53:22.880580802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a5d2960e61a8bb64f697231a5873128a,Namespace:kube-system,Attempt:0,} returns sandbox id \"44f0fbb00d19d19ff2edea7503c9b64ad97befa4aadf7f2bf5cf20e1dcf49fde\"" May 14 23:53:22.888896 containerd[1476]: time="2025-05-14T23:53:22.888848384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"7fd0003c14063cfb58d8b96e9f326f61923166e1f713690b69d4cfc1c7008440\"" May 14 23:53:22.889555 containerd[1476]: time="2025-05-14T23:53:22.889509916Z" level=info msg="CreateContainer within sandbox \"44f0fbb00d19d19ff2edea7503c9b64ad97befa4aadf7f2bf5cf20e1dcf49fde\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 23:53:22.893460 containerd[1476]: time="2025-05-14T23:53:22.893126755Z" level=info msg="CreateContainer within sandbox \"7fd0003c14063cfb58d8b96e9f326f61923166e1f713690b69d4cfc1c7008440\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 23:53:22.898901 containerd[1476]: time="2025-05-14T23:53:22.898853666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"b34f7098125f4fad475500566d7af053b6e757325840b65647ce355796fe54ad\"" May 14 23:53:22.901372 containerd[1476]: time="2025-05-14T23:53:22.901343123Z" level=info msg="CreateContainer within sandbox \"b34f7098125f4fad475500566d7af053b6e757325840b65647ce355796fe54ad\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 23:53:22.932855 containerd[1476]: time="2025-05-14T23:53:22.932796537Z" level=info msg="CreateContainer within sandbox \"44f0fbb00d19d19ff2edea7503c9b64ad97befa4aadf7f2bf5cf20e1dcf49fde\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3b574f7036d1b81fee9b41ad0a8c3782dc3f149ef91c663e014bf0bd308a1735\"" May 14 23:53:22.933529 containerd[1476]: time="2025-05-14T23:53:22.933506137Z" level=info msg="StartContainer for \"3b574f7036d1b81fee9b41ad0a8c3782dc3f149ef91c663e014bf0bd308a1735\"" May 14 23:53:22.938541 containerd[1476]: time="2025-05-14T23:53:22.938499819Z" level=info msg="CreateContainer within sandbox \"7fd0003c14063cfb58d8b96e9f326f61923166e1f713690b69d4cfc1c7008440\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e12cb10deadc7d43062f18e210dc19b46647ea5ce5ef821ee0f322f604ce7765\"" May 14 23:53:22.939239 containerd[1476]: time="2025-05-14T23:53:22.939204017Z" level=info msg="StartContainer for \"e12cb10deadc7d43062f18e210dc19b46647ea5ce5ef821ee0f322f604ce7765\"" May 14 23:53:22.941839 containerd[1476]: time="2025-05-14T23:53:22.941732153Z" level=info msg="CreateContainer within sandbox \"b34f7098125f4fad475500566d7af053b6e757325840b65647ce355796fe54ad\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"602e6b3a09df0327b26ae62a005f7cb32454f381f071d683c8a749ed3ce69650\"" May 14 23:53:22.942437 containerd[1476]: 
time="2025-05-14T23:53:22.942404747Z" level=info msg="StartContainer for \"602e6b3a09df0327b26ae62a005f7cb32454f381f071d683c8a749ed3ce69650\"" May 14 23:53:22.962642 systemd[1]: Started cri-containerd-3b574f7036d1b81fee9b41ad0a8c3782dc3f149ef91c663e014bf0bd308a1735.scope - libcontainer container 3b574f7036d1b81fee9b41ad0a8c3782dc3f149ef91c663e014bf0bd308a1735. May 14 23:53:22.973590 systemd[1]: Started cri-containerd-e12cb10deadc7d43062f18e210dc19b46647ea5ce5ef821ee0f322f604ce7765.scope - libcontainer container e12cb10deadc7d43062f18e210dc19b46647ea5ce5ef821ee0f322f604ce7765. May 14 23:53:22.977605 systemd[1]: Started cri-containerd-602e6b3a09df0327b26ae62a005f7cb32454f381f071d683c8a749ed3ce69650.scope - libcontainer container 602e6b3a09df0327b26ae62a005f7cb32454f381f071d683c8a749ed3ce69650. May 14 23:53:23.022802 containerd[1476]: time="2025-05-14T23:53:23.022758728Z" level=info msg="StartContainer for \"3b574f7036d1b81fee9b41ad0a8c3782dc3f149ef91c663e014bf0bd308a1735\" returns successfully" May 14 23:53:23.044572 kubelet[2265]: I0514 23:53:23.044520 2265 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 14 23:53:23.045075 kubelet[2265]: E0514 23:53:23.045033 2265 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" May 14 23:53:23.065765 containerd[1476]: time="2025-05-14T23:53:23.065689202Z" level=info msg="StartContainer for \"e12cb10deadc7d43062f18e210dc19b46647ea5ce5ef821ee0f322f604ce7765\" returns successfully" May 14 23:53:23.065924 containerd[1476]: time="2025-05-14T23:53:23.065799637Z" level=info msg="StartContainer for \"602e6b3a09df0327b26ae62a005f7cb32454f381f071d683c8a749ed3ce69650\" returns successfully" May 14 23:53:23.895142 kubelet[2265]: E0514 23:53:23.895100 2265 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 23:53:23.898088 kubelet[2265]: E0514 23:53:23.897917 2265 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 23:53:23.899584 kubelet[2265]: E0514 23:53:23.899560 2265 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 23:53:24.469745 kubelet[2265]: E0514 23:53:24.469693 2265 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 14 23:53:24.816753 kubelet[2265]: E0514 23:53:24.816625 2265 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 14 23:53:24.846230 kubelet[2265]: I0514 23:53:24.846174 2265 apiserver.go:52] "Watching apiserver" May 14 23:53:24.856479 kubelet[2265]: I0514 23:53:24.856451 2265 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 23:53:24.901180 kubelet[2265]: E0514 23:53:24.901154 2265 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 23:53:24.901656 kubelet[2265]: E0514 23:53:24.901253 2265 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"localhost\" not found" node="localhost" May 14 23:53:25.256777 kubelet[2265]: E0514 23:53:25.256706 2265 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 14 23:53:25.902569 kubelet[2265]: E0514 23:53:25.902532 2265 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 23:53:26.129448 kubelet[2265]: E0514 23:53:26.129365 2265 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 14 23:53:26.246676 kubelet[2265]: I0514 23:53:26.246544 2265 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 14 23:53:26.256569 kubelet[2265]: I0514 23:53:26.256518 2265 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 14 23:53:26.354698 kubelet[2265]: I0514 23:53:26.354638 2265 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 14 23:53:26.365499 kubelet[2265]: I0514 23:53:26.365458 2265 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 14 23:53:26.370445 kubelet[2265]: I0514 23:53:26.370397 2265 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 14 23:53:27.533367 systemd[1]: Reload requested from client PID 2546 ('systemctl') (unit session-7.scope)... May 14 23:53:27.533385 systemd[1]: Reloading... May 14 23:53:27.646462 zram_generator::config[2590]: No configuration found. May 14 23:53:27.790053 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:53:27.944311 systemd[1]: Reloading finished in 410 ms. May 14 23:53:27.969073 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:53:27.985739 systemd[1]: kubelet.service: Deactivated successfully. May 14 23:53:27.986109 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:53:27.986170 systemd[1]: kubelet.service: Consumed 1.038s CPU time, 127.4M memory peak. May 14 23:53:27.991834 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:53:28.211545 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:53:28.216105 (kubelet)[2635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 23:53:28.272483 kubelet[2635]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 23:53:28.272483 kubelet[2635]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 14 23:53:28.272483 kubelet[2635]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 23:53:28.272905 kubelet[2635]: I0514 23:53:28.272536 2635 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 23:53:28.279630 kubelet[2635]: I0514 23:53:28.279584 2635 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 14 23:53:28.279630 kubelet[2635]: I0514 23:53:28.279622 2635 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 23:53:28.279996 kubelet[2635]: I0514 23:53:28.279968 2635 server.go:954] "Client rotation is on, will bootstrap in background" May 14 23:53:28.281610 kubelet[2635]: I0514 23:53:28.281583 2635 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 23:53:28.285409 kubelet[2635]: I0514 23:53:28.285360 2635 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 23:53:28.291042 kubelet[2635]: E0514 23:53:28.290981 2635 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 14 23:53:28.291042 kubelet[2635]: I0514 23:53:28.291026 2635 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 14 23:53:28.297858 kubelet[2635]: I0514 23:53:28.297818 2635 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 23:53:28.298184 kubelet[2635]: I0514 23:53:28.298133 2635 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 23:53:28.298406 kubelet[2635]: I0514 23:53:28.298176 2635 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 23:53:28.298521 kubelet[2635]: I0514 23:53:28.298414 2635 topology_manager.go:138] "Creating 
topology manager with none policy" May 14 23:53:28.298521 kubelet[2635]: I0514 23:53:28.298449 2635 container_manager_linux.go:304] "Creating device plugin manager" May 14 23:53:28.298521 kubelet[2635]: I0514 23:53:28.298516 2635 state_mem.go:36] "Initialized new in-memory state store" May 14 23:53:28.298781 kubelet[2635]: I0514 23:53:28.298755 2635 kubelet.go:446] "Attempting to sync node with API server" May 14 23:53:28.298781 kubelet[2635]: I0514 23:53:28.298774 2635 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 23:53:28.298836 kubelet[2635]: I0514 23:53:28.298792 2635 kubelet.go:352] "Adding apiserver pod source" May 14 23:53:28.298836 kubelet[2635]: I0514 23:53:28.298804 2635 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 23:53:28.300279 kubelet[2635]: I0514 23:53:28.300245 2635 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 14 23:53:28.305482 kubelet[2635]: I0514 23:53:28.305443 2635 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 23:53:28.308106 kubelet[2635]: I0514 23:53:28.306074 2635 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 14 23:53:28.308106 kubelet[2635]: I0514 23:53:28.306111 2635 server.go:1287] "Started kubelet" May 14 23:53:28.308106 kubelet[2635]: I0514 23:53:28.306411 2635 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 14 23:53:28.308106 kubelet[2635]: I0514 23:53:28.306489 2635 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 23:53:28.308106 kubelet[2635]: I0514 23:53:28.306785 2635 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 23:53:28.308106 kubelet[2635]: I0514 23:53:28.307411 2635 server.go:490] "Adding debug handlers to kubelet server" May 14 23:53:28.313564 kubelet[2635]: I0514 23:53:28.313534 2635 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 23:53:28.314473 kubelet[2635]: I0514 23:53:28.314439 2635 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 23:53:28.318361 kubelet[2635]: I0514 23:53:28.318328 2635 volume_manager.go:297] "Starting Kubelet Volume Manager" May 14 23:53:28.318504 kubelet[2635]: I0514 23:53:28.318486 2635 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 23:53:28.318719 kubelet[2635]: I0514 23:53:28.318700 2635 reconciler.go:26] "Reconciler: start to sync state" May 14 23:53:28.321403 kubelet[2635]: I0514 23:53:28.320663 2635 factory.go:221] Registration of the containerd container factory successfully May 14 23:53:28.322113 kubelet[2635]: I0514 23:53:28.321501 2635 factory.go:221] Registration of the systemd container factory successfully May 14 23:53:28.322113 kubelet[2635]: I0514 23:53:28.321600 2635 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 23:53:28.331602 kubelet[2635]: E0514 23:53:28.331552 2635 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 23:53:28.339568 kubelet[2635]: I0514 23:53:28.339224 2635 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 23:53:28.343121 kubelet[2635]: I0514 23:53:28.342513 2635 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 23:53:28.343121 kubelet[2635]: I0514 23:53:28.342557 2635 status_manager.go:227] "Starting to sync pod status with apiserver" May 14 23:53:28.343121 kubelet[2635]: I0514 23:53:28.342578 2635 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 14 23:53:28.343121 kubelet[2635]: I0514 23:53:28.342585 2635 kubelet.go:2388] "Starting kubelet main sync loop" May 14 23:53:28.343121 kubelet[2635]: E0514 23:53:28.342633 2635 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 23:53:28.381987 kubelet[2635]: I0514 23:53:28.381933 2635 cpu_manager.go:221] "Starting CPU manager" policy="none" May 14 23:53:28.381987 kubelet[2635]: I0514 23:53:28.381968 2635 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 14 23:53:28.381987 kubelet[2635]: I0514 23:53:28.381989 2635 state_mem.go:36] "Initialized new in-memory state store" May 14 23:53:28.382217 kubelet[2635]: I0514 23:53:28.382139 2635 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 23:53:28.382217 kubelet[2635]: I0514 23:53:28.382149 2635 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 23:53:28.382217 kubelet[2635]: I0514 23:53:28.382172 2635 policy_none.go:49] "None policy: Start" May 14 23:53:28.382217 kubelet[2635]: I0514 23:53:28.382181 2635 memory_manager.go:186] "Starting memorymanager" policy="None" May 14 23:53:28.382217 kubelet[2635]: I0514 23:53:28.382191 2635 state_mem.go:35] "Initializing new in-memory state store" May 14 23:53:28.382381 kubelet[2635]: I0514 23:53:28.382281 2635 state_mem.go:75] "Updated machine memory state" May 14 23:53:28.387721 kubelet[2635]: I0514 23:53:28.387276 2635 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 23:53:28.387721 kubelet[2635]: I0514 23:53:28.387464 2635 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 23:53:28.387721 kubelet[2635]: I0514 23:53:28.387476 2635 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 23:53:28.387721 kubelet[2635]: I0514 23:53:28.387639 2635 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 23:53:28.388666 kubelet[2635]: E0514 23:53:28.388631 2635 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 14 23:53:28.444329 kubelet[2635]: I0514 23:53:28.443528 2635 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 14 23:53:28.444329 kubelet[2635]: I0514 23:53:28.443587 2635 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 14 23:53:28.444661 kubelet[2635]: I0514 23:53:28.444600 2635 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 14 23:53:28.451877 kubelet[2635]: E0514 23:53:28.451835 2635 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 14 23:53:28.452646 kubelet[2635]: E0514 23:53:28.452601 2635 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 14 23:53:28.455382 kubelet[2635]: E0514 23:53:28.452768 2635 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 14 23:53:28.493949 kubelet[2635]: I0514 23:53:28.493784 2635 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 14 23:53:28.503295 kubelet[2635]: I0514 23:53:28.503192 2635 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 14 23:53:28.503295 kubelet[2635]: I0514 23:53:28.503303 2635 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 14 23:53:28.519927 kubelet[2635]: I0514 23:53:28.519853 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a5d2960e61a8bb64f697231a5873128a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a5d2960e61a8bb64f697231a5873128a\") " pod="kube-system/kube-apiserver-localhost" May 14 23:53:28.519927 kubelet[2635]: I0514 23:53:28.519912 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:53:28.520148 kubelet[2635]: I0514 23:53:28.519944 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:53:28.520148 kubelet[2635]: I0514 23:53:28.519988 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a5d2960e61a8bb64f697231a5873128a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a5d2960e61a8bb64f697231a5873128a\") " pod="kube-system/kube-apiserver-localhost" May 14 23:53:28.520148 kubelet[2635]: I0514 23:53:28.520021 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a5d2960e61a8bb64f697231a5873128a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a5d2960e61a8bb64f697231a5873128a\") 
" pod="kube-system/kube-apiserver-localhost" May 14 23:53:28.520148 kubelet[2635]: I0514 23:53:28.520051 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:53:28.520148 kubelet[2635]: I0514 23:53:28.520079 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:53:28.520301 kubelet[2635]: I0514 23:53:28.520148 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:53:28.520301 kubelet[2635]: I0514 23:53:28.520247 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 14 23:53:29.300276 kubelet[2635]: I0514 23:53:29.300214 2635 apiserver.go:52] "Watching apiserver" May 14 23:53:29.319580 kubelet[2635]: I0514 23:53:29.319527 2635 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 23:53:29.357000 kubelet[2635]: I0514 23:53:29.356942 2635 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 14 23:53:29.366457 kubelet[2635]: E0514 23:53:29.365765 2635 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 14 23:53:29.395581 kubelet[2635]: I0514 23:53:29.395506 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.39548194 podStartE2EDuration="3.39548194s" podCreationTimestamp="2025-05-14 23:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:53:29.386222565 +0000 UTC m=+1.165453100" watchObservedRunningTime="2025-05-14 23:53:29.39548194 +0000 UTC m=+1.174712475" May 14 23:53:29.395810 kubelet[2635]: I0514 23:53:29.395632 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.395626737 podStartE2EDuration="3.395626737s" podCreationTimestamp="2025-05-14 23:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:53:29.395453213 +0000 UTC m=+1.174683748" watchObservedRunningTime="2025-05-14 23:53:29.395626737 +0000 UTC m=+1.174857272" May 14 23:53:29.416161 kubelet[2635]: I0514 23:53:29.416100 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.41607949 podStartE2EDuration="3.41607949s" podCreationTimestamp="2025-05-14 23:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:53:29.406776358 +0000 UTC m=+1.186006893" watchObservedRunningTime="2025-05-14 23:53:29.41607949 +0000 UTC m=+1.195310026" May 14 23:53:32.566744 kubelet[2635]: I0514 23:53:32.566674 2635 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 23:53:32.567203 containerd[1476]: time="2025-05-14T23:53:32.567135247Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 23:53:32.567479 kubelet[2635]: I0514 23:53:32.567387 2635 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 23:53:33.312071 sudo[1663]: pam_unix(sudo:session): session closed for user root May 14 23:53:33.313783 sshd[1662]: Connection closed by 10.0.0.1 port 50638 May 14 23:53:33.314506 sshd-session[1659]: pam_unix(sshd:session): session closed for user core May 14 23:53:33.318405 systemd[1]: sshd@6-10.0.0.25:22-10.0.0.1:50638.service: Deactivated successfully. May 14 23:53:33.322235 systemd[1]: session-7.scope: Deactivated successfully. May 14 23:53:33.322658 systemd[1]: session-7.scope: Consumed 5.841s CPU time, 208.7M memory peak. May 14 23:53:33.325397 systemd-logind[1460]: Session 7 logged out. Waiting for processes to exit. May 14 23:53:33.326661 systemd-logind[1460]: Removed session 7. May 14 23:53:33.429220 systemd[1]: Created slice kubepods-besteffort-pod81bb1714_3ce1_4cd9_8bb5_43a7a40bd6c2.slice - libcontainer container kubepods-besteffort-pod81bb1714_3ce1_4cd9_8bb5_43a7a40bd6c2.slice. 
May 14 23:53:33.446024 kubelet[2635]: I0514 23:53:33.445955 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/81bb1714-3ce1-4cd9-8bb5-43a7a40bd6c2-kube-proxy\") pod \"kube-proxy-rg5rg\" (UID: \"81bb1714-3ce1-4cd9-8bb5-43a7a40bd6c2\") " pod="kube-system/kube-proxy-rg5rg" May 14 23:53:33.446024 kubelet[2635]: I0514 23:53:33.446010 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81bb1714-3ce1-4cd9-8bb5-43a7a40bd6c2-xtables-lock\") pod \"kube-proxy-rg5rg\" (UID: \"81bb1714-3ce1-4cd9-8bb5-43a7a40bd6c2\") " pod="kube-system/kube-proxy-rg5rg" May 14 23:53:33.446024 kubelet[2635]: I0514 23:53:33.446033 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81bb1714-3ce1-4cd9-8bb5-43a7a40bd6c2-lib-modules\") pod \"kube-proxy-rg5rg\" (UID: \"81bb1714-3ce1-4cd9-8bb5-43a7a40bd6c2\") " pod="kube-system/kube-proxy-rg5rg" May 14 23:53:33.446267 kubelet[2635]: I0514 23:53:33.446062 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sc7p\" (UniqueName: \"kubernetes.io/projected/81bb1714-3ce1-4cd9-8bb5-43a7a40bd6c2-kube-api-access-4sc7p\") pod \"kube-proxy-rg5rg\" (UID: \"81bb1714-3ce1-4cd9-8bb5-43a7a40bd6c2\") " pod="kube-system/kube-proxy-rg5rg" May 14 23:53:33.647591 kubelet[2635]: I0514 23:53:33.647543 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/abba8c1f-08bc-4ec5-8208-16d4ff37d97f-var-lib-calico\") pod \"tigera-operator-789496d6f5-7nnpq\" (UID: \"abba8c1f-08bc-4ec5-8208-16d4ff37d97f\") " pod="tigera-operator/tigera-operator-789496d6f5-7nnpq" May 14 23:53:33.647591 kubelet[2635]: I0514 23:53:33.647593 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpcfb\" (UniqueName: \"kubernetes.io/projected/abba8c1f-08bc-4ec5-8208-16d4ff37d97f-kube-api-access-lpcfb\") pod \"tigera-operator-789496d6f5-7nnpq\" (UID: \"abba8c1f-08bc-4ec5-8208-16d4ff37d97f\") " pod="tigera-operator/tigera-operator-789496d6f5-7nnpq" May 14 23:53:33.652004 systemd[1]: Created slice kubepods-besteffort-podabba8c1f_08bc_4ec5_8208_16d4ff37d97f.slice - libcontainer container kubepods-besteffort-podabba8c1f_08bc_4ec5_8208_16d4ff37d97f.slice. May 14 23:53:33.740855 containerd[1476]: time="2025-05-14T23:53:33.740647304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rg5rg,Uid:81bb1714-3ce1-4cd9-8bb5-43a7a40bd6c2,Namespace:kube-system,Attempt:0,}" May 14 23:53:33.770053 containerd[1476]: time="2025-05-14T23:53:33.769836023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:53:33.770053 containerd[1476]: time="2025-05-14T23:53:33.769893467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:53:33.770053 containerd[1476]: time="2025-05-14T23:53:33.769903487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:53:33.770053 containerd[1476]: time="2025-05-14T23:53:33.770003043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:53:33.795717 systemd[1]: Started cri-containerd-52125d22efcbfbc60a6fd2b52fb4f91b4e8684d1df449d71695cfb430b96e063.scope - libcontainer container 52125d22efcbfbc60a6fd2b52fb4f91b4e8684d1df449d71695cfb430b96e063. May 14 23:53:33.823645 containerd[1476]: time="2025-05-14T23:53:33.823593509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rg5rg,Uid:81bb1714-3ce1-4cd9-8bb5-43a7a40bd6c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"52125d22efcbfbc60a6fd2b52fb4f91b4e8684d1df449d71695cfb430b96e063\"" May 14 23:53:33.826368 containerd[1476]: time="2025-05-14T23:53:33.826331462Z" level=info msg="CreateContainer within sandbox \"52125d22efcbfbc60a6fd2b52fb4f91b4e8684d1df449d71695cfb430b96e063\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 23:53:33.849174 containerd[1476]: time="2025-05-14T23:53:33.849113555Z" level=info msg="CreateContainer within sandbox \"52125d22efcbfbc60a6fd2b52fb4f91b4e8684d1df449d71695cfb430b96e063\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"24a194cc3915fb0a58da6d75370da6bed7f1bac2b9b3d7c4fac1102bf33aca2b\"" May 14 23:53:33.850067 containerd[1476]: time="2025-05-14T23:53:33.850019536Z" level=info msg="StartContainer for \"24a194cc3915fb0a58da6d75370da6bed7f1bac2b9b3d7c4fac1102bf33aca2b\"" May 14 23:53:33.885747 systemd[1]: Started cri-containerd-24a194cc3915fb0a58da6d75370da6bed7f1bac2b9b3d7c4fac1102bf33aca2b.scope - libcontainer container 24a194cc3915fb0a58da6d75370da6bed7f1bac2b9b3d7c4fac1102bf33aca2b. May 14 23:53:33.925543 containerd[1476]: time="2025-05-14T23:53:33.923341061Z" level=info msg="StartContainer for \"24a194cc3915fb0a58da6d75370da6bed7f1bac2b9b3d7c4fac1102bf33aca2b\" returns successfully" May 14 23:53:33.956014 containerd[1476]: time="2025-05-14T23:53:33.955958170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-7nnpq,Uid:abba8c1f-08bc-4ec5-8208-16d4ff37d97f,Namespace:tigera-operator,Attempt:0,}" May 14 23:53:33.995056 containerd[1476]: time="2025-05-14T23:53:33.994197575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:53:33.995056 containerd[1476]: time="2025-05-14T23:53:33.994850064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:53:33.995056 containerd[1476]: time="2025-05-14T23:53:33.994862328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:53:33.995056 containerd[1476]: time="2025-05-14T23:53:33.994954811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:53:34.021683 systemd[1]: Started cri-containerd-0b2090912fbcafaccf6a8efe850ec85d809d11790e7d7f8e2f8bc34dba571e43.scope - libcontainer container 0b2090912fbcafaccf6a8efe850ec85d809d11790e7d7f8e2f8bc34dba571e43. 
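The containerd entries above walk through the standard CRI sequence for kube-proxy: RunPodSandbox returns sandbox id 52125d22…, CreateContainer inside that sandbox returns 24a194cc…, and StartContainer reports success. When reading a dump like this one, it is often easiest to follow a single workload by filtering the journal on one of those ids; the helper below is a hypothetical standalone sketch (standard library only, not an existing tool) that does exactly that.

```go
// Hypothetical helper for reading journal dumps like this one: print only the
// lines that mention a given container or sandbox id, so one workload's
// RunPodSandbox -> CreateContainer -> StartContainer sequence can be traced.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	if len(os.Args) != 3 {
		fmt.Fprintln(os.Stderr, "usage: tracelog <journal-file> <id-prefix>")
		os.Exit(1)
	}
	f, err := os.Open(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if strings.Contains(sc.Text(), os.Args[2]) {
			fmt.Println(sc.Text())
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

Run as, for example, `go run tracelog.go node.log 52125d22` (file names hypothetical) to pull out just the kube-proxy sandbox lifecycle from the surrounding noise.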
May 14 23:53:34.064275 containerd[1476]: time="2025-05-14T23:53:34.064221742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-7nnpq,Uid:abba8c1f-08bc-4ec5-8208-16d4ff37d97f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0b2090912fbcafaccf6a8efe850ec85d809d11790e7d7f8e2f8bc34dba571e43\"" May 14 23:53:34.069018 containerd[1476]: time="2025-05-14T23:53:34.068791293Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 14 23:53:34.386720 kubelet[2635]: I0514 23:53:34.386638 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rg5rg" podStartSLOduration=1.386615009 podStartE2EDuration="1.386615009s" podCreationTimestamp="2025-05-14 23:53:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:53:34.384944611 +0000 UTC m=+6.164175156" watchObservedRunningTime="2025-05-14 23:53:34.386615009 +0000 UTC m=+6.165845544" May 14 23:53:35.735658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2311378261.mount: Deactivated successfully. May 14 23:53:36.083508 containerd[1476]: time="2025-05-14T23:53:36.083381922Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:36.084310 containerd[1476]: time="2025-05-14T23:53:36.084270622Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 14 23:53:36.085461 containerd[1476]: time="2025-05-14T23:53:36.085433953Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:36.087762 containerd[1476]: time="2025-05-14T23:53:36.087712238Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:36.088241 containerd[1476]: time="2025-05-14T23:53:36.088209297Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 2.019378646s" May 14 23:53:36.088241 containerd[1476]: time="2025-05-14T23:53:36.088235719Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 14 23:53:36.090124 containerd[1476]: time="2025-05-14T23:53:36.090103717Z" level=info msg="CreateContainer within sandbox \"0b2090912fbcafaccf6a8efe850ec85d809d11790e7d7f8e2f8bc34dba571e43\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 14 23:53:36.108188 containerd[1476]: time="2025-05-14T23:53:36.108143166Z" level=info msg="CreateContainer within sandbox \"0b2090912fbcafaccf6a8efe850ec85d809d11790e7d7f8e2f8bc34dba571e43\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"85a973abc3991e8d37369618b3b3cc3f3616f67319dc145c911f2ee78cf7ea8f\"" May 14 23:53:36.110048 containerd[1476]: time="2025-05-14T23:53:36.108921979Z" level=info msg="StartContainer for \"85a973abc3991e8d37369618b3b3cc3f3616f67319dc145c911f2ee78cf7ea8f\"" May 
14 23:53:36.137603 systemd[1]: Started cri-containerd-85a973abc3991e8d37369618b3b3cc3f3616f67319dc145c911f2ee78cf7ea8f.scope - libcontainer container 85a973abc3991e8d37369618b3b3cc3f3616f67319dc145c911f2ee78cf7ea8f. May 14 23:53:36.165884 containerd[1476]: time="2025-05-14T23:53:36.165844307Z" level=info msg="StartContainer for \"85a973abc3991e8d37369618b3b3cc3f3616f67319dc145c911f2ee78cf7ea8f\" returns successfully" May 14 23:53:36.382533 kubelet[2635]: I0514 23:53:36.382447 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-7nnpq" podStartSLOduration=1.359206866 podStartE2EDuration="3.382386612s" podCreationTimestamp="2025-05-14 23:53:33 +0000 UTC" firstStartedPulling="2025-05-14 23:53:34.065868232 +0000 UTC m=+5.845098767" lastFinishedPulling="2025-05-14 23:53:36.089047978 +0000 UTC m=+7.868278513" observedRunningTime="2025-05-14 23:53:36.381968317 +0000 UTC m=+8.161198852" watchObservedRunningTime="2025-05-14 23:53:36.382386612 +0000 UTC m=+8.161617147" May 14 23:53:39.257043 systemd[1]: Created slice kubepods-besteffort-pod57d370ed_0c7d_480d_9ba6_6e05b35be5fd.slice - libcontainer container kubepods-besteffort-pod57d370ed_0c7d_480d_9ba6_6e05b35be5fd.slice. May 14 23:53:39.326377 systemd[1]: Created slice kubepods-besteffort-podfdf4415a_e137_4fff_9d5b_2e3fc5b57c18.slice - libcontainer container kubepods-besteffort-podfdf4415a_e137_4fff_9d5b_2e3fc5b57c18.slice. May 14 23:53:39.355199 kubelet[2635]: E0514 23:53:39.354968 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zx5hz" podUID="c3b238b4-7acc-401a-8dae-17e6c81aeb42" May 14 23:53:39.381798 kubelet[2635]: I0514 23:53:39.381454 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57d370ed-0c7d-480d-9ba6-6e05b35be5fd-tigera-ca-bundle\") pod \"calico-typha-7f54b9d5cf-vfhmr\" (UID: \"57d370ed-0c7d-480d-9ba6-6e05b35be5fd\") " pod="calico-system/calico-typha-7f54b9d5cf-vfhmr" May 14 23:53:39.381798 kubelet[2635]: I0514 23:53:39.381509 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/57d370ed-0c7d-480d-9ba6-6e05b35be5fd-typha-certs\") pod \"calico-typha-7f54b9d5cf-vfhmr\" (UID: \"57d370ed-0c7d-480d-9ba6-6e05b35be5fd\") " pod="calico-system/calico-typha-7f54b9d5cf-vfhmr" May 14 23:53:39.381798 kubelet[2635]: I0514 23:53:39.381536 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lgcr\" (UniqueName: \"kubernetes.io/projected/57d370ed-0c7d-480d-9ba6-6e05b35be5fd-kube-api-access-8lgcr\") pod \"calico-typha-7f54b9d5cf-vfhmr\" (UID: \"57d370ed-0c7d-480d-9ba6-6e05b35be5fd\") " pod="calico-system/calico-typha-7f54b9d5cf-vfhmr" May 14 23:53:39.482146 kubelet[2635]: I0514 23:53:39.482090 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c3b238b4-7acc-401a-8dae-17e6c81aeb42-socket-dir\") pod \"csi-node-driver-zx5hz\" (UID: \"c3b238b4-7acc-401a-8dae-17e6c81aeb42\") " pod="calico-system/csi-node-driver-zx5hz" May 14 23:53:39.482146 kubelet[2635]: I0514 23:53:39.482148 2635 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdf4415a-e137-4fff-9d5b-2e3fc5b57c18-lib-modules\") pod \"calico-node-hznl7\" (UID: \"fdf4415a-e137-4fff-9d5b-2e3fc5b57c18\") " pod="calico-system/calico-node-hznl7" May 14 23:53:39.482334 kubelet[2635]: I0514 23:53:39.482167 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c3b238b4-7acc-401a-8dae-17e6c81aeb42-varrun\") pod \"csi-node-driver-zx5hz\" (UID: \"c3b238b4-7acc-401a-8dae-17e6c81aeb42\") " pod="calico-system/csi-node-driver-zx5hz" May 14 23:53:39.482334 kubelet[2635]: I0514 23:53:39.482211 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fdf4415a-e137-4fff-9d5b-2e3fc5b57c18-var-run-calico\") pod \"calico-node-hznl7\" (UID: \"fdf4415a-e137-4fff-9d5b-2e3fc5b57c18\") " pod="calico-system/calico-node-hznl7" May 14 23:53:39.482334 kubelet[2635]: I0514 23:53:39.482231 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bxtx\" (UniqueName: \"kubernetes.io/projected/c3b238b4-7acc-401a-8dae-17e6c81aeb42-kube-api-access-9bxtx\") pod \"csi-node-driver-zx5hz\" (UID: \"c3b238b4-7acc-401a-8dae-17e6c81aeb42\") " pod="calico-system/csi-node-driver-zx5hz" May 14 23:53:39.482334 kubelet[2635]: I0514 23:53:39.482254 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fdf4415a-e137-4fff-9d5b-2e3fc5b57c18-cni-net-dir\") pod \"calico-node-hznl7\" (UID: \"fdf4415a-e137-4fff-9d5b-2e3fc5b57c18\") " pod="calico-system/calico-node-hznl7" May 14 23:53:39.482334 kubelet[2635]: I0514 23:53:39.482281 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fdf4415a-e137-4fff-9d5b-2e3fc5b57c18-policysync\") pod \"calico-node-hznl7\" (UID: \"fdf4415a-e137-4fff-9d5b-2e3fc5b57c18\") " pod="calico-system/calico-node-hznl7" May 14 23:53:39.482478 kubelet[2635]: I0514 23:53:39.482301 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fdf4415a-e137-4fff-9d5b-2e3fc5b57c18-cni-log-dir\") pod \"calico-node-hznl7\" (UID: \"fdf4415a-e137-4fff-9d5b-2e3fc5b57c18\") " pod="calico-system/calico-node-hznl7" May 14 23:53:39.482478 kubelet[2635]: I0514 23:53:39.482317 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fdf4415a-e137-4fff-9d5b-2e3fc5b57c18-tigera-ca-bundle\") pod \"calico-node-hznl7\" (UID: \"fdf4415a-e137-4fff-9d5b-2e3fc5b57c18\") " pod="calico-system/calico-node-hznl7" May 14 23:53:39.482478 kubelet[2635]: I0514 23:53:39.482336 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fdf4415a-e137-4fff-9d5b-2e3fc5b57c18-node-certs\") pod \"calico-node-hznl7\" (UID: \"fdf4415a-e137-4fff-9d5b-2e3fc5b57c18\") " pod="calico-system/calico-node-hznl7" May 14 23:53:39.482478 kubelet[2635]: I0514 23:53:39.482358 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fdf4415a-e137-4fff-9d5b-2e3fc5b57c18-cni-bin-dir\") pod \"calico-node-hznl7\" (UID: \"fdf4415a-e137-4fff-9d5b-2e3fc5b57c18\") " pod="calico-system/calico-node-hznl7" May 14 23:53:39.482478 kubelet[2635]: I0514 23:53:39.482374 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3b238b4-7acc-401a-8dae-17e6c81aeb42-kubelet-dir\") pod \"csi-node-driver-zx5hz\" (UID: \"c3b238b4-7acc-401a-8dae-17e6c81aeb42\") " pod="calico-system/csi-node-driver-zx5hz" May 14 23:53:39.482592 kubelet[2635]: I0514 23:53:39.482394 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdf4415a-e137-4fff-9d5b-2e3fc5b57c18-xtables-lock\") pod \"calico-node-hznl7\" (UID: \"fdf4415a-e137-4fff-9d5b-2e3fc5b57c18\") " pod="calico-system/calico-node-hznl7" May 14 23:53:39.482592 kubelet[2635]: I0514 23:53:39.482410 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fdf4415a-e137-4fff-9d5b-2e3fc5b57c18-var-lib-calico\") pod \"calico-node-hznl7\" (UID: \"fdf4415a-e137-4fff-9d5b-2e3fc5b57c18\") " pod="calico-system/calico-node-hznl7" May 14 23:53:39.482592 kubelet[2635]: I0514 23:53:39.482444 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwkh6\" (UniqueName: \"kubernetes.io/projected/fdf4415a-e137-4fff-9d5b-2e3fc5b57c18-kube-api-access-lwkh6\") pod \"calico-node-hznl7\" (UID: \"fdf4415a-e137-4fff-9d5b-2e3fc5b57c18\") " pod="calico-system/calico-node-hznl7" May 14 23:53:39.482592 kubelet[2635]: I0514 23:53:39.482465 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c3b238b4-7acc-401a-8dae-17e6c81aeb42-registration-dir\") pod \"csi-node-driver-zx5hz\" (UID: \"c3b238b4-7acc-401a-8dae-17e6c81aeb42\") " pod="calico-system/csi-node-driver-zx5hz" May 14 23:53:39.482592 kubelet[2635]: I0514 23:53:39.482495 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fdf4415a-e137-4fff-9d5b-2e3fc5b57c18-flexvol-driver-host\") pod \"calico-node-hznl7\" (UID: \"fdf4415a-e137-4fff-9d5b-2e3fc5b57c18\") " pod="calico-system/calico-node-hznl7" May 14 23:53:39.560212 containerd[1476]: time="2025-05-14T23:53:39.560079543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f54b9d5cf-vfhmr,Uid:57d370ed-0c7d-480d-9ba6-6e05b35be5fd,Namespace:calico-system,Attempt:0,}" May 14 23:53:39.682545 kubelet[2635]: E0514 23:53:39.682125 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:39.682545 kubelet[2635]: W0514 23:53:39.682152 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:39.682545 kubelet[2635]: E0514 23:53:39.682182 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:53:39.685660 kubelet[2635]: E0514 23:53:39.685331 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:39.685660 kubelet[2635]: W0514 23:53:39.685347 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:39.685660 kubelet[2635]: E0514 23:53:39.685367 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:39.685660 kubelet[2635]: E0514 23:53:39.685572 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:39.685660 kubelet[2635]: W0514 23:53:39.685582 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:39.685660 kubelet[2635]: E0514 23:53:39.685591 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:39.686541 kubelet[2635]: E0514 23:53:39.686526 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:39.686541 kubelet[2635]: W0514 23:53:39.686538 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:39.686653 kubelet[2635]: E0514 23:53:39.686549 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:39.765890 containerd[1476]: time="2025-05-14T23:53:39.765794964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:53:39.765890 containerd[1476]: time="2025-05-14T23:53:39.765863528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:53:39.765890 containerd[1476]: time="2025-05-14T23:53:39.765876794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:53:39.766068 containerd[1476]: time="2025-05-14T23:53:39.765965317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:53:39.789583 systemd[1]: Started cri-containerd-7945b3692b1bbacf780b90107cca94be02a47e1216f5471a36a1f0da9d34af71.scope - libcontainer container 7945b3692b1bbacf780b90107cca94be02a47e1216f5471a36a1f0da9d34af71. 
May 14 23:53:39.829992 containerd[1476]: time="2025-05-14T23:53:39.829873867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f54b9d5cf-vfhmr,Uid:57d370ed-0c7d-480d-9ba6-6e05b35be5fd,Namespace:calico-system,Attempt:0,} returns sandbox id \"7945b3692b1bbacf780b90107cca94be02a47e1216f5471a36a1f0da9d34af71\"" May 14 23:53:39.831825 containerd[1476]: time="2025-05-14T23:53:39.831796699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 14 23:53:39.931387 containerd[1476]: time="2025-05-14T23:53:39.931340249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hznl7,Uid:fdf4415a-e137-4fff-9d5b-2e3fc5b57c18,Namespace:calico-system,Attempt:0,}" May 14 23:53:40.042054 containerd[1476]: time="2025-05-14T23:53:40.041946594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:53:40.042054 containerd[1476]: time="2025-05-14T23:53:40.042011190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:53:40.042054 containerd[1476]: time="2025-05-14T23:53:40.042039386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:53:40.042291 containerd[1476]: time="2025-05-14T23:53:40.042129192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:53:40.063554 systemd[1]: Started cri-containerd-b4a578aee745596ee7047340abe675e5a8bdec6dbf174501591cb02dc2f74081.scope - libcontainer container b4a578aee745596ee7047340abe675e5a8bdec6dbf174501591cb02dc2f74081. May 14 23:53:40.087590 containerd[1476]: time="2025-05-14T23:53:40.087484984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hznl7,Uid:fdf4415a-e137-4fff-9d5b-2e3fc5b57c18,Namespace:calico-system,Attempt:0,} returns sandbox id \"b4a578aee745596ee7047340abe675e5a8bdec6dbf174501591cb02dc2f74081\"" May 14 23:53:40.688068 kubelet[2635]: E0514 23:53:40.688012 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:40.688068 kubelet[2635]: W0514 23:53:40.688046 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:40.688068 kubelet[2635]: E0514 23:53:40.688079 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:40.688717 kubelet[2635]: E0514 23:53:40.688296 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:40.688717 kubelet[2635]: W0514 23:53:40.688304 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:40.688717 kubelet[2635]: E0514 23:53:40.688320 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:53:40.688717 kubelet[2635]: E0514 23:53:40.688662 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:40.688717 kubelet[2635]: W0514 23:53:40.688690 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:40.688830 kubelet[2635]: E0514 23:53:40.688720 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:40.689168 kubelet[2635]: E0514 23:53:40.689146 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:40.689168 kubelet[2635]: W0514 23:53:40.689157 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:40.689168 kubelet[2635]: E0514 23:53:40.689166 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:40.689446 kubelet[2635]: E0514 23:53:40.689404 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:40.689446 kubelet[2635]: W0514 23:53:40.689415 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:40.689446 kubelet[2635]: E0514 23:53:40.689444 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:40.689670 kubelet[2635]: E0514 23:53:40.689650 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:40.689670 kubelet[2635]: W0514 23:53:40.689660 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:40.689721 kubelet[2635]: E0514 23:53:40.689672 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:40.689941 kubelet[2635]: E0514 23:53:40.689909 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:40.689941 kubelet[2635]: W0514 23:53:40.689922 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:40.689941 kubelet[2635]: E0514 23:53:40.689931 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:53:40.690146 kubelet[2635]: E0514 23:53:40.690125 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:40.690146 kubelet[2635]: W0514 23:53:40.690135 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:40.690146 kubelet[2635]: E0514 23:53:40.690142 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:40.690409 kubelet[2635]: E0514 23:53:40.690386 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:40.690409 kubelet[2635]: W0514 23:53:40.690399 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:40.690409 kubelet[2635]: E0514 23:53:40.690408 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:40.690653 kubelet[2635]: E0514 23:53:40.690637 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:40.690653 kubelet[2635]: W0514 23:53:40.690651 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:40.690710 kubelet[2635]: E0514 23:53:40.690661 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:40.690905 kubelet[2635]: E0514 23:53:40.690892 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:40.690905 kubelet[2635]: W0514 23:53:40.690903 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:40.690957 kubelet[2635]: E0514 23:53:40.690913 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:40.691132 kubelet[2635]: E0514 23:53:40.691121 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:40.691132 kubelet[2635]: W0514 23:53:40.691130 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:40.691187 kubelet[2635]: E0514 23:53:40.691139 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:53:40.691373 kubelet[2635]: E0514 23:53:40.691361 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:40.691398 kubelet[2635]: W0514 23:53:40.691371 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:40.691398 kubelet[2635]: E0514 23:53:40.691380 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:40.691600 kubelet[2635]: E0514 23:53:40.691589 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:40.691600 kubelet[2635]: W0514 23:53:40.691597 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:40.691643 kubelet[2635]: E0514 23:53:40.691605 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:40.691805 kubelet[2635]: E0514 23:53:40.691793 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:40.691830 kubelet[2635]: W0514 23:53:40.691804 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:40.691830 kubelet[2635]: E0514 23:53:40.691813 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:40.692030 kubelet[2635]: E0514 23:53:40.692019 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:40.692030 kubelet[2635]: W0514 23:53:40.692028 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:40.692083 kubelet[2635]: E0514 23:53:40.692035 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:40.692247 kubelet[2635]: E0514 23:53:40.692234 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:40.692277 kubelet[2635]: W0514 23:53:40.692245 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:40.692277 kubelet[2635]: E0514 23:53:40.692254 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:53:40.692483 kubelet[2635]: E0514 23:53:40.692469 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:40.692483 kubelet[2635]: W0514 23:53:40.692478 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:40.692483 kubelet[2635]: E0514 23:53:40.692485 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:40.692668 kubelet[2635]: E0514 23:53:40.692658 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:40.692668 kubelet[2635]: W0514 23:53:40.692667 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:40.692718 kubelet[2635]: E0514 23:53:40.692674 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:40.692853 kubelet[2635]: E0514 23:53:40.692843 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:40.692853 kubelet[2635]: W0514 23:53:40.692851 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:40.693028 kubelet[2635]: E0514 23:53:40.692859 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:40.693088 kubelet[2635]: E0514 23:53:40.693077 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:40.693088 kubelet[2635]: W0514 23:53:40.693086 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:40.693132 kubelet[2635]: E0514 23:53:40.693094 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:40.693283 kubelet[2635]: E0514 23:53:40.693271 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:40.693283 kubelet[2635]: W0514 23:53:40.693281 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:40.693327 kubelet[2635]: E0514 23:53:40.693289 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:53:40.693482 kubelet[2635]: E0514 23:53:40.693472 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:40.693482 kubelet[2635]: W0514 23:53:40.693480 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:40.693534 kubelet[2635]: E0514 23:53:40.693488 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:40.693689 kubelet[2635]: E0514 23:53:40.693678 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:40.693689 kubelet[2635]: W0514 23:53:40.693687 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:40.693732 kubelet[2635]: E0514 23:53:40.693694 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:40.693883 kubelet[2635]: E0514 23:53:40.693866 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:40.693883 kubelet[2635]: W0514 23:53:40.693881 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:40.693933 kubelet[2635]: E0514 23:53:40.693888 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:53:41.343229 kubelet[2635]: E0514 23:53:41.343151 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zx5hz" podUID="c3b238b4-7acc-401a-8dae-17e6c81aeb42" May 14 23:53:42.219490 containerd[1476]: time="2025-05-14T23:53:42.219438210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:42.238842 containerd[1476]: time="2025-05-14T23:53:42.238769289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 14 23:53:42.259907 containerd[1476]: time="2025-05-14T23:53:42.259830680Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:42.269934 containerd[1476]: time="2025-05-14T23:53:42.269890114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:42.270541 containerd[1476]: time="2025-05-14T23:53:42.270494816Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.438657828s" May 14 23:53:42.270541 containerd[1476]: time="2025-05-14T23:53:42.270523512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 14 23:53:42.273593 containerd[1476]: time="2025-05-14T23:53:42.272720016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 14 23:53:42.286161 containerd[1476]: time="2025-05-14T23:53:42.286117065Z" level=info msg="CreateContainer within sandbox \"7945b3692b1bbacf780b90107cca94be02a47e1216f5471a36a1f0da9d34af71\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 14 23:53:42.317659 containerd[1476]: time="2025-05-14T23:53:42.317592233Z" level=info msg="CreateContainer within sandbox \"7945b3692b1bbacf780b90107cca94be02a47e1216f5471a36a1f0da9d34af71\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"90719de68afb24400273c12a8a9366a5c3d484525450c2ac45723672df03fe53\"" May 14 23:53:42.318228 containerd[1476]: time="2025-05-14T23:53:42.318188277Z" level=info msg="StartContainer for \"90719de68afb24400273c12a8a9366a5c3d484525450c2ac45723672df03fe53\"" May 14 23:53:42.350634 systemd[1]: Started cri-containerd-90719de68afb24400273c12a8a9366a5c3d484525450c2ac45723672df03fe53.scope - libcontainer container 90719de68afb24400273c12a8a9366a5c3d484525450c2ac45723672df03fe53. 
May 14 23:53:42.400122 containerd[1476]: time="2025-05-14T23:53:42.399585412Z" level=info msg="StartContainer for \"90719de68afb24400273c12a8a9366a5c3d484525450c2ac45723672df03fe53\" returns successfully" May 14 23:53:43.343121 kubelet[2635]: E0514 23:53:43.343072 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zx5hz" podUID="c3b238b4-7acc-401a-8dae-17e6c81aeb42" May 14 23:53:43.410156 kubelet[2635]: I0514 23:53:43.410065 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7f54b9d5cf-vfhmr" podStartSLOduration=1.9689606149999999 podStartE2EDuration="4.409435218s" podCreationTimestamp="2025-05-14 23:53:39 +0000 UTC" firstStartedPulling="2025-05-14 23:53:39.831401463 +0000 UTC m=+11.610631998" lastFinishedPulling="2025-05-14 23:53:42.271876066 +0000 UTC m=+14.051106601" observedRunningTime="2025-05-14 23:53:43.408884663 +0000 UTC m=+15.188115198" watchObservedRunningTime="2025-05-14 23:53:43.409435218 +0000 UTC m=+15.188665753" May 14 23:53:43.412999 kubelet[2635]: E0514 23:53:43.412971 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.413121 kubelet[2635]: W0514 23:53:43.412997 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.413121 kubelet[2635]: E0514 23:53:43.413023 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:43.413274 kubelet[2635]: E0514 23:53:43.413248 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.413274 kubelet[2635]: W0514 23:53:43.413264 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.413330 kubelet[2635]: E0514 23:53:43.413275 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:43.413509 kubelet[2635]: E0514 23:53:43.413494 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.413509 kubelet[2635]: W0514 23:53:43.413508 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.413575 kubelet[2635]: E0514 23:53:43.413519 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:53:43.413778 kubelet[2635]: E0514 23:53:43.413756 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.413778 kubelet[2635]: W0514 23:53:43.413770 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.413829 kubelet[2635]: E0514 23:53:43.413783 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:43.414014 kubelet[2635]: E0514 23:53:43.413993 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.414014 kubelet[2635]: W0514 23:53:43.414007 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.414114 kubelet[2635]: E0514 23:53:43.414018 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:43.414251 kubelet[2635]: E0514 23:53:43.414232 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.414251 kubelet[2635]: W0514 23:53:43.414247 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.414358 kubelet[2635]: E0514 23:53:43.414258 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:43.414528 kubelet[2635]: E0514 23:53:43.414497 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.414528 kubelet[2635]: W0514 23:53:43.414527 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.414630 kubelet[2635]: E0514 23:53:43.414540 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:43.414804 kubelet[2635]: E0514 23:53:43.414779 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.414804 kubelet[2635]: W0514 23:53:43.414794 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.415114 kubelet[2635]: E0514 23:53:43.414821 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:53:43.415114 kubelet[2635]: E0514 23:53:43.415062 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.415114 kubelet[2635]: W0514 23:53:43.415073 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.415114 kubelet[2635]: E0514 23:53:43.415084 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:43.415313 kubelet[2635]: E0514 23:53:43.415297 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.415313 kubelet[2635]: W0514 23:53:43.415309 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.415393 kubelet[2635]: E0514 23:53:43.415319 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:43.415643 kubelet[2635]: E0514 23:53:43.415621 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.415643 kubelet[2635]: W0514 23:53:43.415635 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.415717 kubelet[2635]: E0514 23:53:43.415645 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:43.415878 kubelet[2635]: E0514 23:53:43.415863 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.415878 kubelet[2635]: W0514 23:53:43.415875 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.415954 kubelet[2635]: E0514 23:53:43.415884 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:43.416151 kubelet[2635]: E0514 23:53:43.416137 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.416151 kubelet[2635]: W0514 23:53:43.416149 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.416225 kubelet[2635]: E0514 23:53:43.416159 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:53:43.416383 kubelet[2635]: E0514 23:53:43.416368 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.416383 kubelet[2635]: W0514 23:53:43.416380 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.416476 kubelet[2635]: E0514 23:53:43.416390 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:43.416616 kubelet[2635]: E0514 23:53:43.416602 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.416616 kubelet[2635]: W0514 23:53:43.416613 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.416683 kubelet[2635]: E0514 23:53:43.416623 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:43.416913 kubelet[2635]: E0514 23:53:43.416900 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.416913 kubelet[2635]: W0514 23:53:43.416911 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.417006 kubelet[2635]: E0514 23:53:43.416921 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:43.417197 kubelet[2635]: E0514 23:53:43.417182 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.417197 kubelet[2635]: W0514 23:53:43.417195 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.417258 kubelet[2635]: E0514 23:53:43.417211 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:43.417482 kubelet[2635]: E0514 23:53:43.417465 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.417482 kubelet[2635]: W0514 23:53:43.417478 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.417570 kubelet[2635]: E0514 23:53:43.417496 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:53:43.417764 kubelet[2635]: E0514 23:53:43.417749 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.417764 kubelet[2635]: W0514 23:53:43.417762 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.417841 kubelet[2635]: E0514 23:53:43.417780 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:43.418068 kubelet[2635]: E0514 23:53:43.418047 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.418068 kubelet[2635]: W0514 23:53:43.418064 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.418153 kubelet[2635]: E0514 23:53:43.418083 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:43.418309 kubelet[2635]: E0514 23:53:43.418286 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.418309 kubelet[2635]: W0514 23:53:43.418298 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.418383 kubelet[2635]: E0514 23:53:43.418314 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:43.418541 kubelet[2635]: E0514 23:53:43.418525 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.418541 kubelet[2635]: W0514 23:53:43.418537 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.418630 kubelet[2635]: E0514 23:53:43.418553 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:43.418829 kubelet[2635]: E0514 23:53:43.418812 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.418829 kubelet[2635]: W0514 23:53:43.418824 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.418948 kubelet[2635]: E0514 23:53:43.418882 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:53:43.419092 kubelet[2635]: E0514 23:53:43.419065 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.419140 kubelet[2635]: W0514 23:53:43.419091 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.419140 kubelet[2635]: E0514 23:53:43.419127 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:43.419343 kubelet[2635]: E0514 23:53:43.419314 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.419343 kubelet[2635]: W0514 23:53:43.419328 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.419454 kubelet[2635]: E0514 23:53:43.419359 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:43.419678 kubelet[2635]: E0514 23:53:43.419656 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.419721 kubelet[2635]: W0514 23:53:43.419678 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.419721 kubelet[2635]: E0514 23:53:43.419706 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:43.420054 kubelet[2635]: E0514 23:53:43.420026 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.420054 kubelet[2635]: W0514 23:53:43.420053 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.420122 kubelet[2635]: E0514 23:53:43.420075 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:43.420391 kubelet[2635]: E0514 23:53:43.420376 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.420391 kubelet[2635]: W0514 23:53:43.420390 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.420492 kubelet[2635]: E0514 23:53:43.420408 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:53:43.420666 kubelet[2635]: E0514 23:53:43.420649 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.420666 kubelet[2635]: W0514 23:53:43.420661 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.420731 kubelet[2635]: E0514 23:53:43.420676 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:43.420927 kubelet[2635]: E0514 23:53:43.420910 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.420927 kubelet[2635]: W0514 23:53:43.420924 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.421016 kubelet[2635]: E0514 23:53:43.420943 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:43.421224 kubelet[2635]: E0514 23:53:43.421206 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.421224 kubelet[2635]: W0514 23:53:43.421219 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.421295 kubelet[2635]: E0514 23:53:43.421231 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:43.421552 kubelet[2635]: E0514 23:53:43.421536 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.421552 kubelet[2635]: W0514 23:53:43.421548 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.421635 kubelet[2635]: E0514 23:53:43.421560 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:53:43.422054 kubelet[2635]: E0514 23:53:43.422018 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:53:43.422054 kubelet[2635]: W0514 23:53:43.422047 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:53:43.422133 kubelet[2635]: E0514 23:53:43.422059 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:53:43.745407 containerd[1476]: time="2025-05-14T23:53:43.745299677Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:43.747724 containerd[1476]: time="2025-05-14T23:53:43.747670086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 14 23:53:43.750865 containerd[1476]: time="2025-05-14T23:53:43.750819346Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:43.756670 containerd[1476]: time="2025-05-14T23:53:43.756593140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:43.757852 containerd[1476]: time="2025-05-14T23:53:43.757706314Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.484934667s" May 14 23:53:43.757852 containerd[1476]: time="2025-05-14T23:53:43.757767734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 14 23:53:43.760383 containerd[1476]: time="2025-05-14T23:53:43.760350658Z" level=info msg="CreateContainer within sandbox \"b4a578aee745596ee7047340abe675e5a8bdec6dbf174501591cb02dc2f74081\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 14 23:53:43.783132 containerd[1476]: time="2025-05-14T23:53:43.783077426Z" level=info msg="CreateContainer within sandbox \"b4a578aee745596ee7047340abe675e5a8bdec6dbf174501591cb02dc2f74081\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4fe0625a2d26fc3d03910240ba8612c0954f68dd9e0c6cd7d35f80e48f6b9a06\"" May 14 23:53:43.783876 containerd[1476]: time="2025-05-14T23:53:43.783831278Z" level=info msg="StartContainer for \"4fe0625a2d26fc3d03910240ba8612c0954f68dd9e0c6cd7d35f80e48f6b9a06\"" May 14 23:53:43.829768 systemd[1]: Started cri-containerd-4fe0625a2d26fc3d03910240ba8612c0954f68dd9e0c6cd7d35f80e48f6b9a06.scope - libcontainer container 4fe0625a2d26fc3d03910240ba8612c0954f68dd9e0c6cd7d35f80e48f6b9a06. May 14 23:53:43.890853 systemd[1]: cri-containerd-4fe0625a2d26fc3d03910240ba8612c0954f68dd9e0c6cd7d35f80e48f6b9a06.scope: Deactivated successfully. May 14 23:53:43.891301 systemd[1]: cri-containerd-4fe0625a2d26fc3d03910240ba8612c0954f68dd9e0c6cd7d35f80e48f6b9a06.scope: Consumed 43ms CPU time, 8.2M memory peak, 4.7M written to disk. May 14 23:53:44.302383 containerd[1476]: time="2025-05-14T23:53:44.302303600Z" level=info msg="StartContainer for \"4fe0625a2d26fc3d03910240ba8612c0954f68dd9e0c6cd7d35f80e48f6b9a06\" returns successfully" May 14 23:53:44.326597 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4fe0625a2d26fc3d03910240ba8612c0954f68dd9e0c6cd7d35f80e48f6b9a06-rootfs.mount: Deactivated successfully. 
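The repeated "Failed to unmarshal output for command: init" / "executable file not found in $PATH" pairs earlier in these entries come from the kubelet's FlexVolume prober: it runs the driver binary with the argument init and expects a small JSON status object on stdout, so a missing binary means empty output and a JSON decode failure. The Go sketch below is a minimal illustration of that call path as read from the log; the driverStatus field names follow the conventional FlexVolume status shape and are an assumption, not something recorded here.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the usual FlexVolume status object a driver prints on
// stdout (field names assumed, not taken from this journal).
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

// probeInit runs "<driver> init" and decodes its JSON reply. When the driver
// binary is absent, out is empty and json.Unmarshal reports exactly
// "unexpected end of JSON input", the error repeated in the entries above.
func probeInit(driverPath string) (*driverStatus, error) {
	out, execErr := exec.Command(driverPath, "init").CombinedOutput()
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		return nil, fmt.Errorf("failed to unmarshal output for command: init, output: %q, error: %v (exec error: %v)", string(out), err, execErr)
	}
	return &st, nil
}

func main() {
	if _, err := probeInit("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"); err != nil {
		fmt.Println(err)
	}
}

Once a driver binary exists at that path and prints a parsable status, the probe succeeds and the warnings stop; that is presumably what the flexvol-driver container started above is meant to install.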
May 14 23:53:44.387847 containerd[1476]: time="2025-05-14T23:53:44.387763693Z" level=info msg="shim disconnected" id=4fe0625a2d26fc3d03910240ba8612c0954f68dd9e0c6cd7d35f80e48f6b9a06 namespace=k8s.io May 14 23:53:44.387847 containerd[1476]: time="2025-05-14T23:53:44.387844821Z" level=warning msg="cleaning up after shim disconnected" id=4fe0625a2d26fc3d03910240ba8612c0954f68dd9e0c6cd7d35f80e48f6b9a06 namespace=k8s.io May 14 23:53:44.387847 containerd[1476]: time="2025-05-14T23:53:44.387857796Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:53:44.401494 kubelet[2635]: I0514 23:53:44.401411 2635 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 23:53:45.344090 kubelet[2635]: E0514 23:53:45.343990 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zx5hz" podUID="c3b238b4-7acc-401a-8dae-17e6c81aeb42" May 14 23:53:45.404612 containerd[1476]: time="2025-05-14T23:53:45.404520880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 14 23:53:47.343269 kubelet[2635]: E0514 23:53:47.343180 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zx5hz" podUID="c3b238b4-7acc-401a-8dae-17e6c81aeb42" May 14 23:53:49.343234 kubelet[2635]: E0514 23:53:49.343118 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zx5hz" podUID="c3b238b4-7acc-401a-8dae-17e6c81aeb42" May 14 23:53:51.343371 kubelet[2635]: E0514 23:53:51.343306 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zx5hz" podUID="c3b238b4-7acc-401a-8dae-17e6c81aeb42" May 14 23:53:52.194257 containerd[1476]: time="2025-05-14T23:53:52.193942245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:52.207902 containerd[1476]: time="2025-05-14T23:53:52.207803346Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 14 23:53:52.221886 containerd[1476]: time="2025-05-14T23:53:52.221803707Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:52.226911 containerd[1476]: time="2025-05-14T23:53:52.226858325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:52.227734 containerd[1476]: time="2025-05-14T23:53:52.227684945Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 6.823109079s" May 14 23:53:52.227734 containerd[1476]: time="2025-05-14T23:53:52.227723090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 14 23:53:52.230813 containerd[1476]: time="2025-05-14T23:53:52.230747095Z" level=info msg="CreateContainer within sandbox \"b4a578aee745596ee7047340abe675e5a8bdec6dbf174501591cb02dc2f74081\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 14 23:53:52.267377 containerd[1476]: time="2025-05-14T23:53:52.267303595Z" level=info msg="CreateContainer within sandbox \"b4a578aee745596ee7047340abe675e5a8bdec6dbf174501591cb02dc2f74081\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e2abede0e96d180cfb3d183dffbed15cc85041b97987facef9a51ab37218c70e\"" May 14 23:53:52.268142 containerd[1476]: time="2025-05-14T23:53:52.268037265Z" level=info msg="StartContainer for \"e2abede0e96d180cfb3d183dffbed15cc85041b97987facef9a51ab37218c70e\"" May 14 23:53:52.306594 systemd[1]: Started cri-containerd-e2abede0e96d180cfb3d183dffbed15cc85041b97987facef9a51ab37218c70e.scope - libcontainer container e2abede0e96d180cfb3d183dffbed15cc85041b97987facef9a51ab37218c70e. May 14 23:53:52.601238 containerd[1476]: time="2025-05-14T23:53:52.601040557Z" level=info msg="StartContainer for \"e2abede0e96d180cfb3d183dffbed15cc85041b97987facef9a51ab37218c70e\" returns successfully" May 14 23:53:53.343394 kubelet[2635]: E0514 23:53:53.343322 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zx5hz" podUID="c3b238b4-7acc-401a-8dae-17e6c81aeb42" May 14 23:53:53.869246 containerd[1476]: time="2025-05-14T23:53:53.869194390Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 23:53:53.872411 systemd[1]: cri-containerd-e2abede0e96d180cfb3d183dffbed15cc85041b97987facef9a51ab37218c70e.scope: Deactivated successfully. May 14 23:53:53.872871 systemd[1]: cri-containerd-e2abede0e96d180cfb3d183dffbed15cc85041b97987facef9a51ab37218c70e.scope: Consumed 647ms CPU time, 161.4M memory peak, 4K read from disk, 154M written to disk. May 14 23:53:53.893497 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2abede0e96d180cfb3d183dffbed15cc85041b97987facef9a51ab37218c70e-rootfs.mount: Deactivated successfully. May 14 23:53:53.908908 kubelet[2635]: I0514 23:53:53.908876 2635 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 14 23:53:54.350292 systemd[1]: Created slice kubepods-burstable-podccaedbdf_74a7_4eb4_b5a0_f8e0530aad2b.slice - libcontainer container kubepods-burstable-podccaedbdf_74a7_4eb4_b5a0_f8e0530aad2b.slice. 
May 14 23:53:54.391788 kubelet[2635]: I0514 23:53:54.391742 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fld8v\" (UniqueName: \"kubernetes.io/projected/ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b-kube-api-access-fld8v\") pod \"coredns-668d6bf9bc-k6rt6\" (UID: \"ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b\") " pod="kube-system/coredns-668d6bf9bc-k6rt6" May 14 23:53:54.391788 kubelet[2635]: I0514 23:53:54.391793 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b-config-volume\") pod \"coredns-668d6bf9bc-k6rt6\" (UID: \"ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b\") " pod="kube-system/coredns-668d6bf9bc-k6rt6" May 14 23:53:54.528164 containerd[1476]: time="2025-05-14T23:53:54.528022948Z" level=info msg="shim disconnected" id=e2abede0e96d180cfb3d183dffbed15cc85041b97987facef9a51ab37218c70e namespace=k8s.io May 14 23:53:54.528407 containerd[1476]: time="2025-05-14T23:53:54.528174822Z" level=warning msg="cleaning up after shim disconnected" id=e2abede0e96d180cfb3d183dffbed15cc85041b97987facef9a51ab37218c70e namespace=k8s.io May 14 23:53:54.528407 containerd[1476]: time="2025-05-14T23:53:54.528190633Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:53:54.545789 systemd[1]: Created slice kubepods-burstable-pod78747a92_dcde_4a68_97b9_39a31a2ff2f2.slice - libcontainer container kubepods-burstable-pod78747a92_dcde_4a68_97b9_39a31a2ff2f2.slice. May 14 23:53:54.556583 systemd[1]: Created slice kubepods-besteffort-pod274c1c1a_50ff_4e53_bdf7_547b26e013ec.slice - libcontainer container kubepods-besteffort-pod274c1c1a_50ff_4e53_bdf7_547b26e013ec.slice. May 14 23:53:54.562844 systemd[1]: Created slice kubepods-besteffort-pod0bd041e0_42d3_43db_a483_12474ebbedc9.slice - libcontainer container kubepods-besteffort-pod0bd041e0_42d3_43db_a483_12474ebbedc9.slice. May 14 23:53:54.568292 systemd[1]: Created slice kubepods-besteffort-pod9046d3b9_bfcc_40d1_a2ed_7f3e2193399a.slice - libcontainer container kubepods-besteffort-pod9046d3b9_bfcc_40d1_a2ed_7f3e2193399a.slice. 
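The "no network config found in /etc/cni/net.d: cni plugin not initialized" error a few entries above is reported while the install-cni container is still populating that directory; the runtime only marks the pod network ready once a loadable CNI config file shows up there. The sketch below is a rough, assumed rendering of that presence check, not containerd's actual implementation.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfigPresent reports whether any CNI config file exists in dir; until one
// does, the runtime keeps emitting "cni plugin not initialized".
func cniConfigPresent(dir string) error {
	for _, pat := range []string{"*.conflist", "*.conf", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return err
		}
		if len(matches) > 0 {
			return nil // install-cni eventually writes its config here
		}
	}
	return fmt.Errorf("no network config found in %s: cni plugin not initialized", dir)
}

func main() {
	if err := cniConfigPresent("/etc/cni/net.d"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}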
May 14 23:53:54.593362 kubelet[2635]: I0514 23:53:54.593304 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/274c1c1a-50ff-4e53-bdf7-547b26e013ec-calico-apiserver-certs\") pod \"calico-apiserver-564c88fc57-zsf99\" (UID: \"274c1c1a-50ff-4e53-bdf7-547b26e013ec\") " pod="calico-apiserver/calico-apiserver-564c88fc57-zsf99" May 14 23:53:54.593362 kubelet[2635]: I0514 23:53:54.593364 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9046d3b9-bfcc-40d1-a2ed-7f3e2193399a-calico-apiserver-certs\") pod \"calico-apiserver-564c88fc57-7zxh5\" (UID: \"9046d3b9-bfcc-40d1-a2ed-7f3e2193399a\") " pod="calico-apiserver/calico-apiserver-564c88fc57-7zxh5" May 14 23:53:54.593362 kubelet[2635]: I0514 23:53:54.593387 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvfks\" (UniqueName: \"kubernetes.io/projected/9046d3b9-bfcc-40d1-a2ed-7f3e2193399a-kube-api-access-dvfks\") pod \"calico-apiserver-564c88fc57-7zxh5\" (UID: \"9046d3b9-bfcc-40d1-a2ed-7f3e2193399a\") " pod="calico-apiserver/calico-apiserver-564c88fc57-7zxh5" May 14 23:53:54.593674 kubelet[2635]: I0514 23:53:54.593412 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qkhh\" (UniqueName: \"kubernetes.io/projected/0bd041e0-42d3-43db-a483-12474ebbedc9-kube-api-access-2qkhh\") pod \"calico-kube-controllers-6997bdb66f-xr6kr\" (UID: \"0bd041e0-42d3-43db-a483-12474ebbedc9\") " pod="calico-system/calico-kube-controllers-6997bdb66f-xr6kr" May 14 23:53:54.593674 kubelet[2635]: I0514 23:53:54.593464 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78747a92-dcde-4a68-97b9-39a31a2ff2f2-config-volume\") pod \"coredns-668d6bf9bc-mcmz8\" (UID: \"78747a92-dcde-4a68-97b9-39a31a2ff2f2\") " pod="kube-system/coredns-668d6bf9bc-mcmz8" May 14 23:53:54.593674 kubelet[2635]: I0514 23:53:54.593498 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhlrv\" (UniqueName: \"kubernetes.io/projected/78747a92-dcde-4a68-97b9-39a31a2ff2f2-kube-api-access-zhlrv\") pod \"coredns-668d6bf9bc-mcmz8\" (UID: \"78747a92-dcde-4a68-97b9-39a31a2ff2f2\") " pod="kube-system/coredns-668d6bf9bc-mcmz8" May 14 23:53:54.593674 kubelet[2635]: I0514 23:53:54.593519 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6t5m\" (UniqueName: \"kubernetes.io/projected/274c1c1a-50ff-4e53-bdf7-547b26e013ec-kube-api-access-n6t5m\") pod \"calico-apiserver-564c88fc57-zsf99\" (UID: \"274c1c1a-50ff-4e53-bdf7-547b26e013ec\") " pod="calico-apiserver/calico-apiserver-564c88fc57-zsf99" May 14 23:53:54.593674 kubelet[2635]: I0514 23:53:54.593591 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bd041e0-42d3-43db-a483-12474ebbedc9-tigera-ca-bundle\") pod \"calico-kube-controllers-6997bdb66f-xr6kr\" (UID: \"0bd041e0-42d3-43db-a483-12474ebbedc9\") " pod="calico-system/calico-kube-controllers-6997bdb66f-xr6kr" May 14 23:53:54.613954 containerd[1476]: time="2025-05-14T23:53:54.613819730Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.29.3\"" May 14 23:53:54.654235 containerd[1476]: time="2025-05-14T23:53:54.654181325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k6rt6,Uid:ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b,Namespace:kube-system,Attempt:0,}" May 14 23:53:54.859672 containerd[1476]: time="2025-05-14T23:53:54.859332378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mcmz8,Uid:78747a92-dcde-4a68-97b9-39a31a2ff2f2,Namespace:kube-system,Attempt:0,}" May 14 23:53:54.860529 containerd[1476]: time="2025-05-14T23:53:54.860332842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564c88fc57-zsf99,Uid:274c1c1a-50ff-4e53-bdf7-547b26e013ec,Namespace:calico-apiserver,Attempt:0,}" May 14 23:53:54.903102 containerd[1476]: time="2025-05-14T23:53:54.876766666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6997bdb66f-xr6kr,Uid:0bd041e0-42d3-43db-a483-12474ebbedc9,Namespace:calico-system,Attempt:0,}" May 14 23:53:54.903102 containerd[1476]: time="2025-05-14T23:53:54.892848809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564c88fc57-7zxh5,Uid:9046d3b9-bfcc-40d1-a2ed-7f3e2193399a,Namespace:calico-apiserver,Attempt:0,}" May 14 23:53:55.361008 systemd[1]: Created slice kubepods-besteffort-podc3b238b4_7acc_401a_8dae_17e6c81aeb42.slice - libcontainer container kubepods-besteffort-podc3b238b4_7acc_401a_8dae_17e6c81aeb42.slice. May 14 23:53:55.366379 containerd[1476]: time="2025-05-14T23:53:55.366023669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zx5hz,Uid:c3b238b4-7acc-401a-8dae-17e6c81aeb42,Namespace:calico-system,Attempt:0,}" May 14 23:53:55.619505 containerd[1476]: time="2025-05-14T23:53:55.618959285Z" level=error msg="Failed to destroy network for sandbox \"33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:55.626121 containerd[1476]: time="2025-05-14T23:53:55.625370882Z" level=error msg="encountered an error cleaning up failed sandbox \"33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:55.626121 containerd[1476]: time="2025-05-14T23:53:55.625487698Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k6rt6,Uid:ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:55.626298 kubelet[2635]: E0514 23:53:55.625747 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 
23:53:55.626298 kubelet[2635]: E0514 23:53:55.625838 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k6rt6" May 14 23:53:55.626298 kubelet[2635]: E0514 23:53:55.625865 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k6rt6" May 14 23:53:55.635628 kubelet[2635]: E0514 23:53:55.625940 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-k6rt6_kube-system(ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-k6rt6_kube-system(ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-k6rt6" podUID="ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b" May 14 23:53:55.658465 kubelet[2635]: I0514 23:53:55.658069 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb" May 14 23:53:55.663765 containerd[1476]: time="2025-05-14T23:53:55.663707868Z" level=info msg="StopPodSandbox for \"33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb\"" May 14 23:53:55.664036 containerd[1476]: time="2025-05-14T23:53:55.664003880Z" level=info msg="Ensure that sandbox 33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb in task-service has been cleanup successfully" May 14 23:53:55.666845 containerd[1476]: time="2025-05-14T23:53:55.666788961Z" level=info msg="TearDown network for sandbox \"33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb\" successfully" May 14 23:53:55.666845 containerd[1476]: time="2025-05-14T23:53:55.666833947Z" level=info msg="StopPodSandbox for \"33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb\" returns successfully" May 14 23:53:55.674452 containerd[1476]: time="2025-05-14T23:53:55.674379576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k6rt6,Uid:ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b,Namespace:kube-system,Attempt:1,}" May 14 23:53:55.740520 containerd[1476]: time="2025-05-14T23:53:55.740446846Z" level=error msg="Failed to destroy network for sandbox \"7fc43a55ee56babf2baaa12a3b6ad5441c0befe576446eea7db0237519c01722\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:55.741204 containerd[1476]: time="2025-05-14T23:53:55.741178099Z" level=error msg="encountered an 
error cleaning up failed sandbox \"7fc43a55ee56babf2baaa12a3b6ad5441c0befe576446eea7db0237519c01722\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:55.741351 containerd[1476]: time="2025-05-14T23:53:55.741325264Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564c88fc57-zsf99,Uid:274c1c1a-50ff-4e53-bdf7-547b26e013ec,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7fc43a55ee56babf2baaa12a3b6ad5441c0befe576446eea7db0237519c01722\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:55.753725 kubelet[2635]: E0514 23:53:55.743062 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fc43a55ee56babf2baaa12a3b6ad5441c0befe576446eea7db0237519c01722\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:55.753725 kubelet[2635]: E0514 23:53:55.743230 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fc43a55ee56babf2baaa12a3b6ad5441c0befe576446eea7db0237519c01722\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564c88fc57-zsf99" May 14 23:53:55.753725 kubelet[2635]: E0514 23:53:55.743305 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fc43a55ee56babf2baaa12a3b6ad5441c0befe576446eea7db0237519c01722\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564c88fc57-zsf99" May 14 23:53:55.753960 kubelet[2635]: E0514 23:53:55.743409 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-564c88fc57-zsf99_calico-apiserver(274c1c1a-50ff-4e53-bdf7-547b26e013ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-564c88fc57-zsf99_calico-apiserver(274c1c1a-50ff-4e53-bdf7-547b26e013ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7fc43a55ee56babf2baaa12a3b6ad5441c0befe576446eea7db0237519c01722\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-564c88fc57-zsf99" podUID="274c1c1a-50ff-4e53-bdf7-547b26e013ec" May 14 23:53:55.754240 containerd[1476]: time="2025-05-14T23:53:55.754201791Z" level=error msg="Failed to destroy network for sandbox \"f33e4905296fbb90c952de0b4260a7869197385f32c1c9eb9736309b38de54e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 14 23:53:55.764866 containerd[1476]: time="2025-05-14T23:53:55.764804414Z" level=error msg="encountered an error cleaning up failed sandbox \"f33e4905296fbb90c952de0b4260a7869197385f32c1c9eb9736309b38de54e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:55.765907 containerd[1476]: time="2025-05-14T23:53:55.765844284Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6997bdb66f-xr6kr,Uid:0bd041e0-42d3-43db-a483-12474ebbedc9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f33e4905296fbb90c952de0b4260a7869197385f32c1c9eb9736309b38de54e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:55.766241 kubelet[2635]: E0514 23:53:55.766177 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f33e4905296fbb90c952de0b4260a7869197385f32c1c9eb9736309b38de54e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:55.766325 kubelet[2635]: E0514 23:53:55.766260 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f33e4905296fbb90c952de0b4260a7869197385f32c1c9eb9736309b38de54e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6997bdb66f-xr6kr" May 14 23:53:55.766325 kubelet[2635]: E0514 23:53:55.766291 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f33e4905296fbb90c952de0b4260a7869197385f32c1c9eb9736309b38de54e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6997bdb66f-xr6kr" May 14 23:53:55.766405 kubelet[2635]: E0514 23:53:55.766347 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6997bdb66f-xr6kr_calico-system(0bd041e0-42d3-43db-a483-12474ebbedc9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6997bdb66f-xr6kr_calico-system(0bd041e0-42d3-43db-a483-12474ebbedc9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f33e4905296fbb90c952de0b4260a7869197385f32c1c9eb9736309b38de54e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6997bdb66f-xr6kr" podUID="0bd041e0-42d3-43db-a483-12474ebbedc9" May 14 23:53:55.794693 containerd[1476]: time="2025-05-14T23:53:55.794632663Z" level=error msg="Failed to destroy network for sandbox \"64638f12b1279b2b0ba9e81e29f292742c3d924613846bbe2cbf009965970c49\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:55.795797 containerd[1476]: time="2025-05-14T23:53:55.795597887Z" level=error msg="Failed to destroy network for sandbox \"a500e5e0e5e581d11c138a050e95e4036d134b1201f8ea691935fed3c26f6323\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:55.796982 containerd[1476]: time="2025-05-14T23:53:55.796835568Z" level=error msg="Failed to destroy network for sandbox \"c6980ceb1a80f1c499968711b2039cd46613df3ed8a462932bc9870dba7f89ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:55.797550 containerd[1476]: time="2025-05-14T23:53:55.797505603Z" level=error msg="encountered an error cleaning up failed sandbox \"64638f12b1279b2b0ba9e81e29f292742c3d924613846bbe2cbf009965970c49\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:55.797736 containerd[1476]: time="2025-05-14T23:53:55.797590597Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zx5hz,Uid:c3b238b4-7acc-401a-8dae-17e6c81aeb42,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"64638f12b1279b2b0ba9e81e29f292742c3d924613846bbe2cbf009965970c49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:55.797939 kubelet[2635]: E0514 23:53:55.797885 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64638f12b1279b2b0ba9e81e29f292742c3d924613846bbe2cbf009965970c49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:55.798031 kubelet[2635]: E0514 23:53:55.797975 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64638f12b1279b2b0ba9e81e29f292742c3d924613846bbe2cbf009965970c49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zx5hz" May 14 23:53:55.798031 kubelet[2635]: E0514 23:53:55.798005 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64638f12b1279b2b0ba9e81e29f292742c3d924613846bbe2cbf009965970c49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zx5hz" May 14 23:53:55.798104 kubelet[2635]: E0514 23:53:55.798056 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-zx5hz_calico-system(c3b238b4-7acc-401a-8dae-17e6c81aeb42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zx5hz_calico-system(c3b238b4-7acc-401a-8dae-17e6c81aeb42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64638f12b1279b2b0ba9e81e29f292742c3d924613846bbe2cbf009965970c49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zx5hz" podUID="c3b238b4-7acc-401a-8dae-17e6c81aeb42" May 14 23:53:55.803725 containerd[1476]: time="2025-05-14T23:53:55.802896908Z" level=error msg="encountered an error cleaning up failed sandbox \"c6980ceb1a80f1c499968711b2039cd46613df3ed8a462932bc9870dba7f89ec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:55.803725 containerd[1476]: time="2025-05-14T23:53:55.802963607Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564c88fc57-7zxh5,Uid:9046d3b9-bfcc-40d1-a2ed-7f3e2193399a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c6980ceb1a80f1c499968711b2039cd46613df3ed8a462932bc9870dba7f89ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:55.803725 containerd[1476]: time="2025-05-14T23:53:55.803619154Z" level=error msg="encountered an error cleaning up failed sandbox \"a500e5e0e5e581d11c138a050e95e4036d134b1201f8ea691935fed3c26f6323\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:55.803725 containerd[1476]: time="2025-05-14T23:53:55.803672777Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mcmz8,Uid:78747a92-dcde-4a68-97b9-39a31a2ff2f2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a500e5e0e5e581d11c138a050e95e4036d134b1201f8ea691935fed3c26f6323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:55.804132 kubelet[2635]: E0514 23:53:55.803118 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6980ceb1a80f1c499968711b2039cd46613df3ed8a462932bc9870dba7f89ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:55.804132 kubelet[2635]: E0514 23:53:55.803159 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6980ceb1a80f1c499968711b2039cd46613df3ed8a462932bc9870dba7f89ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-564c88fc57-7zxh5" May 14 23:53:55.804132 kubelet[2635]: E0514 23:53:55.803179 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6980ceb1a80f1c499968711b2039cd46613df3ed8a462932bc9870dba7f89ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564c88fc57-7zxh5" May 14 23:53:55.804266 kubelet[2635]: E0514 23:53:55.803215 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-564c88fc57-7zxh5_calico-apiserver(9046d3b9-bfcc-40d1-a2ed-7f3e2193399a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-564c88fc57-7zxh5_calico-apiserver(9046d3b9-bfcc-40d1-a2ed-7f3e2193399a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6980ceb1a80f1c499968711b2039cd46613df3ed8a462932bc9870dba7f89ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-564c88fc57-7zxh5" podUID="9046d3b9-bfcc-40d1-a2ed-7f3e2193399a" May 14 23:53:55.805759 kubelet[2635]: E0514 23:53:55.804526 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a500e5e0e5e581d11c138a050e95e4036d134b1201f8ea691935fed3c26f6323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:55.805759 kubelet[2635]: E0514 23:53:55.804563 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a500e5e0e5e581d11c138a050e95e4036d134b1201f8ea691935fed3c26f6323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mcmz8" May 14 23:53:55.805759 kubelet[2635]: E0514 23:53:55.804640 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a500e5e0e5e581d11c138a050e95e4036d134b1201f8ea691935fed3c26f6323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mcmz8" May 14 23:53:55.805901 kubelet[2635]: E0514 23:53:55.804677 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-mcmz8_kube-system(78747a92-dcde-4a68-97b9-39a31a2ff2f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-mcmz8_kube-system(78747a92-dcde-4a68-97b9-39a31a2ff2f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a500e5e0e5e581d11c138a050e95e4036d134b1201f8ea691935fed3c26f6323\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-668d6bf9bc-mcmz8" podUID="78747a92-dcde-4a68-97b9-39a31a2ff2f2" May 14 23:53:56.008805 containerd[1476]: time="2025-05-14T23:53:56.008735514Z" level=error msg="Failed to destroy network for sandbox \"342f0da20d9f4dc5bad7675220b463dec5464d9133866fc2665994e8551463d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:56.009759 containerd[1476]: time="2025-05-14T23:53:56.009180654Z" level=error msg="encountered an error cleaning up failed sandbox \"342f0da20d9f4dc5bad7675220b463dec5464d9133866fc2665994e8551463d5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:56.009759 containerd[1476]: time="2025-05-14T23:53:56.009263162Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k6rt6,Uid:ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"342f0da20d9f4dc5bad7675220b463dec5464d9133866fc2665994e8551463d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:56.009837 kubelet[2635]: E0514 23:53:56.009570 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"342f0da20d9f4dc5bad7675220b463dec5464d9133866fc2665994e8551463d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:56.009837 kubelet[2635]: E0514 23:53:56.009656 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"342f0da20d9f4dc5bad7675220b463dec5464d9133866fc2665994e8551463d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k6rt6" May 14 23:53:56.009837 kubelet[2635]: E0514 23:53:56.009683 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"342f0da20d9f4dc5bad7675220b463dec5464d9133866fc2665994e8551463d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k6rt6" May 14 23:53:56.009946 kubelet[2635]: E0514 23:53:56.009735 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-k6rt6_kube-system(ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-k6rt6_kube-system(ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"342f0da20d9f4dc5bad7675220b463dec5464d9133866fc2665994e8551463d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-k6rt6" podUID="ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b" May 14 23:53:56.232866 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7fc43a55ee56babf2baaa12a3b6ad5441c0befe576446eea7db0237519c01722-shm.mount: Deactivated successfully. May 14 23:53:56.232999 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a500e5e0e5e581d11c138a050e95e4036d134b1201f8ea691935fed3c26f6323-shm.mount: Deactivated successfully. May 14 23:53:56.233090 systemd[1]: run-netns-cni\x2db7ce95be\x2dc9a1\x2d3998\x2db1d0\x2d23c7b2e514e3.mount: Deactivated successfully. May 14 23:53:56.233174 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb-shm.mount: Deactivated successfully. May 14 23:53:56.390306 systemd[1]: Started sshd@7-10.0.0.25:22-10.0.0.1:45500.service - OpenSSH per-connection server daemon (10.0.0.1:45500). May 14 23:53:56.495549 sshd[3653]: Accepted publickey for core from 10.0.0.1 port 45500 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 14 23:53:56.491275 sshd-session[3653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:53:56.510479 systemd-logind[1460]: New session 8 of user core. May 14 23:53:56.522999 systemd[1]: Started session-8.scope - Session 8 of User core. May 14 23:53:56.655031 kubelet[2635]: I0514 23:53:56.654903 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64638f12b1279b2b0ba9e81e29f292742c3d924613846bbe2cbf009965970c49" May 14 23:53:56.656688 containerd[1476]: time="2025-05-14T23:53:56.656658943Z" level=info msg="StopPodSandbox for \"64638f12b1279b2b0ba9e81e29f292742c3d924613846bbe2cbf009965970c49\"" May 14 23:53:56.662854 containerd[1476]: time="2025-05-14T23:53:56.661866527Z" level=info msg="Ensure that sandbox 64638f12b1279b2b0ba9e81e29f292742c3d924613846bbe2cbf009965970c49 in task-service has been cleanup successfully" May 14 23:53:56.665360 kubelet[2635]: I0514 23:53:56.664577 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6980ceb1a80f1c499968711b2039cd46613df3ed8a462932bc9870dba7f89ec" May 14 23:53:56.666028 containerd[1476]: time="2025-05-14T23:53:56.666001079Z" level=info msg="TearDown network for sandbox \"64638f12b1279b2b0ba9e81e29f292742c3d924613846bbe2cbf009965970c49\" successfully" May 14 23:53:56.666370 containerd[1476]: time="2025-05-14T23:53:56.666209652Z" level=info msg="StopPodSandbox for \"64638f12b1279b2b0ba9e81e29f292742c3d924613846bbe2cbf009965970c49\" returns successfully" May 14 23:53:56.667192 systemd[1]: run-netns-cni\x2df149e2dd\x2ddf90\x2d9e0d\x2dd9da\x2d13c4ed562943.mount: Deactivated successfully. 
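Every failed sandbox in the entries above bottoms out in the same precondition: the Calico CNI plugin wants /var/lib/calico/nodename, a file the calico/node container is expected to write once it is running, and until it exists each CNI add or delete fails with the quoted "stat ... no such file or directory" message. The sketch below illustrates that dependency under that assumption; it is not Calico's code.

package main

import (
	"fmt"
	"os"
	"strings"
)

// calicoNodeName reads the node name that calico/node is expected to leave at
// /var/lib/calico/nodename; while the file is missing, CNI operations fail with
// the error quoted in the surrounding log entries.
func calicoNodeName(path string) (string, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	name, err := calicoNodeName("/var/lib/calico/nodename")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("calico node name:", name)
}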
May 14 23:53:56.669154 containerd[1476]: time="2025-05-14T23:53:56.668240382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zx5hz,Uid:c3b238b4-7acc-401a-8dae-17e6c81aeb42,Namespace:calico-system,Attempt:1,}" May 14 23:53:56.675660 containerd[1476]: time="2025-05-14T23:53:56.673268479Z" level=info msg="StopPodSandbox for \"c6980ceb1a80f1c499968711b2039cd46613df3ed8a462932bc9870dba7f89ec\"" May 14 23:53:56.675660 containerd[1476]: time="2025-05-14T23:53:56.674105455Z" level=info msg="Ensure that sandbox c6980ceb1a80f1c499968711b2039cd46613df3ed8a462932bc9870dba7f89ec in task-service has been cleanup successfully" May 14 23:53:56.686859 kubelet[2635]: I0514 23:53:56.686372 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="342f0da20d9f4dc5bad7675220b463dec5464d9133866fc2665994e8551463d5" May 14 23:53:56.690563 containerd[1476]: time="2025-05-14T23:53:56.687945234Z" level=info msg="StopPodSandbox for \"342f0da20d9f4dc5bad7675220b463dec5464d9133866fc2665994e8551463d5\"" May 14 23:53:56.690563 containerd[1476]: time="2025-05-14T23:53:56.688187773Z" level=info msg="Ensure that sandbox 342f0da20d9f4dc5bad7675220b463dec5464d9133866fc2665994e8551463d5 in task-service has been cleanup successfully" May 14 23:53:56.690563 containerd[1476]: time="2025-05-14T23:53:56.689707326Z" level=info msg="TearDown network for sandbox \"c6980ceb1a80f1c499968711b2039cd46613df3ed8a462932bc9870dba7f89ec\" successfully" May 14 23:53:56.690563 containerd[1476]: time="2025-05-14T23:53:56.689733187Z" level=info msg="StopPodSandbox for \"c6980ceb1a80f1c499968711b2039cd46613df3ed8a462932bc9870dba7f89ec\" returns successfully" May 14 23:53:56.692253 containerd[1476]: time="2025-05-14T23:53:56.691156574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564c88fc57-7zxh5,Uid:9046d3b9-bfcc-40d1-a2ed-7f3e2193399a,Namespace:calico-apiserver,Attempt:1,}" May 14 23:53:56.696124 systemd[1]: run-netns-cni\x2d9855d5df\x2d4783\x2d7449\x2d217d\x2d7a421f380350.mount: Deactivated successfully. May 14 23:53:56.697513 kubelet[2635]: I0514 23:53:56.696875 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f33e4905296fbb90c952de0b4260a7869197385f32c1c9eb9736309b38de54e9" May 14 23:53:56.699672 containerd[1476]: time="2025-05-14T23:53:56.699325595Z" level=info msg="StopPodSandbox for \"f33e4905296fbb90c952de0b4260a7869197385f32c1c9eb9736309b38de54e9\"" May 14 23:53:56.699672 containerd[1476]: time="2025-05-14T23:53:56.705576352Z" level=info msg="Ensure that sandbox f33e4905296fbb90c952de0b4260a7869197385f32c1c9eb9736309b38de54e9 in task-service has been cleanup successfully" May 14 23:53:56.706316 systemd[1]: run-netns-cni\x2d37f475e1\x2d3e79\x2d63e6\x2d1af1\x2dc15c9862a853.mount: Deactivated successfully. 
May 14 23:53:56.716244 containerd[1476]: time="2025-05-14T23:53:56.712842471Z" level=info msg="TearDown network for sandbox \"342f0da20d9f4dc5bad7675220b463dec5464d9133866fc2665994e8551463d5\" successfully" May 14 23:53:56.716244 containerd[1476]: time="2025-05-14T23:53:56.712873781Z" level=info msg="StopPodSandbox for \"342f0da20d9f4dc5bad7675220b463dec5464d9133866fc2665994e8551463d5\" returns successfully" May 14 23:53:56.728764 containerd[1476]: time="2025-05-14T23:53:56.728700496Z" level=info msg="StopPodSandbox for \"33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb\"" May 14 23:53:56.728913 containerd[1476]: time="2025-05-14T23:53:56.728847590Z" level=info msg="TearDown network for sandbox \"33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb\" successfully" May 14 23:53:56.728913 containerd[1476]: time="2025-05-14T23:53:56.728861417Z" level=info msg="StopPodSandbox for \"33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb\" returns successfully" May 14 23:53:56.733644 systemd[1]: run-netns-cni\x2d4a8cdbea\x2d480a\x2d2e32\x2dac89\x2db4dc4a93b43f.mount: Deactivated successfully. May 14 23:53:56.742726 containerd[1476]: time="2025-05-14T23:53:56.742536998Z" level=info msg="TearDown network for sandbox \"f33e4905296fbb90c952de0b4260a7869197385f32c1c9eb9736309b38de54e9\" successfully" May 14 23:53:56.742726 containerd[1476]: time="2025-05-14T23:53:56.742579851Z" level=info msg="StopPodSandbox for \"f33e4905296fbb90c952de0b4260a7869197385f32c1c9eb9736309b38de54e9\" returns successfully" May 14 23:53:56.758665 containerd[1476]: time="2025-05-14T23:53:56.758580451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6997bdb66f-xr6kr,Uid:0bd041e0-42d3-43db-a483-12474ebbedc9,Namespace:calico-system,Attempt:1,}" May 14 23:53:56.760089 kubelet[2635]: I0514 23:53:56.760049 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fc43a55ee56babf2baaa12a3b6ad5441c0befe576446eea7db0237519c01722" May 14 23:53:56.762710 containerd[1476]: time="2025-05-14T23:53:56.762596705Z" level=info msg="StopPodSandbox for \"7fc43a55ee56babf2baaa12a3b6ad5441c0befe576446eea7db0237519c01722\"" May 14 23:53:56.762783 containerd[1476]: time="2025-05-14T23:53:56.762745943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k6rt6,Uid:ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b,Namespace:kube-system,Attempt:2,}" May 14 23:53:56.763444 containerd[1476]: time="2025-05-14T23:53:56.762869923Z" level=info msg="Ensure that sandbox 7fc43a55ee56babf2baaa12a3b6ad5441c0befe576446eea7db0237519c01722 in task-service has been cleanup successfully" May 14 23:53:56.765531 containerd[1476]: time="2025-05-14T23:53:56.763646532Z" level=info msg="TearDown network for sandbox \"7fc43a55ee56babf2baaa12a3b6ad5441c0befe576446eea7db0237519c01722\" successfully" May 14 23:53:56.765531 containerd[1476]: time="2025-05-14T23:53:56.763670498Z" level=info msg="StopPodSandbox for \"7fc43a55ee56babf2baaa12a3b6ad5441c0befe576446eea7db0237519c01722\" returns successfully" May 14 23:53:56.768293 containerd[1476]: time="2025-05-14T23:53:56.768241162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564c88fc57-zsf99,Uid:274c1c1a-50ff-4e53-bdf7-547b26e013ec,Namespace:calico-apiserver,Attempt:1,}" May 14 23:53:56.769038 kubelet[2635]: I0514 23:53:56.768839 2635 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="a500e5e0e5e581d11c138a050e95e4036d134b1201f8ea691935fed3c26f6323" May 14 23:53:56.769675 containerd[1476]: time="2025-05-14T23:53:56.769648900Z" level=info msg="StopPodSandbox for \"a500e5e0e5e581d11c138a050e95e4036d134b1201f8ea691935fed3c26f6323\"" May 14 23:53:56.770966 containerd[1476]: time="2025-05-14T23:53:56.770133976Z" level=info msg="Ensure that sandbox a500e5e0e5e581d11c138a050e95e4036d134b1201f8ea691935fed3c26f6323 in task-service has been cleanup successfully" May 14 23:53:56.771439 containerd[1476]: time="2025-05-14T23:53:56.771315187Z" level=info msg="TearDown network for sandbox \"a500e5e0e5e581d11c138a050e95e4036d134b1201f8ea691935fed3c26f6323\" successfully" May 14 23:53:56.771439 containerd[1476]: time="2025-05-14T23:53:56.771332350Z" level=info msg="StopPodSandbox for \"a500e5e0e5e581d11c138a050e95e4036d134b1201f8ea691935fed3c26f6323\" returns successfully" May 14 23:53:56.772949 containerd[1476]: time="2025-05-14T23:53:56.771984940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mcmz8,Uid:78747a92-dcde-4a68-97b9-39a31a2ff2f2,Namespace:kube-system,Attempt:1,}" May 14 23:53:57.021722 sshd[3655]: Connection closed by 10.0.0.1 port 45500 May 14 23:53:57.022728 sshd-session[3653]: pam_unix(sshd:session): session closed for user core May 14 23:53:57.027461 systemd[1]: sshd@7-10.0.0.25:22-10.0.0.1:45500.service: Deactivated successfully. May 14 23:53:57.029889 systemd[1]: session-8.scope: Deactivated successfully. May 14 23:53:57.031858 systemd-logind[1460]: Session 8 logged out. Waiting for processes to exit. May 14 23:53:57.033052 systemd-logind[1460]: Removed session 8. May 14 23:53:57.243122 systemd[1]: run-netns-cni\x2d37bf9b9c\x2dea0e\x2dc140\x2dbca8\x2deafb483aa9cf.mount: Deactivated successfully. May 14 23:53:57.243392 systemd[1]: run-netns-cni\x2d86d42fb3\x2d5fa8\x2d6bf5\x2d738f\x2dc9baabf1c2c8.mount: Deactivated successfully. 
May 14 23:53:59.393014 containerd[1476]: time="2025-05-14T23:53:59.392964019Z" level=error msg="Failed to destroy network for sandbox \"268e6454561b6cd1e34cf6cd2a2b9073b3347d38c2ae50171214f1068d86d773\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:59.394257 containerd[1476]: time="2025-05-14T23:53:59.394015395Z" level=error msg="encountered an error cleaning up failed sandbox \"268e6454561b6cd1e34cf6cd2a2b9073b3347d38c2ae50171214f1068d86d773\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:59.394257 containerd[1476]: time="2025-05-14T23:53:59.394078546Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zx5hz,Uid:c3b238b4-7acc-401a-8dae-17e6c81aeb42,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"268e6454561b6cd1e34cf6cd2a2b9073b3347d38c2ae50171214f1068d86d773\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:59.395037 kubelet[2635]: E0514 23:53:59.394494 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"268e6454561b6cd1e34cf6cd2a2b9073b3347d38c2ae50171214f1068d86d773\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:59.395037 kubelet[2635]: E0514 23:53:59.394590 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"268e6454561b6cd1e34cf6cd2a2b9073b3347d38c2ae50171214f1068d86d773\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zx5hz" May 14 23:53:59.395037 kubelet[2635]: E0514 23:53:59.394619 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"268e6454561b6cd1e34cf6cd2a2b9073b3347d38c2ae50171214f1068d86d773\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zx5hz" May 14 23:53:59.395531 kubelet[2635]: E0514 23:53:59.394700 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zx5hz_calico-system(c3b238b4-7acc-401a-8dae-17e6c81aeb42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zx5hz_calico-system(c3b238b4-7acc-401a-8dae-17e6c81aeb42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"268e6454561b6cd1e34cf6cd2a2b9073b3347d38c2ae50171214f1068d86d773\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zx5hz" 
podUID="c3b238b4-7acc-401a-8dae-17e6c81aeb42" May 14 23:53:59.431820 containerd[1476]: time="2025-05-14T23:53:59.431237298Z" level=error msg="Failed to destroy network for sandbox \"5517df5d3ad135e3dc41f6cb51dce505f9f23f9aeb3c529d7190605dde52f2dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:59.431820 containerd[1476]: time="2025-05-14T23:53:59.431716892Z" level=error msg="encountered an error cleaning up failed sandbox \"5517df5d3ad135e3dc41f6cb51dce505f9f23f9aeb3c529d7190605dde52f2dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:59.431820 containerd[1476]: time="2025-05-14T23:53:59.431777238Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564c88fc57-7zxh5,Uid:9046d3b9-bfcc-40d1-a2ed-7f3e2193399a,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"5517df5d3ad135e3dc41f6cb51dce505f9f23f9aeb3c529d7190605dde52f2dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:59.432139 kubelet[2635]: E0514 23:53:59.432029 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5517df5d3ad135e3dc41f6cb51dce505f9f23f9aeb3c529d7190605dde52f2dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:59.432139 kubelet[2635]: E0514 23:53:59.432108 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5517df5d3ad135e3dc41f6cb51dce505f9f23f9aeb3c529d7190605dde52f2dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564c88fc57-7zxh5" May 14 23:53:59.432232 kubelet[2635]: E0514 23:53:59.432140 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5517df5d3ad135e3dc41f6cb51dce505f9f23f9aeb3c529d7190605dde52f2dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564c88fc57-7zxh5" May 14 23:53:59.432232 kubelet[2635]: E0514 23:53:59.432195 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-564c88fc57-7zxh5_calico-apiserver(9046d3b9-bfcc-40d1-a2ed-7f3e2193399a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-564c88fc57-7zxh5_calico-apiserver(9046d3b9-bfcc-40d1-a2ed-7f3e2193399a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5517df5d3ad135e3dc41f6cb51dce505f9f23f9aeb3c529d7190605dde52f2dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-564c88fc57-7zxh5" podUID="9046d3b9-bfcc-40d1-a2ed-7f3e2193399a" May 14 23:53:59.434771 containerd[1476]: time="2025-05-14T23:53:59.434540763Z" level=error msg="Failed to destroy network for sandbox \"e17a5d59d30bbf026bcd8bb593ff64be844debc60baddd4f04ee814c91cd8450\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:59.435762 containerd[1476]: time="2025-05-14T23:53:59.435727368Z" level=error msg="encountered an error cleaning up failed sandbox \"e17a5d59d30bbf026bcd8bb593ff64be844debc60baddd4f04ee814c91cd8450\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:59.435819 containerd[1476]: time="2025-05-14T23:53:59.435792935Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mcmz8,Uid:78747a92-dcde-4a68-97b9-39a31a2ff2f2,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"e17a5d59d30bbf026bcd8bb593ff64be844debc60baddd4f04ee814c91cd8450\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:59.436433 kubelet[2635]: E0514 23:53:59.436177 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e17a5d59d30bbf026bcd8bb593ff64be844debc60baddd4f04ee814c91cd8450\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:59.436433 kubelet[2635]: E0514 23:53:59.436272 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e17a5d59d30bbf026bcd8bb593ff64be844debc60baddd4f04ee814c91cd8450\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mcmz8" May 14 23:53:59.436433 kubelet[2635]: E0514 23:53:59.436302 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e17a5d59d30bbf026bcd8bb593ff64be844debc60baddd4f04ee814c91cd8450\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mcmz8" May 14 23:53:59.436586 kubelet[2635]: E0514 23:53:59.436361 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-mcmz8_kube-system(78747a92-dcde-4a68-97b9-39a31a2ff2f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-mcmz8_kube-system(78747a92-dcde-4a68-97b9-39a31a2ff2f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e17a5d59d30bbf026bcd8bb593ff64be844debc60baddd4f04ee814c91cd8450\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mcmz8" podUID="78747a92-dcde-4a68-97b9-39a31a2ff2f2" May 14 23:53:59.438264 containerd[1476]: time="2025-05-14T23:53:59.438221303Z" level=error msg="Failed to destroy network for sandbox \"429ffddfb89439b25885158427c0d54b54e7fa1fe4b1f966085969fb4716bd55\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:59.440181 containerd[1476]: time="2025-05-14T23:53:59.438740834Z" level=error msg="encountered an error cleaning up failed sandbox \"429ffddfb89439b25885158427c0d54b54e7fa1fe4b1f966085969fb4716bd55\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:59.440181 containerd[1476]: time="2025-05-14T23:53:59.438825507Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k6rt6,Uid:ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"429ffddfb89439b25885158427c0d54b54e7fa1fe4b1f966085969fb4716bd55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:59.440349 kubelet[2635]: E0514 23:53:59.439051 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"429ffddfb89439b25885158427c0d54b54e7fa1fe4b1f966085969fb4716bd55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:59.440349 kubelet[2635]: E0514 23:53:59.439584 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"429ffddfb89439b25885158427c0d54b54e7fa1fe4b1f966085969fb4716bd55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k6rt6" May 14 23:53:59.440349 kubelet[2635]: E0514 23:53:59.439609 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"429ffddfb89439b25885158427c0d54b54e7fa1fe4b1f966085969fb4716bd55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k6rt6" May 14 23:53:59.440628 kubelet[2635]: E0514 23:53:59.439679 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-k6rt6_kube-system(ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-k6rt6_kube-system(ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"429ffddfb89439b25885158427c0d54b54e7fa1fe4b1f966085969fb4716bd55\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-k6rt6" podUID="ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b" May 14 23:53:59.453388 containerd[1476]: time="2025-05-14T23:53:59.453317489Z" level=error msg="Failed to destroy network for sandbox \"3c3adac1404d312ded0a39e3b9174a60b3e9628ed4edfb2a16682f7e5b6a0b11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:59.453840 containerd[1476]: time="2025-05-14T23:53:59.453810108Z" level=error msg="encountered an error cleaning up failed sandbox \"3c3adac1404d312ded0a39e3b9174a60b3e9628ed4edfb2a16682f7e5b6a0b11\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:59.453911 containerd[1476]: time="2025-05-14T23:53:59.453883038Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564c88fc57-zsf99,Uid:274c1c1a-50ff-4e53-bdf7-547b26e013ec,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"3c3adac1404d312ded0a39e3b9174a60b3e9628ed4edfb2a16682f7e5b6a0b11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:59.454270 kubelet[2635]: E0514 23:53:59.454217 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c3adac1404d312ded0a39e3b9174a60b3e9628ed4edfb2a16682f7e5b6a0b11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:59.454354 kubelet[2635]: E0514 23:53:59.454312 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c3adac1404d312ded0a39e3b9174a60b3e9628ed4edfb2a16682f7e5b6a0b11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564c88fc57-zsf99" May 14 23:53:59.454354 kubelet[2635]: E0514 23:53:59.454336 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c3adac1404d312ded0a39e3b9174a60b3e9628ed4edfb2a16682f7e5b6a0b11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564c88fc57-zsf99" May 14 23:53:59.454451 kubelet[2635]: E0514 23:53:59.454386 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-564c88fc57-zsf99_calico-apiserver(274c1c1a-50ff-4e53-bdf7-547b26e013ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-564c88fc57-zsf99_calico-apiserver(274c1c1a-50ff-4e53-bdf7-547b26e013ec)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"3c3adac1404d312ded0a39e3b9174a60b3e9628ed4edfb2a16682f7e5b6a0b11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-564c88fc57-zsf99" podUID="274c1c1a-50ff-4e53-bdf7-547b26e013ec" May 14 23:53:59.461736 containerd[1476]: time="2025-05-14T23:53:59.461669219Z" level=error msg="Failed to destroy network for sandbox \"3fa3b1003a8b79389e551d954f0b3529a472a25717ba42e128b08e907973f4ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:59.462202 containerd[1476]: time="2025-05-14T23:53:59.462175635Z" level=error msg="encountered an error cleaning up failed sandbox \"3fa3b1003a8b79389e551d954f0b3529a472a25717ba42e128b08e907973f4ea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:59.462273 containerd[1476]: time="2025-05-14T23:53:59.462248466Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6997bdb66f-xr6kr,Uid:0bd041e0-42d3-43db-a483-12474ebbedc9,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"3fa3b1003a8b79389e551d954f0b3529a472a25717ba42e128b08e907973f4ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:59.462573 kubelet[2635]: E0514 23:53:59.462522 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fa3b1003a8b79389e551d954f0b3529a472a25717ba42e128b08e907973f4ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:53:59.462621 kubelet[2635]: E0514 23:53:59.462603 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fa3b1003a8b79389e551d954f0b3529a472a25717ba42e128b08e907973f4ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6997bdb66f-xr6kr" May 14 23:53:59.462655 kubelet[2635]: E0514 23:53:59.462631 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fa3b1003a8b79389e551d954f0b3529a472a25717ba42e128b08e907973f4ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6997bdb66f-xr6kr" May 14 23:53:59.462739 kubelet[2635]: E0514 23:53:59.462702 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6997bdb66f-xr6kr_calico-system(0bd041e0-42d3-43db-a483-12474ebbedc9)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-kube-controllers-6997bdb66f-xr6kr_calico-system(0bd041e0-42d3-43db-a483-12474ebbedc9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3fa3b1003a8b79389e551d954f0b3529a472a25717ba42e128b08e907973f4ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6997bdb66f-xr6kr" podUID="0bd041e0-42d3-43db-a483-12474ebbedc9" May 14 23:53:59.778836 kubelet[2635]: I0514 23:53:59.778699 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="268e6454561b6cd1e34cf6cd2a2b9073b3347d38c2ae50171214f1068d86d773" May 14 23:53:59.779883 containerd[1476]: time="2025-05-14T23:53:59.779447146Z" level=info msg="StopPodSandbox for \"268e6454561b6cd1e34cf6cd2a2b9073b3347d38c2ae50171214f1068d86d773\"" May 14 23:53:59.779883 containerd[1476]: time="2025-05-14T23:53:59.779718539Z" level=info msg="Ensure that sandbox 268e6454561b6cd1e34cf6cd2a2b9073b3347d38c2ae50171214f1068d86d773 in task-service has been cleanup successfully" May 14 23:53:59.780074 containerd[1476]: time="2025-05-14T23:53:59.780052984Z" level=info msg="TearDown network for sandbox \"268e6454561b6cd1e34cf6cd2a2b9073b3347d38c2ae50171214f1068d86d773\" successfully" May 14 23:53:59.780165 containerd[1476]: time="2025-05-14T23:53:59.780147085Z" level=info msg="StopPodSandbox for \"268e6454561b6cd1e34cf6cd2a2b9073b3347d38c2ae50171214f1068d86d773\" returns successfully" May 14 23:53:59.780850 containerd[1476]: time="2025-05-14T23:53:59.780773763Z" level=info msg="StopPodSandbox for \"64638f12b1279b2b0ba9e81e29f292742c3d924613846bbe2cbf009965970c49\"" May 14 23:53:59.780933 containerd[1476]: time="2025-05-14T23:53:59.780910186Z" level=info msg="TearDown network for sandbox \"64638f12b1279b2b0ba9e81e29f292742c3d924613846bbe2cbf009965970c49\" successfully" May 14 23:53:59.780933 containerd[1476]: time="2025-05-14T23:53:59.780926055Z" level=info msg="StopPodSandbox for \"64638f12b1279b2b0ba9e81e29f292742c3d924613846bbe2cbf009965970c49\" returns successfully" May 14 23:53:59.781235 kubelet[2635]: I0514 23:53:59.781207 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5517df5d3ad135e3dc41f6cb51dce505f9f23f9aeb3c529d7190605dde52f2dc" May 14 23:53:59.781829 containerd[1476]: time="2025-05-14T23:53:59.781797214Z" level=info msg="StopPodSandbox for \"5517df5d3ad135e3dc41f6cb51dce505f9f23f9aeb3c529d7190605dde52f2dc\"" May 14 23:53:59.782018 containerd[1476]: time="2025-05-14T23:53:59.781996047Z" level=info msg="Ensure that sandbox 5517df5d3ad135e3dc41f6cb51dce505f9f23f9aeb3c529d7190605dde52f2dc in task-service has been cleanup successfully" May 14 23:53:59.782280 containerd[1476]: time="2025-05-14T23:53:59.782215780Z" level=info msg="TearDown network for sandbox \"5517df5d3ad135e3dc41f6cb51dce505f9f23f9aeb3c529d7190605dde52f2dc\" successfully" May 14 23:53:59.782280 containerd[1476]: time="2025-05-14T23:53:59.782236430Z" level=info msg="StopPodSandbox for \"5517df5d3ad135e3dc41f6cb51dce505f9f23f9aeb3c529d7190605dde52f2dc\" returns successfully" May 14 23:53:59.783244 containerd[1476]: time="2025-05-14T23:53:59.783218282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zx5hz,Uid:c3b238b4-7acc-401a-8dae-17e6c81aeb42,Namespace:calico-system,Attempt:2,}" May 14 23:53:59.783669 containerd[1476]: time="2025-05-14T23:53:59.783576623Z" 
level=info msg="StopPodSandbox for \"c6980ceb1a80f1c499968711b2039cd46613df3ed8a462932bc9870dba7f89ec\"" May 14 23:53:59.783669 containerd[1476]: time="2025-05-14T23:53:59.783666545Z" level=info msg="TearDown network for sandbox \"c6980ceb1a80f1c499968711b2039cd46613df3ed8a462932bc9870dba7f89ec\" successfully" May 14 23:53:59.783772 containerd[1476]: time="2025-05-14T23:53:59.783676695Z" level=info msg="StopPodSandbox for \"c6980ceb1a80f1c499968711b2039cd46613df3ed8a462932bc9870dba7f89ec\" returns successfully" May 14 23:53:59.784458 containerd[1476]: time="2025-05-14T23:53:59.784409637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564c88fc57-7zxh5,Uid:9046d3b9-bfcc-40d1-a2ed-7f3e2193399a,Namespace:calico-apiserver,Attempt:2,}" May 14 23:53:59.785453 kubelet[2635]: I0514 23:53:59.785266 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="429ffddfb89439b25885158427c0d54b54e7fa1fe4b1f966085969fb4716bd55" May 14 23:53:59.786448 containerd[1476]: time="2025-05-14T23:53:59.786394773Z" level=info msg="StopPodSandbox for \"429ffddfb89439b25885158427c0d54b54e7fa1fe4b1f966085969fb4716bd55\"" May 14 23:53:59.786614 kubelet[2635]: I0514 23:53:59.786593 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fa3b1003a8b79389e551d954f0b3529a472a25717ba42e128b08e907973f4ea" May 14 23:53:59.786694 containerd[1476]: time="2025-05-14T23:53:59.786628041Z" level=info msg="Ensure that sandbox 429ffddfb89439b25885158427c0d54b54e7fa1fe4b1f966085969fb4716bd55 in task-service has been cleanup successfully" May 14 23:53:59.788157 containerd[1476]: time="2025-05-14T23:53:59.786845440Z" level=info msg="TearDown network for sandbox \"429ffddfb89439b25885158427c0d54b54e7fa1fe4b1f966085969fb4716bd55\" successfully" May 14 23:53:59.788157 containerd[1476]: time="2025-05-14T23:53:59.786889005Z" level=info msg="StopPodSandbox for \"429ffddfb89439b25885158427c0d54b54e7fa1fe4b1f966085969fb4716bd55\" returns successfully" May 14 23:53:59.788157 containerd[1476]: time="2025-05-14T23:53:59.787036248Z" level=info msg="StopPodSandbox for \"3fa3b1003a8b79389e551d954f0b3529a472a25717ba42e128b08e907973f4ea\"" May 14 23:53:59.788157 containerd[1476]: time="2025-05-14T23:53:59.787302812Z" level=info msg="Ensure that sandbox 3fa3b1003a8b79389e551d954f0b3529a472a25717ba42e128b08e907973f4ea in task-service has been cleanup successfully" May 14 23:53:59.788157 containerd[1476]: time="2025-05-14T23:53:59.787401001Z" level=info msg="StopPodSandbox for \"342f0da20d9f4dc5bad7675220b463dec5464d9133866fc2665994e8551463d5\"" May 14 23:53:59.788157 containerd[1476]: time="2025-05-14T23:53:59.787512476Z" level=info msg="TearDown network for sandbox \"3fa3b1003a8b79389e551d954f0b3529a472a25717ba42e128b08e907973f4ea\" successfully" May 14 23:53:59.788157 containerd[1476]: time="2025-05-14T23:53:59.787529227Z" level=info msg="StopPodSandbox for \"3fa3b1003a8b79389e551d954f0b3529a472a25717ba42e128b08e907973f4ea\" returns successfully" May 14 23:53:59.788157 containerd[1476]: time="2025-05-14T23:53:59.787797815Z" level=info msg="TearDown network for sandbox \"342f0da20d9f4dc5bad7675220b463dec5464d9133866fc2665994e8551463d5\" successfully" May 14 23:53:59.788157 containerd[1476]: time="2025-05-14T23:53:59.787819447Z" level=info msg="StopPodSandbox for \"342f0da20d9f4dc5bad7675220b463dec5464d9133866fc2665994e8551463d5\" returns successfully" May 14 23:53:59.788157 containerd[1476]: time="2025-05-14T23:53:59.787806332Z" level=info msg="StopPodSandbox 
for \"f33e4905296fbb90c952de0b4260a7869197385f32c1c9eb9736309b38de54e9\"" May 14 23:53:59.788157 containerd[1476]: time="2025-05-14T23:53:59.787947534Z" level=info msg="TearDown network for sandbox \"f33e4905296fbb90c952de0b4260a7869197385f32c1c9eb9736309b38de54e9\" successfully" May 14 23:53:59.788157 containerd[1476]: time="2025-05-14T23:53:59.787961520Z" level=info msg="StopPodSandbox for \"f33e4905296fbb90c952de0b4260a7869197385f32c1c9eb9736309b38de54e9\" returns successfully" May 14 23:53:59.788157 containerd[1476]: time="2025-05-14T23:53:59.788081552Z" level=info msg="StopPodSandbox for \"33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb\"" May 14 23:53:59.788157 containerd[1476]: time="2025-05-14T23:53:59.788156796Z" level=info msg="TearDown network for sandbox \"33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb\" successfully" May 14 23:53:59.788157 containerd[1476]: time="2025-05-14T23:53:59.788166075Z" level=info msg="StopPodSandbox for \"33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb\" returns successfully" May 14 23:53:59.789454 containerd[1476]: time="2025-05-14T23:53:59.788617675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6997bdb66f-xr6kr,Uid:0bd041e0-42d3-43db-a483-12474ebbedc9,Namespace:calico-system,Attempt:2,}" May 14 23:53:59.789454 containerd[1476]: time="2025-05-14T23:53:59.789236848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k6rt6,Uid:ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b,Namespace:kube-system,Attempt:3,}" May 14 23:53:59.804582 kubelet[2635]: I0514 23:53:59.804527 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c3adac1404d312ded0a39e3b9174a60b3e9628ed4edfb2a16682f7e5b6a0b11" May 14 23:53:59.806329 containerd[1476]: time="2025-05-14T23:53:59.806264995Z" level=info msg="StopPodSandbox for \"3c3adac1404d312ded0a39e3b9174a60b3e9628ed4edfb2a16682f7e5b6a0b11\"" May 14 23:53:59.806562 containerd[1476]: time="2025-05-14T23:53:59.806513293Z" level=info msg="Ensure that sandbox 3c3adac1404d312ded0a39e3b9174a60b3e9628ed4edfb2a16682f7e5b6a0b11 in task-service has been cleanup successfully" May 14 23:53:59.807234 containerd[1476]: time="2025-05-14T23:53:59.807196920Z" level=info msg="TearDown network for sandbox \"3c3adac1404d312ded0a39e3b9174a60b3e9628ed4edfb2a16682f7e5b6a0b11\" successfully" May 14 23:53:59.807291 containerd[1476]: time="2025-05-14T23:53:59.807234353Z" level=info msg="StopPodSandbox for \"3c3adac1404d312ded0a39e3b9174a60b3e9628ed4edfb2a16682f7e5b6a0b11\" returns successfully" May 14 23:53:59.807796 kubelet[2635]: I0514 23:53:59.807756 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e17a5d59d30bbf026bcd8bb593ff64be844debc60baddd4f04ee814c91cd8450" May 14 23:53:59.808065 containerd[1476]: time="2025-05-14T23:53:59.808018353Z" level=info msg="StopPodSandbox for \"7fc43a55ee56babf2baaa12a3b6ad5441c0befe576446eea7db0237519c01722\"" May 14 23:53:59.808180 containerd[1476]: time="2025-05-14T23:53:59.808159735Z" level=info msg="TearDown network for sandbox \"7fc43a55ee56babf2baaa12a3b6ad5441c0befe576446eea7db0237519c01722\" successfully" May 14 23:53:59.808180 containerd[1476]: time="2025-05-14T23:53:59.808173883Z" level=info msg="StopPodSandbox for \"7fc43a55ee56babf2baaa12a3b6ad5441c0befe576446eea7db0237519c01722\" returns successfully" May 14 23:53:59.808375 containerd[1476]: time="2025-05-14T23:53:59.808302110Z" level=info msg="StopPodSandbox for 
\"e17a5d59d30bbf026bcd8bb593ff64be844debc60baddd4f04ee814c91cd8450\"" May 14 23:53:59.808685 containerd[1476]: time="2025-05-14T23:53:59.808657425Z" level=info msg="Ensure that sandbox e17a5d59d30bbf026bcd8bb593ff64be844debc60baddd4f04ee814c91cd8450 in task-service has been cleanup successfully" May 14 23:53:59.809162 containerd[1476]: time="2025-05-14T23:53:59.809119444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564c88fc57-zsf99,Uid:274c1c1a-50ff-4e53-bdf7-547b26e013ec,Namespace:calico-apiserver,Attempt:2,}" May 14 23:53:59.809226 containerd[1476]: time="2025-05-14T23:53:59.809160854Z" level=info msg="TearDown network for sandbox \"e17a5d59d30bbf026bcd8bb593ff64be844debc60baddd4f04ee814c91cd8450\" successfully" May 14 23:53:59.809226 containerd[1476]: time="2025-05-14T23:53:59.809173970Z" level=info msg="StopPodSandbox for \"e17a5d59d30bbf026bcd8bb593ff64be844debc60baddd4f04ee814c91cd8450\" returns successfully" May 14 23:53:59.809755 containerd[1476]: time="2025-05-14T23:53:59.809673943Z" level=info msg="StopPodSandbox for \"a500e5e0e5e581d11c138a050e95e4036d134b1201f8ea691935fed3c26f6323\"" May 14 23:53:59.809855 containerd[1476]: time="2025-05-14T23:53:59.809825535Z" level=info msg="TearDown network for sandbox \"a500e5e0e5e581d11c138a050e95e4036d134b1201f8ea691935fed3c26f6323\" successfully" May 14 23:53:59.809855 containerd[1476]: time="2025-05-14T23:53:59.809842648Z" level=info msg="StopPodSandbox for \"a500e5e0e5e581d11c138a050e95e4036d134b1201f8ea691935fed3c26f6323\" returns successfully" May 14 23:53:59.810224 containerd[1476]: time="2025-05-14T23:53:59.810195858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mcmz8,Uid:78747a92-dcde-4a68-97b9-39a31a2ff2f2,Namespace:kube-system,Attempt:2,}" May 14 23:54:00.175170 systemd[1]: run-netns-cni\x2df76d35c8\x2d1efb\x2dd92d\x2d9d05\x2d10d5ba375821.mount: Deactivated successfully. May 14 23:54:00.175314 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5517df5d3ad135e3dc41f6cb51dce505f9f23f9aeb3c529d7190605dde52f2dc-shm.mount: Deactivated successfully. May 14 23:54:00.175441 systemd[1]: run-netns-cni\x2db478ca5e\x2d6716\x2dc78a\x2dad8b\x2d0ddb1bd8c6b2.mount: Deactivated successfully. May 14 23:54:00.175539 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-268e6454561b6cd1e34cf6cd2a2b9073b3347d38c2ae50171214f1068d86d773-shm.mount: Deactivated successfully. May 14 23:54:02.045890 systemd[1]: Started sshd@8-10.0.0.25:22-10.0.0.1:45508.service - OpenSSH per-connection server daemon (10.0.0.1:45508). May 14 23:54:02.128269 sshd[3908]: Accepted publickey for core from 10.0.0.1 port 45508 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 14 23:54:02.130382 sshd-session[3908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:02.136257 systemd-logind[1460]: New session 9 of user core. May 14 23:54:02.142720 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 23:54:02.226442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4004899812.mount: Deactivated successfully. May 14 23:54:02.556051 sshd[3910]: Connection closed by 10.0.0.1 port 45508 May 14 23:54:02.556474 sshd-session[3908]: pam_unix(sshd:session): session closed for user core May 14 23:54:02.561882 systemd[1]: sshd@8-10.0.0.25:22-10.0.0.1:45508.service: Deactivated successfully. May 14 23:54:02.564748 systemd[1]: session-9.scope: Deactivated successfully. 
May 14 23:54:02.565648 systemd-logind[1460]: Session 9 logged out. Waiting for processes to exit. May 14 23:54:02.566883 systemd-logind[1460]: Removed session 9. May 14 23:54:02.751097 containerd[1476]: time="2025-05-14T23:54:02.750946194Z" level=error msg="Failed to destroy network for sandbox \"5838299e37b21b676e7fcfae99022d44d5f0f28676549724d6a13816c432b0a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:02.752136 containerd[1476]: time="2025-05-14T23:54:02.751944113Z" level=error msg="encountered an error cleaning up failed sandbox \"5838299e37b21b676e7fcfae99022d44d5f0f28676549724d6a13816c432b0a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:02.752136 containerd[1476]: time="2025-05-14T23:54:02.752013707Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k6rt6,Uid:ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"5838299e37b21b676e7fcfae99022d44d5f0f28676549724d6a13816c432b0a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:02.752325 kubelet[2635]: E0514 23:54:02.752279 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5838299e37b21b676e7fcfae99022d44d5f0f28676549724d6a13816c432b0a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:02.752731 kubelet[2635]: E0514 23:54:02.752356 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5838299e37b21b676e7fcfae99022d44d5f0f28676549724d6a13816c432b0a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k6rt6" May 14 23:54:02.752731 kubelet[2635]: E0514 23:54:02.752383 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5838299e37b21b676e7fcfae99022d44d5f0f28676549724d6a13816c432b0a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k6rt6" May 14 23:54:02.752804 containerd[1476]: time="2025-05-14T23:54:02.752661773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:02.753675 kubelet[2635]: E0514 23:54:02.753598 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-k6rt6_kube-system(ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-668d6bf9bc-k6rt6_kube-system(ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5838299e37b21b676e7fcfae99022d44d5f0f28676549724d6a13816c432b0a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-k6rt6" podUID="ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b" May 14 23:54:02.755224 containerd[1476]: time="2025-05-14T23:54:02.755088660Z" level=error msg="Failed to destroy network for sandbox \"39d6ed701550c898029e120310ff79888bef85ab582ebf0e2e11dec445579c7d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:02.755597 containerd[1476]: time="2025-05-14T23:54:02.755570196Z" level=error msg="encountered an error cleaning up failed sandbox \"39d6ed701550c898029e120310ff79888bef85ab582ebf0e2e11dec445579c7d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:02.755718 containerd[1476]: time="2025-05-14T23:54:02.755694435Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564c88fc57-zsf99,Uid:274c1c1a-50ff-4e53-bdf7-547b26e013ec,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"39d6ed701550c898029e120310ff79888bef85ab582ebf0e2e11dec445579c7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:02.756057 kubelet[2635]: E0514 23:54:02.756016 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39d6ed701550c898029e120310ff79888bef85ab582ebf0e2e11dec445579c7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:02.756389 containerd[1476]: time="2025-05-14T23:54:02.756158146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 14 23:54:02.756478 kubelet[2635]: E0514 23:54:02.756262 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39d6ed701550c898029e120310ff79888bef85ab582ebf0e2e11dec445579c7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564c88fc57-zsf99" May 14 23:54:02.756478 kubelet[2635]: E0514 23:54:02.756329 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39d6ed701550c898029e120310ff79888bef85ab582ebf0e2e11dec445579c7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564c88fc57-zsf99" May 14 
23:54:02.756663 kubelet[2635]: E0514 23:54:02.756545 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-564c88fc57-zsf99_calico-apiserver(274c1c1a-50ff-4e53-bdf7-547b26e013ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-564c88fc57-zsf99_calico-apiserver(274c1c1a-50ff-4e53-bdf7-547b26e013ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39d6ed701550c898029e120310ff79888bef85ab582ebf0e2e11dec445579c7d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-564c88fc57-zsf99" podUID="274c1c1a-50ff-4e53-bdf7-547b26e013ec" May 14 23:54:02.761085 containerd[1476]: time="2025-05-14T23:54:02.761042299Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:02.768144 containerd[1476]: time="2025-05-14T23:54:02.766771086Z" level=error msg="Failed to destroy network for sandbox \"459650d24e79534c698ee1049fadba62cda28dbecdff15bfbd2c4af39e8fc2b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:02.768144 containerd[1476]: time="2025-05-14T23:54:02.767279734Z" level=error msg="encountered an error cleaning up failed sandbox \"459650d24e79534c698ee1049fadba62cda28dbecdff15bfbd2c4af39e8fc2b2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:02.768144 containerd[1476]: time="2025-05-14T23:54:02.767350811Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564c88fc57-7zxh5,Uid:9046d3b9-bfcc-40d1-a2ed-7f3e2193399a,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"459650d24e79534c698ee1049fadba62cda28dbecdff15bfbd2c4af39e8fc2b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:02.768565 kubelet[2635]: E0514 23:54:02.767653 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"459650d24e79534c698ee1049fadba62cda28dbecdff15bfbd2c4af39e8fc2b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:02.768565 kubelet[2635]: E0514 23:54:02.767729 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"459650d24e79534c698ee1049fadba62cda28dbecdff15bfbd2c4af39e8fc2b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564c88fc57-7zxh5" May 14 23:54:02.768565 kubelet[2635]: E0514 23:54:02.767757 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"459650d24e79534c698ee1049fadba62cda28dbecdff15bfbd2c4af39e8fc2b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564c88fc57-7zxh5" May 14 23:54:02.768791 kubelet[2635]: E0514 23:54:02.767819 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-564c88fc57-7zxh5_calico-apiserver(9046d3b9-bfcc-40d1-a2ed-7f3e2193399a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-564c88fc57-7zxh5_calico-apiserver(9046d3b9-bfcc-40d1-a2ed-7f3e2193399a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"459650d24e79534c698ee1049fadba62cda28dbecdff15bfbd2c4af39e8fc2b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-564c88fc57-7zxh5" podUID="9046d3b9-bfcc-40d1-a2ed-7f3e2193399a" May 14 23:54:02.770914 containerd[1476]: time="2025-05-14T23:54:02.770873675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:02.771496 containerd[1476]: time="2025-05-14T23:54:02.771472047Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 8.15760724s" May 14 23:54:02.771564 containerd[1476]: time="2025-05-14T23:54:02.771503577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 14 23:54:02.786099 containerd[1476]: time="2025-05-14T23:54:02.785876838Z" level=error msg="Failed to destroy network for sandbox \"09fd2766c1b27c1110f563f47b829324d1d894079423388d07b7b46946f8c17b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:02.786770 containerd[1476]: time="2025-05-14T23:54:02.786656497Z" level=error msg="encountered an error cleaning up failed sandbox \"09fd2766c1b27c1110f563f47b829324d1d894079423388d07b7b46946f8c17b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:02.786770 containerd[1476]: time="2025-05-14T23:54:02.786722123Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zx5hz,Uid:c3b238b4-7acc-401a-8dae-17e6c81aeb42,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"09fd2766c1b27c1110f563f47b829324d1d894079423388d07b7b46946f8c17b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 14 23:54:02.787164 kubelet[2635]: E0514 23:54:02.787120 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09fd2766c1b27c1110f563f47b829324d1d894079423388d07b7b46946f8c17b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:02.787752 kubelet[2635]: E0514 23:54:02.787734 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09fd2766c1b27c1110f563f47b829324d1d894079423388d07b7b46946f8c17b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zx5hz" May 14 23:54:02.787933 kubelet[2635]: E0514 23:54:02.787832 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09fd2766c1b27c1110f563f47b829324d1d894079423388d07b7b46946f8c17b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zx5hz" May 14 23:54:02.787933 kubelet[2635]: E0514 23:54:02.787888 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zx5hz_calico-system(c3b238b4-7acc-401a-8dae-17e6c81aeb42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zx5hz_calico-system(c3b238b4-7acc-401a-8dae-17e6c81aeb42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09fd2766c1b27c1110f563f47b829324d1d894079423388d07b7b46946f8c17b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zx5hz" podUID="c3b238b4-7acc-401a-8dae-17e6c81aeb42" May 14 23:54:02.788961 containerd[1476]: time="2025-05-14T23:54:02.788838854Z" level=info msg="CreateContainer within sandbox \"b4a578aee745596ee7047340abe675e5a8bdec6dbf174501591cb02dc2f74081\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 14 23:54:02.799525 containerd[1476]: time="2025-05-14T23:54:02.799461082Z" level=error msg="Failed to destroy network for sandbox \"a330a5be0695369a65f8b5b50efcb94668dbb316ccf670873233ab324a9cf62a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:02.800004 containerd[1476]: time="2025-05-14T23:54:02.799965261Z" level=error msg="encountered an error cleaning up failed sandbox \"a330a5be0695369a65f8b5b50efcb94668dbb316ccf670873233ab324a9cf62a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:02.800067 containerd[1476]: time="2025-05-14T23:54:02.800035025Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6997bdb66f-xr6kr,Uid:0bd041e0-42d3-43db-a483-12474ebbedc9,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"a330a5be0695369a65f8b5b50efcb94668dbb316ccf670873233ab324a9cf62a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:02.800280 kubelet[2635]: E0514 23:54:02.800244 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a330a5be0695369a65f8b5b50efcb94668dbb316ccf670873233ab324a9cf62a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:02.800658 kubelet[2635]: E0514 23:54:02.800380 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a330a5be0695369a65f8b5b50efcb94668dbb316ccf670873233ab324a9cf62a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6997bdb66f-xr6kr" May 14 23:54:02.800658 kubelet[2635]: E0514 23:54:02.800404 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a330a5be0695369a65f8b5b50efcb94668dbb316ccf670873233ab324a9cf62a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6997bdb66f-xr6kr" May 14 23:54:02.800658 kubelet[2635]: E0514 23:54:02.800458 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6997bdb66f-xr6kr_calico-system(0bd041e0-42d3-43db-a483-12474ebbedc9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6997bdb66f-xr6kr_calico-system(0bd041e0-42d3-43db-a483-12474ebbedc9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a330a5be0695369a65f8b5b50efcb94668dbb316ccf670873233ab324a9cf62a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6997bdb66f-xr6kr" podUID="0bd041e0-42d3-43db-a483-12474ebbedc9" May 14 23:54:02.819889 kubelet[2635]: I0514 23:54:02.819763 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09fd2766c1b27c1110f563f47b829324d1d894079423388d07b7b46946f8c17b" May 14 23:54:02.820998 containerd[1476]: time="2025-05-14T23:54:02.820955327Z" level=info msg="StopPodSandbox for \"09fd2766c1b27c1110f563f47b829324d1d894079423388d07b7b46946f8c17b\"" May 14 23:54:02.821241 containerd[1476]: time="2025-05-14T23:54:02.821206209Z" level=info msg="Ensure that sandbox 09fd2766c1b27c1110f563f47b829324d1d894079423388d07b7b46946f8c17b in task-service has been cleanup successfully" May 14 23:54:02.821465 containerd[1476]: time="2025-05-14T23:54:02.821444848Z" level=info msg="TearDown network for sandbox 
\"09fd2766c1b27c1110f563f47b829324d1d894079423388d07b7b46946f8c17b\" successfully" May 14 23:54:02.821516 containerd[1476]: time="2025-05-14T23:54:02.821468854Z" level=info msg="StopPodSandbox for \"09fd2766c1b27c1110f563f47b829324d1d894079423388d07b7b46946f8c17b\" returns successfully" May 14 23:54:02.824349 containerd[1476]: time="2025-05-14T23:54:02.824306822Z" level=info msg="StopPodSandbox for \"268e6454561b6cd1e34cf6cd2a2b9073b3347d38c2ae50171214f1068d86d773\"" May 14 23:54:02.824535 containerd[1476]: time="2025-05-14T23:54:02.824501085Z" level=info msg="TearDown network for sandbox \"268e6454561b6cd1e34cf6cd2a2b9073b3347d38c2ae50171214f1068d86d773\" successfully" May 14 23:54:02.824535 containerd[1476]: time="2025-05-14T23:54:02.824532625Z" level=info msg="StopPodSandbox for \"268e6454561b6cd1e34cf6cd2a2b9073b3347d38c2ae50171214f1068d86d773\" returns successfully" May 14 23:54:02.825200 containerd[1476]: time="2025-05-14T23:54:02.825159020Z" level=info msg="StopPodSandbox for \"64638f12b1279b2b0ba9e81e29f292742c3d924613846bbe2cbf009965970c49\"" May 14 23:54:02.825342 containerd[1476]: time="2025-05-14T23:54:02.825274523Z" level=info msg="TearDown network for sandbox \"64638f12b1279b2b0ba9e81e29f292742c3d924613846bbe2cbf009965970c49\" successfully" May 14 23:54:02.825342 containerd[1476]: time="2025-05-14T23:54:02.825296465Z" level=info msg="StopPodSandbox for \"64638f12b1279b2b0ba9e81e29f292742c3d924613846bbe2cbf009965970c49\" returns successfully" May 14 23:54:02.826204 containerd[1476]: time="2025-05-14T23:54:02.826172209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zx5hz,Uid:c3b238b4-7acc-401a-8dae-17e6c81aeb42,Namespace:calico-system,Attempt:3,}" May 14 23:54:02.828008 containerd[1476]: time="2025-05-14T23:54:02.827891616Z" level=error msg="Failed to destroy network for sandbox \"36467c1767a5e6e07d0af580f604522cb035c535a227d35c440d02c0d9815886\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:02.828381 containerd[1476]: time="2025-05-14T23:54:02.828348224Z" level=error msg="encountered an error cleaning up failed sandbox \"36467c1767a5e6e07d0af580f604522cb035c535a227d35c440d02c0d9815886\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:02.828453 containerd[1476]: time="2025-05-14T23:54:02.828412758Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mcmz8,Uid:78747a92-dcde-4a68-97b9-39a31a2ff2f2,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"36467c1767a5e6e07d0af580f604522cb035c535a227d35c440d02c0d9815886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:02.828990 kubelet[2635]: E0514 23:54:02.828932 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36467c1767a5e6e07d0af580f604522cb035c535a227d35c440d02c0d9815886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 
23:54:02.829048 kubelet[2635]: E0514 23:54:02.828987 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36467c1767a5e6e07d0af580f604522cb035c535a227d35c440d02c0d9815886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mcmz8" May 14 23:54:02.829048 kubelet[2635]: E0514 23:54:02.829009 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36467c1767a5e6e07d0af580f604522cb035c535a227d35c440d02c0d9815886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mcmz8" May 14 23:54:02.829097 kubelet[2635]: E0514 23:54:02.829052 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-mcmz8_kube-system(78747a92-dcde-4a68-97b9-39a31a2ff2f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-mcmz8_kube-system(78747a92-dcde-4a68-97b9-39a31a2ff2f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"36467c1767a5e6e07d0af580f604522cb035c535a227d35c440d02c0d9815886\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mcmz8" podUID="78747a92-dcde-4a68-97b9-39a31a2ff2f2" May 14 23:54:02.829793 kubelet[2635]: I0514 23:54:02.829642 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="459650d24e79534c698ee1049fadba62cda28dbecdff15bfbd2c4af39e8fc2b2" May 14 23:54:02.830624 containerd[1476]: time="2025-05-14T23:54:02.830578482Z" level=info msg="StopPodSandbox for \"459650d24e79534c698ee1049fadba62cda28dbecdff15bfbd2c4af39e8fc2b2\"" May 14 23:54:02.830940 containerd[1476]: time="2025-05-14T23:54:02.830916542Z" level=info msg="Ensure that sandbox 459650d24e79534c698ee1049fadba62cda28dbecdff15bfbd2c4af39e8fc2b2 in task-service has been cleanup successfully" May 14 23:54:02.831224 containerd[1476]: time="2025-05-14T23:54:02.831111307Z" level=info msg="TearDown network for sandbox \"459650d24e79534c698ee1049fadba62cda28dbecdff15bfbd2c4af39e8fc2b2\" successfully" May 14 23:54:02.831224 containerd[1476]: time="2025-05-14T23:54:02.831133470Z" level=info msg="StopPodSandbox for \"459650d24e79534c698ee1049fadba62cda28dbecdff15bfbd2c4af39e8fc2b2\" returns successfully" May 14 23:54:02.831444 containerd[1476]: time="2025-05-14T23:54:02.831401004Z" level=info msg="StopPodSandbox for \"5517df5d3ad135e3dc41f6cb51dce505f9f23f9aeb3c529d7190605dde52f2dc\"" May 14 23:54:02.831557 containerd[1476]: time="2025-05-14T23:54:02.831532968Z" level=info msg="TearDown network for sandbox \"5517df5d3ad135e3dc41f6cb51dce505f9f23f9aeb3c529d7190605dde52f2dc\" successfully" May 14 23:54:02.831611 containerd[1476]: time="2025-05-14T23:54:02.831552496Z" level=info msg="StopPodSandbox for \"5517df5d3ad135e3dc41f6cb51dce505f9f23f9aeb3c529d7190605dde52f2dc\" returns successfully" May 14 23:54:02.831919 containerd[1476]: time="2025-05-14T23:54:02.831885295Z" level=info msg="StopPodSandbox for 
\"c6980ceb1a80f1c499968711b2039cd46613df3ed8a462932bc9870dba7f89ec\"" May 14 23:54:02.832734 containerd[1476]: time="2025-05-14T23:54:02.832702597Z" level=info msg="TearDown network for sandbox \"c6980ceb1a80f1c499968711b2039cd46613df3ed8a462932bc9870dba7f89ec\" successfully" May 14 23:54:02.832832 containerd[1476]: time="2025-05-14T23:54:02.832732826Z" level=info msg="StopPodSandbox for \"c6980ceb1a80f1c499968711b2039cd46613df3ed8a462932bc9870dba7f89ec\" returns successfully" May 14 23:54:02.833694 containerd[1476]: time="2025-05-14T23:54:02.833451598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564c88fc57-7zxh5,Uid:9046d3b9-bfcc-40d1-a2ed-7f3e2193399a,Namespace:calico-apiserver,Attempt:3,}" May 14 23:54:02.833740 kubelet[2635]: I0514 23:54:02.833607 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5838299e37b21b676e7fcfae99022d44d5f0f28676549724d6a13816c432b0a6" May 14 23:54:02.835106 containerd[1476]: time="2025-05-14T23:54:02.835080661Z" level=info msg="StopPodSandbox for \"5838299e37b21b676e7fcfae99022d44d5f0f28676549724d6a13816c432b0a6\"" May 14 23:54:02.840217 kubelet[2635]: I0514 23:54:02.840177 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a330a5be0695369a65f8b5b50efcb94668dbb316ccf670873233ab324a9cf62a" May 14 23:54:02.841693 containerd[1476]: time="2025-05-14T23:54:02.840915822Z" level=info msg="StopPodSandbox for \"a330a5be0695369a65f8b5b50efcb94668dbb316ccf670873233ab324a9cf62a\"" May 14 23:54:02.841693 containerd[1476]: time="2025-05-14T23:54:02.841231078Z" level=info msg="Ensure that sandbox a330a5be0695369a65f8b5b50efcb94668dbb316ccf670873233ab324a9cf62a in task-service has been cleanup successfully" May 14 23:54:02.842586 containerd[1476]: time="2025-05-14T23:54:02.842277631Z" level=info msg="TearDown network for sandbox \"a330a5be0695369a65f8b5b50efcb94668dbb316ccf670873233ab324a9cf62a\" successfully" May 14 23:54:02.842586 containerd[1476]: time="2025-05-14T23:54:02.842299634Z" level=info msg="StopPodSandbox for \"a330a5be0695369a65f8b5b50efcb94668dbb316ccf670873233ab324a9cf62a\" returns successfully" May 14 23:54:02.842991 containerd[1476]: time="2025-05-14T23:54:02.842964532Z" level=info msg="StopPodSandbox for \"3fa3b1003a8b79389e551d954f0b3529a472a25717ba42e128b08e907973f4ea\"" May 14 23:54:02.843071 containerd[1476]: time="2025-05-14T23:54:02.843052601Z" level=info msg="TearDown network for sandbox \"3fa3b1003a8b79389e551d954f0b3529a472a25717ba42e128b08e907973f4ea\" successfully" May 14 23:54:02.843103 containerd[1476]: time="2025-05-14T23:54:02.843070736Z" level=info msg="StopPodSandbox for \"3fa3b1003a8b79389e551d954f0b3529a472a25717ba42e128b08e907973f4ea\" returns successfully" May 14 23:54:02.843448 containerd[1476]: time="2025-05-14T23:54:02.843430157Z" level=info msg="StopPodSandbox for \"f33e4905296fbb90c952de0b4260a7869197385f32c1c9eb9736309b38de54e9\"" May 14 23:54:02.843526 containerd[1476]: time="2025-05-14T23:54:02.843512405Z" level=info msg="TearDown network for sandbox \"f33e4905296fbb90c952de0b4260a7869197385f32c1c9eb9736309b38de54e9\" successfully" May 14 23:54:02.843552 containerd[1476]: time="2025-05-14T23:54:02.843525561Z" level=info msg="StopPodSandbox for \"f33e4905296fbb90c952de0b4260a7869197385f32c1c9eb9736309b38de54e9\" returns successfully" May 14 23:54:02.844171 containerd[1476]: time="2025-05-14T23:54:02.844151103Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6997bdb66f-xr6kr,Uid:0bd041e0-42d3-43db-a483-12474ebbedc9,Namespace:calico-system,Attempt:3,}" May 14 23:54:02.850705 containerd[1476]: time="2025-05-14T23:54:02.850656955Z" level=info msg="Ensure that sandbox 5838299e37b21b676e7fcfae99022d44d5f0f28676549724d6a13816c432b0a6 in task-service has been cleanup successfully" May 14 23:54:02.851318 kubelet[2635]: I0514 23:54:02.851288 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39d6ed701550c898029e120310ff79888bef85ab582ebf0e2e11dec445579c7d" May 14 23:54:02.851563 containerd[1476]: time="2025-05-14T23:54:02.851016977Z" level=info msg="TearDown network for sandbox \"5838299e37b21b676e7fcfae99022d44d5f0f28676549724d6a13816c432b0a6\" successfully" May 14 23:54:02.851798 containerd[1476]: time="2025-05-14T23:54:02.851565742Z" level=info msg="StopPodSandbox for \"5838299e37b21b676e7fcfae99022d44d5f0f28676549724d6a13816c432b0a6\" returns successfully" May 14 23:54:02.851940 containerd[1476]: time="2025-05-14T23:54:02.851912339Z" level=info msg="StopPodSandbox for \"39d6ed701550c898029e120310ff79888bef85ab582ebf0e2e11dec445579c7d\"" May 14 23:54:02.852132 containerd[1476]: time="2025-05-14T23:54:02.852106172Z" level=info msg="Ensure that sandbox 39d6ed701550c898029e120310ff79888bef85ab582ebf0e2e11dec445579c7d in task-service has been cleanup successfully" May 14 23:54:02.852263 containerd[1476]: time="2025-05-14T23:54:02.852221603Z" level=info msg="StopPodSandbox for \"429ffddfb89439b25885158427c0d54b54e7fa1fe4b1f966085969fb4716bd55\"" May 14 23:54:02.859119 containerd[1476]: time="2025-05-14T23:54:02.852313270Z" level=info msg="TearDown network for sandbox \"429ffddfb89439b25885158427c0d54b54e7fa1fe4b1f966085969fb4716bd55\" successfully" May 14 23:54:02.859193 containerd[1476]: time="2025-05-14T23:54:02.859120301Z" level=info msg="StopPodSandbox for \"429ffddfb89439b25885158427c0d54b54e7fa1fe4b1f966085969fb4716bd55\" returns successfully" May 14 23:54:02.859193 containerd[1476]: time="2025-05-14T23:54:02.852500801Z" level=info msg="TearDown network for sandbox \"39d6ed701550c898029e120310ff79888bef85ab582ebf0e2e11dec445579c7d\" successfully" May 14 23:54:02.859193 containerd[1476]: time="2025-05-14T23:54:02.859169525Z" level=info msg="StopPodSandbox for \"39d6ed701550c898029e120310ff79888bef85ab582ebf0e2e11dec445579c7d\" returns successfully" May 14 23:54:02.859488 containerd[1476]: time="2025-05-14T23:54:02.859454544Z" level=info msg="StopPodSandbox for \"342f0da20d9f4dc5bad7675220b463dec5464d9133866fc2665994e8551463d5\"" May 14 23:54:02.859488 containerd[1476]: time="2025-05-14T23:54:02.859468560Z" level=info msg="StopPodSandbox for \"3c3adac1404d312ded0a39e3b9174a60b3e9628ed4edfb2a16682f7e5b6a0b11\"" May 14 23:54:02.859608 containerd[1476]: time="2025-05-14T23:54:02.859547472Z" level=info msg="TearDown network for sandbox \"3c3adac1404d312ded0a39e3b9174a60b3e9628ed4edfb2a16682f7e5b6a0b11\" successfully" May 14 23:54:02.859608 containerd[1476]: time="2025-05-14T23:54:02.859555918Z" level=info msg="TearDown network for sandbox \"342f0da20d9f4dc5bad7675220b463dec5464d9133866fc2665994e8551463d5\" successfully" May 14 23:54:02.859608 containerd[1476]: time="2025-05-14T23:54:02.859571298Z" level=info msg="StopPodSandbox for \"342f0da20d9f4dc5bad7675220b463dec5464d9133866fc2665994e8551463d5\" returns successfully" May 14 23:54:02.859608 containerd[1476]: time="2025-05-14T23:54:02.859556229Z" level=info msg="StopPodSandbox for 
\"3c3adac1404d312ded0a39e3b9174a60b3e9628ed4edfb2a16682f7e5b6a0b11\" returns successfully" May 14 23:54:02.859898 containerd[1476]: time="2025-05-14T23:54:02.859858761Z" level=info msg="StopPodSandbox for \"7fc43a55ee56babf2baaa12a3b6ad5441c0befe576446eea7db0237519c01722\"" May 14 23:54:02.859898 containerd[1476]: time="2025-05-14T23:54:02.859885482Z" level=info msg="StopPodSandbox for \"33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb\"" May 14 23:54:02.859981 containerd[1476]: time="2025-05-14T23:54:02.859952481Z" level=info msg="TearDown network for sandbox \"7fc43a55ee56babf2baaa12a3b6ad5441c0befe576446eea7db0237519c01722\" successfully" May 14 23:54:02.859981 containerd[1476]: time="2025-05-14T23:54:02.859966077Z" level=info msg="StopPodSandbox for \"7fc43a55ee56babf2baaa12a3b6ad5441c0befe576446eea7db0237519c01722\" returns successfully" May 14 23:54:02.859981 containerd[1476]: time="2025-05-14T23:54:02.859973581Z" level=info msg="TearDown network for sandbox \"33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb\" successfully" May 14 23:54:02.860046 containerd[1476]: time="2025-05-14T23:54:02.859988140Z" level=info msg="StopPodSandbox for \"33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb\" returns successfully" May 14 23:54:02.860439 containerd[1476]: time="2025-05-14T23:54:02.860404350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k6rt6,Uid:ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b,Namespace:kube-system,Attempt:4,}" May 14 23:54:02.862575 containerd[1476]: time="2025-05-14T23:54:02.860406244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564c88fc57-zsf99,Uid:274c1c1a-50ff-4e53-bdf7-547b26e013ec,Namespace:calico-apiserver,Attempt:3,}" May 14 23:54:03.080551 containerd[1476]: time="2025-05-14T23:54:03.080245893Z" level=info msg="CreateContainer within sandbox \"b4a578aee745596ee7047340abe675e5a8bdec6dbf174501591cb02dc2f74081\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f291feeb79942e0f400a6e3fbed2a6afc53a246343f37979e37b990f9cc01e84\"" May 14 23:54:03.080841 containerd[1476]: time="2025-05-14T23:54:03.080809517Z" level=info msg="StartContainer for \"f291feeb79942e0f400a6e3fbed2a6afc53a246343f37979e37b990f9cc01e84\"" May 14 23:54:03.118564 containerd[1476]: time="2025-05-14T23:54:03.118488213Z" level=error msg="Failed to destroy network for sandbox \"57c35071fe58ac096b0c967850d8252b2174dadf77f55a63371cff8c41b57454\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:03.119510 containerd[1476]: time="2025-05-14T23:54:03.119460472Z" level=error msg="encountered an error cleaning up failed sandbox \"57c35071fe58ac096b0c967850d8252b2174dadf77f55a63371cff8c41b57454\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:03.119585 containerd[1476]: time="2025-05-14T23:54:03.119535256Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564c88fc57-7zxh5,Uid:9046d3b9-bfcc-40d1-a2ed-7f3e2193399a,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"57c35071fe58ac096b0c967850d8252b2174dadf77f55a63371cff8c41b57454\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:03.119923 kubelet[2635]: E0514 23:54:03.119870 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57c35071fe58ac096b0c967850d8252b2174dadf77f55a63371cff8c41b57454\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:03.120116 kubelet[2635]: E0514 23:54:03.119933 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57c35071fe58ac096b0c967850d8252b2174dadf77f55a63371cff8c41b57454\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564c88fc57-7zxh5" May 14 23:54:03.120116 kubelet[2635]: E0514 23:54:03.119958 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57c35071fe58ac096b0c967850d8252b2174dadf77f55a63371cff8c41b57454\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564c88fc57-7zxh5" May 14 23:54:03.120116 kubelet[2635]: E0514 23:54:03.119999 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-564c88fc57-7zxh5_calico-apiserver(9046d3b9-bfcc-40d1-a2ed-7f3e2193399a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-564c88fc57-7zxh5_calico-apiserver(9046d3b9-bfcc-40d1-a2ed-7f3e2193399a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57c35071fe58ac096b0c967850d8252b2174dadf77f55a63371cff8c41b57454\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-564c88fc57-7zxh5" podUID="9046d3b9-bfcc-40d1-a2ed-7f3e2193399a" May 14 23:54:03.152981 containerd[1476]: time="2025-05-14T23:54:03.152900072Z" level=error msg="Failed to destroy network for sandbox \"61dac3c3631ee43e6c3e3f6d5c7b058475cf2102ce6f86760394043da4ef92e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:03.153999 containerd[1476]: time="2025-05-14T23:54:03.153951963Z" level=error msg="encountered an error cleaning up failed sandbox \"61dac3c3631ee43e6c3e3f6d5c7b058475cf2102ce6f86760394043da4ef92e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:03.154206 containerd[1476]: time="2025-05-14T23:54:03.154168168Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zx5hz,Uid:c3b238b4-7acc-401a-8dae-17e6c81aeb42,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for 
sandbox \"61dac3c3631ee43e6c3e3f6d5c7b058475cf2102ce6f86760394043da4ef92e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:03.155679 kubelet[2635]: E0514 23:54:03.154568 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61dac3c3631ee43e6c3e3f6d5c7b058475cf2102ce6f86760394043da4ef92e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:03.155679 kubelet[2635]: E0514 23:54:03.154638 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61dac3c3631ee43e6c3e3f6d5c7b058475cf2102ce6f86760394043da4ef92e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zx5hz" May 14 23:54:03.155679 kubelet[2635]: E0514 23:54:03.154661 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61dac3c3631ee43e6c3e3f6d5c7b058475cf2102ce6f86760394043da4ef92e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zx5hz" May 14 23:54:03.155834 kubelet[2635]: E0514 23:54:03.154716 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zx5hz_calico-system(c3b238b4-7acc-401a-8dae-17e6c81aeb42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zx5hz_calico-system(c3b238b4-7acc-401a-8dae-17e6c81aeb42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61dac3c3631ee43e6c3e3f6d5c7b058475cf2102ce6f86760394043da4ef92e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zx5hz" podUID="c3b238b4-7acc-401a-8dae-17e6c81aeb42" May 14 23:54:03.156623 containerd[1476]: time="2025-05-14T23:54:03.156570446Z" level=error msg="Failed to destroy network for sandbox \"3a1484484d44afdd2669f185a956b649fde8531338810e9aa7d30cdea29fafd3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:03.157316 containerd[1476]: time="2025-05-14T23:54:03.157138387Z" level=error msg="encountered an error cleaning up failed sandbox \"3a1484484d44afdd2669f185a956b649fde8531338810e9aa7d30cdea29fafd3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:03.157316 containerd[1476]: time="2025-05-14T23:54:03.157214413Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6997bdb66f-xr6kr,Uid:0bd041e0-42d3-43db-a483-12474ebbedc9,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"3a1484484d44afdd2669f185a956b649fde8531338810e9aa7d30cdea29fafd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:03.157517 kubelet[2635]: E0514 23:54:03.157386 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a1484484d44afdd2669f185a956b649fde8531338810e9aa7d30cdea29fafd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:03.157517 kubelet[2635]: E0514 23:54:03.157459 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a1484484d44afdd2669f185a956b649fde8531338810e9aa7d30cdea29fafd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6997bdb66f-xr6kr" May 14 23:54:03.157517 kubelet[2635]: E0514 23:54:03.157489 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a1484484d44afdd2669f185a956b649fde8531338810e9aa7d30cdea29fafd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6997bdb66f-xr6kr" May 14 23:54:03.157630 kubelet[2635]: E0514 23:54:03.157534 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6997bdb66f-xr6kr_calico-system(0bd041e0-42d3-43db-a483-12474ebbedc9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6997bdb66f-xr6kr_calico-system(0bd041e0-42d3-43db-a483-12474ebbedc9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a1484484d44afdd2669f185a956b649fde8531338810e9aa7d30cdea29fafd3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6997bdb66f-xr6kr" podUID="0bd041e0-42d3-43db-a483-12474ebbedc9" May 14 23:54:03.182714 systemd[1]: Started cri-containerd-f291feeb79942e0f400a6e3fbed2a6afc53a246343f37979e37b990f9cc01e84.scope - libcontainer container f291feeb79942e0f400a6e3fbed2a6afc53a246343f37979e37b990f9cc01e84. 
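Editor's note: every sandbox add and delete above fails on the same precondition — the Calico CNI plugin stats /var/lib/calico/nodename, which the calico/node container is expected to write under its /var/lib/calico mount once it is running (the logged error text itself says what to verify). That container has only just been launched in the record immediately above, so the failures persist until the file appears. A minimal stand-alone sketch of that check, assuming the same path; it mirrors the logged error wording and is not Calico's actual implementation:

package main

import (
	"fmt"
	"os"
)

const nodenameFile = "/var/lib/calico/nodename"

// nodename reproduces the failure mode seen in the log: a missing file turns
// into "stat /var/lib/calico/nodename: no such file or directory" wrapped with
// the plugin's hint about calico/node and its mount.
func nodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return string(data), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("nodename:", name)
}

Once calico-node finishes starting and writes the file, the repeated RunPodSandbox retries (note the Attempt counter climbing in the sandbox metadata above) can finally succeed, which is what the CNI trace further below shows.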
May 14 23:54:03.192350 containerd[1476]: time="2025-05-14T23:54:03.192237897Z" level=error msg="Failed to destroy network for sandbox \"2aa79a17a2dc09da6e0e0bd5a46633f189d465f72fc111a6912600d9e079169b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:03.193079 containerd[1476]: time="2025-05-14T23:54:03.193025491Z" level=error msg="encountered an error cleaning up failed sandbox \"2aa79a17a2dc09da6e0e0bd5a46633f189d465f72fc111a6912600d9e079169b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:03.193221 containerd[1476]: time="2025-05-14T23:54:03.193180429Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564c88fc57-zsf99,Uid:274c1c1a-50ff-4e53-bdf7-547b26e013ec,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"2aa79a17a2dc09da6e0e0bd5a46633f189d465f72fc111a6912600d9e079169b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:03.194906 kubelet[2635]: E0514 23:54:03.194837 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2aa79a17a2dc09da6e0e0bd5a46633f189d465f72fc111a6912600d9e079169b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:03.194989 kubelet[2635]: E0514 23:54:03.194923 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2aa79a17a2dc09da6e0e0bd5a46633f189d465f72fc111a6912600d9e079169b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564c88fc57-zsf99" May 14 23:54:03.196343 kubelet[2635]: E0514 23:54:03.194957 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2aa79a17a2dc09da6e0e0bd5a46633f189d465f72fc111a6912600d9e079169b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564c88fc57-zsf99" May 14 23:54:03.196537 kubelet[2635]: E0514 23:54:03.196361 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-564c88fc57-zsf99_calico-apiserver(274c1c1a-50ff-4e53-bdf7-547b26e013ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-564c88fc57-zsf99_calico-apiserver(274c1c1a-50ff-4e53-bdf7-547b26e013ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2aa79a17a2dc09da6e0e0bd5a46633f189d465f72fc111a6912600d9e079169b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-564c88fc57-zsf99" podUID="274c1c1a-50ff-4e53-bdf7-547b26e013ec" May 14 23:54:03.206229 containerd[1476]: time="2025-05-14T23:54:03.206032480Z" level=error msg="Failed to destroy network for sandbox \"b830ece64ad1ce9e3279112f6c999e3bf5e99735dd38eebd357e5af6aa158d67\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:03.206881 containerd[1476]: time="2025-05-14T23:54:03.206854048Z" level=error msg="encountered an error cleaning up failed sandbox \"b830ece64ad1ce9e3279112f6c999e3bf5e99735dd38eebd357e5af6aa158d67\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:03.207069 containerd[1476]: time="2025-05-14T23:54:03.207020849Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k6rt6,Uid:ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"b830ece64ad1ce9e3279112f6c999e3bf5e99735dd38eebd357e5af6aa158d67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:03.207560 kubelet[2635]: E0514 23:54:03.207496 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b830ece64ad1ce9e3279112f6c999e3bf5e99735dd38eebd357e5af6aa158d67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:03.207641 kubelet[2635]: E0514 23:54:03.207580 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b830ece64ad1ce9e3279112f6c999e3bf5e99735dd38eebd357e5af6aa158d67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k6rt6" May 14 23:54:03.207641 kubelet[2635]: E0514 23:54:03.207609 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b830ece64ad1ce9e3279112f6c999e3bf5e99735dd38eebd357e5af6aa158d67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k6rt6" May 14 23:54:03.208664 kubelet[2635]: E0514 23:54:03.208611 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-k6rt6_kube-system(ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-k6rt6_kube-system(ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b830ece64ad1ce9e3279112f6c999e3bf5e99735dd38eebd357e5af6aa158d67\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-k6rt6" podUID="ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b" May 14 23:54:03.232963 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a330a5be0695369a65f8b5b50efcb94668dbb316ccf670873233ab324a9cf62a-shm.mount: Deactivated successfully. May 14 23:54:03.233094 systemd[1]: run-netns-cni\x2d80f59f95\x2d3bc4\x2d5cbf\x2d74be\x2db52f7cede5aa.mount: Deactivated successfully. May 14 23:54:03.233191 systemd[1]: run-netns-cni\x2dcfba9ae2\x2d3c61\x2d80d3\x2d8185\x2d7dad3f704acf.mount: Deactivated successfully. May 14 23:54:03.233283 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-459650d24e79534c698ee1049fadba62cda28dbecdff15bfbd2c4af39e8fc2b2-shm.mount: Deactivated successfully. May 14 23:54:03.233380 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-39d6ed701550c898029e120310ff79888bef85ab582ebf0e2e11dec445579c7d-shm.mount: Deactivated successfully. May 14 23:54:03.233499 systemd[1]: run-netns-cni\x2d69c4ab2f\x2dd5dd\x2d12b6\x2d55e3\x2dc6171eba6b48.mount: Deactivated successfully. May 14 23:54:03.234045 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5838299e37b21b676e7fcfae99022d44d5f0f28676549724d6a13816c432b0a6-shm.mount: Deactivated successfully. May 14 23:54:03.234153 systemd[1]: run-netns-cni\x2de4203447\x2dbebf\x2d37f8\x2d681e\x2d4d65ca6ffe3d.mount: Deactivated successfully. May 14 23:54:03.234252 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-09fd2766c1b27c1110f563f47b829324d1d894079423388d07b7b46946f8c17b-shm.mount: Deactivated successfully. May 14 23:54:03.253759 containerd[1476]: time="2025-05-14T23:54:03.253629980Z" level=info msg="StartContainer for \"f291feeb79942e0f400a6e3fbed2a6afc53a246343f37979e37b990f9cc01e84\" returns successfully" May 14 23:54:03.323012 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 14 23:54:03.324346 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 14 23:54:03.855702 kubelet[2635]: I0514 23:54:03.855663 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57c35071fe58ac096b0c967850d8252b2174dadf77f55a63371cff8c41b57454" May 14 23:54:03.856320 containerd[1476]: time="2025-05-14T23:54:03.856214131Z" level=info msg="StopPodSandbox for \"57c35071fe58ac096b0c967850d8252b2174dadf77f55a63371cff8c41b57454\"" May 14 23:54:03.857780 containerd[1476]: time="2025-05-14T23:54:03.856462198Z" level=info msg="Ensure that sandbox 57c35071fe58ac096b0c967850d8252b2174dadf77f55a63371cff8c41b57454 in task-service has been cleanup successfully" May 14 23:54:03.857780 containerd[1476]: time="2025-05-14T23:54:03.856683714Z" level=info msg="TearDown network for sandbox \"57c35071fe58ac096b0c967850d8252b2174dadf77f55a63371cff8c41b57454\" successfully" May 14 23:54:03.857780 containerd[1476]: time="2025-05-14T23:54:03.856696138Z" level=info msg="StopPodSandbox for \"57c35071fe58ac096b0c967850d8252b2174dadf77f55a63371cff8c41b57454\" returns successfully" May 14 23:54:03.858361 containerd[1476]: time="2025-05-14T23:54:03.858099405Z" level=info msg="StopPodSandbox for \"459650d24e79534c698ee1049fadba62cda28dbecdff15bfbd2c4af39e8fc2b2\"" May 14 23:54:03.858361 containerd[1476]: time="2025-05-14T23:54:03.858220367Z" level=info msg="TearDown network for sandbox \"459650d24e79534c698ee1049fadba62cda28dbecdff15bfbd2c4af39e8fc2b2\" successfully" May 14 23:54:03.858361 containerd[1476]: time="2025-05-14T23:54:03.858234154Z" level=info msg="StopPodSandbox for \"459650d24e79534c698ee1049fadba62cda28dbecdff15bfbd2c4af39e8fc2b2\" returns successfully" May 14 23:54:03.858847 containerd[1476]: time="2025-05-14T23:54:03.858823135Z" level=info msg="StopPodSandbox for \"5517df5d3ad135e3dc41f6cb51dce505f9f23f9aeb3c529d7190605dde52f2dc\"" May 14 23:54:03.859173 containerd[1476]: time="2025-05-14T23:54:03.859100338Z" level=info msg="TearDown network for sandbox \"5517df5d3ad135e3dc41f6cb51dce505f9f23f9aeb3c529d7190605dde52f2dc\" successfully" May 14 23:54:03.859173 containerd[1476]: time="2025-05-14T23:54:03.859118133Z" level=info msg="StopPodSandbox for \"5517df5d3ad135e3dc41f6cb51dce505f9f23f9aeb3c529d7190605dde52f2dc\" returns successfully" May 14 23:54:03.859134 systemd[1]: run-netns-cni\x2ddd9d563f\x2d4ac6\x2dcfb7\x2dc4a0\x2d858829a420bd.mount: Deactivated successfully. 
May 14 23:54:03.859877 containerd[1476]: time="2025-05-14T23:54:03.859490518Z" level=info msg="StopPodSandbox for \"c6980ceb1a80f1c499968711b2039cd46613df3ed8a462932bc9870dba7f89ec\"" May 14 23:54:03.859877 containerd[1476]: time="2025-05-14T23:54:03.859573578Z" level=info msg="TearDown network for sandbox \"c6980ceb1a80f1c499968711b2039cd46613df3ed8a462932bc9870dba7f89ec\" successfully" May 14 23:54:03.859877 containerd[1476]: time="2025-05-14T23:54:03.859582856Z" level=info msg="StopPodSandbox for \"c6980ceb1a80f1c499968711b2039cd46613df3ed8a462932bc9870dba7f89ec\" returns successfully" May 14 23:54:03.859976 kubelet[2635]: I0514 23:54:03.859782 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61dac3c3631ee43e6c3e3f6d5c7b058475cf2102ce6f86760394043da4ef92e9" May 14 23:54:03.860649 containerd[1476]: time="2025-05-14T23:54:03.860213108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564c88fc57-7zxh5,Uid:9046d3b9-bfcc-40d1-a2ed-7f3e2193399a,Namespace:calico-apiserver,Attempt:4,}" May 14 23:54:03.861058 containerd[1476]: time="2025-05-14T23:54:03.861012334Z" level=info msg="StopPodSandbox for \"61dac3c3631ee43e6c3e3f6d5c7b058475cf2102ce6f86760394043da4ef92e9\"" May 14 23:54:03.861498 containerd[1476]: time="2025-05-14T23:54:03.861238207Z" level=info msg="Ensure that sandbox 61dac3c3631ee43e6c3e3f6d5c7b058475cf2102ce6f86760394043da4ef92e9 in task-service has been cleanup successfully" May 14 23:54:03.861795 containerd[1476]: time="2025-05-14T23:54:03.861772735Z" level=info msg="TearDown network for sandbox \"61dac3c3631ee43e6c3e3f6d5c7b058475cf2102ce6f86760394043da4ef92e9\" successfully" May 14 23:54:03.861883 containerd[1476]: time="2025-05-14T23:54:03.861869291Z" level=info msg="StopPodSandbox for \"61dac3c3631ee43e6c3e3f6d5c7b058475cf2102ce6f86760394043da4ef92e9\" returns successfully" May 14 23:54:03.862346 containerd[1476]: time="2025-05-14T23:54:03.862299157Z" level=info msg="StopPodSandbox for \"09fd2766c1b27c1110f563f47b829324d1d894079423388d07b7b46946f8c17b\"" May 14 23:54:03.862672 containerd[1476]: time="2025-05-14T23:54:03.862409549Z" level=info msg="TearDown network for sandbox \"09fd2766c1b27c1110f563f47b829324d1d894079423388d07b7b46946f8c17b\" successfully" May 14 23:54:03.863530 containerd[1476]: time="2025-05-14T23:54:03.863497229Z" level=info msg="StopPodSandbox for \"09fd2766c1b27c1110f563f47b829324d1d894079423388d07b7b46946f8c17b\" returns successfully" May 14 23:54:03.863668 systemd[1]: run-netns-cni\x2d4e33e0ac\x2d5280\x2d0060\x2db5e7\x2d67d953267e88.mount: Deactivated successfully. 
May 14 23:54:03.864864 containerd[1476]: time="2025-05-14T23:54:03.864839228Z" level=info msg="StopPodSandbox for \"268e6454561b6cd1e34cf6cd2a2b9073b3347d38c2ae50171214f1068d86d773\"" May 14 23:54:03.864948 containerd[1476]: time="2025-05-14T23:54:03.864938319Z" level=info msg="TearDown network for sandbox \"268e6454561b6cd1e34cf6cd2a2b9073b3347d38c2ae50171214f1068d86d773\" successfully" May 14 23:54:03.864973 containerd[1476]: time="2025-05-14T23:54:03.864950232Z" level=info msg="StopPodSandbox for \"268e6454561b6cd1e34cf6cd2a2b9073b3347d38c2ae50171214f1068d86d773\" returns successfully" May 14 23:54:03.865537 kubelet[2635]: I0514 23:54:03.865503 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aa79a17a2dc09da6e0e0bd5a46633f189d465f72fc111a6912600d9e079169b" May 14 23:54:03.866134 containerd[1476]: time="2025-05-14T23:54:03.866095242Z" level=info msg="StopPodSandbox for \"2aa79a17a2dc09da6e0e0bd5a46633f189d465f72fc111a6912600d9e079169b\"" May 14 23:54:03.866346 containerd[1476]: time="2025-05-14T23:54:03.866324754Z" level=info msg="Ensure that sandbox 2aa79a17a2dc09da6e0e0bd5a46633f189d465f72fc111a6912600d9e079169b in task-service has been cleanup successfully" May 14 23:54:03.868073 containerd[1476]: time="2025-05-14T23:54:03.868045241Z" level=info msg="TearDown network for sandbox \"2aa79a17a2dc09da6e0e0bd5a46633f189d465f72fc111a6912600d9e079169b\" successfully" May 14 23:54:03.868073 containerd[1476]: time="2025-05-14T23:54:03.868067934Z" level=info msg="StopPodSandbox for \"2aa79a17a2dc09da6e0e0bd5a46633f189d465f72fc111a6912600d9e079169b\" returns successfully" May 14 23:54:03.868387 kubelet[2635]: I0514 23:54:03.868363 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36467c1767a5e6e07d0af580f604522cb035c535a227d35c440d02c0d9815886" May 14 23:54:03.868784 containerd[1476]: time="2025-05-14T23:54:03.868761868Z" level=info msg="StopPodSandbox for \"36467c1767a5e6e07d0af580f604522cb035c535a227d35c440d02c0d9815886\"" May 14 23:54:03.868962 containerd[1476]: time="2025-05-14T23:54:03.868940200Z" level=info msg="Ensure that sandbox 36467c1767a5e6e07d0af580f604522cb035c535a227d35c440d02c0d9815886 in task-service has been cleanup successfully" May 14 23:54:03.869143 containerd[1476]: time="2025-05-14T23:54:03.869123824Z" level=info msg="TearDown network for sandbox \"36467c1767a5e6e07d0af580f604522cb035c535a227d35c440d02c0d9815886\" successfully" May 14 23:54:03.869143 containerd[1476]: time="2025-05-14T23:54:03.869140215Z" level=info msg="StopPodSandbox for \"36467c1767a5e6e07d0af580f604522cb035c535a227d35c440d02c0d9815886\" returns successfully" May 14 23:54:03.869219 containerd[1476]: time="2025-05-14T23:54:03.869192946Z" level=info msg="StopPodSandbox for \"39d6ed701550c898029e120310ff79888bef85ab582ebf0e2e11dec445579c7d\"" May 14 23:54:03.869162 systemd[1]: run-netns-cni\x2d166085a8\x2da2c7\x2d48d9\x2d363a\x2db8a950573c41.mount: Deactivated successfully. 
May 14 23:54:03.869309 containerd[1476]: time="2025-05-14T23:54:03.869270536Z" level=info msg="TearDown network for sandbox \"39d6ed701550c898029e120310ff79888bef85ab582ebf0e2e11dec445579c7d\" successfully" May 14 23:54:03.869309 containerd[1476]: time="2025-05-14T23:54:03.869282889Z" level=info msg="StopPodSandbox for \"39d6ed701550c898029e120310ff79888bef85ab582ebf0e2e11dec445579c7d\" returns successfully" May 14 23:54:03.870970 containerd[1476]: time="2025-05-14T23:54:03.870914465Z" level=info msg="StopPodSandbox for \"3c3adac1404d312ded0a39e3b9174a60b3e9628ed4edfb2a16682f7e5b6a0b11\"" May 14 23:54:03.871062 containerd[1476]: time="2025-05-14T23:54:03.871007484Z" level=info msg="TearDown network for sandbox \"3c3adac1404d312ded0a39e3b9174a60b3e9628ed4edfb2a16682f7e5b6a0b11\" successfully" May 14 23:54:03.871062 containerd[1476]: time="2025-05-14T23:54:03.871018565Z" level=info msg="StopPodSandbox for \"3c3adac1404d312ded0a39e3b9174a60b3e9628ed4edfb2a16682f7e5b6a0b11\" returns successfully" May 14 23:54:03.871572 containerd[1476]: time="2025-05-14T23:54:03.871109380Z" level=info msg="StopPodSandbox for \"e17a5d59d30bbf026bcd8bb593ff64be844debc60baddd4f04ee814c91cd8450\"" May 14 23:54:03.871572 containerd[1476]: time="2025-05-14T23:54:03.871259829Z" level=info msg="TearDown network for sandbox \"e17a5d59d30bbf026bcd8bb593ff64be844debc60baddd4f04ee814c91cd8450\" successfully" May 14 23:54:03.871572 containerd[1476]: time="2025-05-14T23:54:03.871270359Z" level=info msg="StopPodSandbox for \"e17a5d59d30bbf026bcd8bb593ff64be844debc60baddd4f04ee814c91cd8450\" returns successfully" May 14 23:54:03.871975 containerd[1476]: time="2025-05-14T23:54:03.871655599Z" level=info msg="StopPodSandbox for \"a500e5e0e5e581d11c138a050e95e4036d134b1201f8ea691935fed3c26f6323\"" May 14 23:54:03.871975 containerd[1476]: time="2025-05-14T23:54:03.871731305Z" level=info msg="TearDown network for sandbox \"a500e5e0e5e581d11c138a050e95e4036d134b1201f8ea691935fed3c26f6323\" successfully" May 14 23:54:03.871975 containerd[1476]: time="2025-05-14T23:54:03.871742937Z" level=info msg="StopPodSandbox for \"a500e5e0e5e581d11c138a050e95e4036d134b1201f8ea691935fed3c26f6323\" returns successfully" May 14 23:54:03.871975 containerd[1476]: time="2025-05-14T23:54:03.871805808Z" level=info msg="StopPodSandbox for \"7fc43a55ee56babf2baaa12a3b6ad5441c0befe576446eea7db0237519c01722\"" May 14 23:54:03.871975 containerd[1476]: time="2025-05-14T23:54:03.871905881Z" level=info msg="TearDown network for sandbox \"7fc43a55ee56babf2baaa12a3b6ad5441c0befe576446eea7db0237519c01722\" successfully" May 14 23:54:03.871975 containerd[1476]: time="2025-05-14T23:54:03.871918905Z" level=info msg="StopPodSandbox for \"7fc43a55ee56babf2baaa12a3b6ad5441c0befe576446eea7db0237519c01722\" returns successfully" May 14 23:54:03.873008 containerd[1476]: time="2025-05-14T23:54:03.872971829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mcmz8,Uid:78747a92-dcde-4a68-97b9-39a31a2ff2f2,Namespace:kube-system,Attempt:3,}" May 14 23:54:03.873354 containerd[1476]: time="2025-05-14T23:54:03.873310490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564c88fc57-zsf99,Uid:274c1c1a-50ff-4e53-bdf7-547b26e013ec,Namespace:calico-apiserver,Attempt:4,}" May 14 23:54:03.873380 systemd[1]: run-netns-cni\x2de11794ab\x2d1361\x2d5867\x2de4af\x2df0607560a0e3.mount: Deactivated successfully. 
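Editor's note: the run-netns-cni\x2d*.mount units being deactivated above are systemd mount units named after the CNI network-namespace paths under /run/netns/ — the leading slash is dropped, '/' becomes '-', and literal dashes in the name are escaped as \x2d. A tiny helper that reverses just that one escape for the unit names in this log (a sketch, not a general systemd-escape implementation):

package main

import (
	"fmt"
	"strings"
)

// unescapeUnit undoes only the \x2d escaping visible in the log above;
// systemd's full escaping rules cover more characters than this.
func unescapeUnit(name string) string {
	return strings.ReplaceAll(name, `\x2d`, "-")
}

func main() {
	unit := `run-netns-cni\x2ddd9d563f\x2d4ac6\x2dcfb7\x2dc4a0\x2d858829a420bd.mount`
	fmt.Println(unescapeUnit(unit)) // run-netns-cni-dd9d563f-4ac6-cfb7-c4a0-858829a420bd.mount
}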
May 14 23:54:03.879439 containerd[1476]: time="2025-05-14T23:54:03.874773703Z" level=info msg="StopPodSandbox for \"64638f12b1279b2b0ba9e81e29f292742c3d924613846bbe2cbf009965970c49\"" May 14 23:54:03.879439 containerd[1476]: time="2025-05-14T23:54:03.874872612Z" level=info msg="TearDown network for sandbox \"64638f12b1279b2b0ba9e81e29f292742c3d924613846bbe2cbf009965970c49\" successfully" May 14 23:54:03.879439 containerd[1476]: time="2025-05-14T23:54:03.874882532Z" level=info msg="StopPodSandbox for \"64638f12b1279b2b0ba9e81e29f292742c3d924613846bbe2cbf009965970c49\" returns successfully" May 14 23:54:03.879439 containerd[1476]: time="2025-05-14T23:54:03.875490971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zx5hz,Uid:c3b238b4-7acc-401a-8dae-17e6c81aeb42,Namespace:calico-system,Attempt:4,}" May 14 23:54:03.879439 containerd[1476]: time="2025-05-14T23:54:03.878582172Z" level=info msg="StopPodSandbox for \"b830ece64ad1ce9e3279112f6c999e3bf5e99735dd38eebd357e5af6aa158d67\"" May 14 23:54:03.879659 kubelet[2635]: I0514 23:54:03.878133 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b830ece64ad1ce9e3279112f6c999e3bf5e99735dd38eebd357e5af6aa158d67" May 14 23:54:03.880226 kubelet[2635]: I0514 23:54:03.880200 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a1484484d44afdd2669f185a956b649fde8531338810e9aa7d30cdea29fafd3" May 14 23:54:03.880767 containerd[1476]: time="2025-05-14T23:54:03.880742203Z" level=info msg="StopPodSandbox for \"3a1484484d44afdd2669f185a956b649fde8531338810e9aa7d30cdea29fafd3\"" May 14 23:54:03.881913 containerd[1476]: time="2025-05-14T23:54:03.881886092Z" level=info msg="Ensure that sandbox b830ece64ad1ce9e3279112f6c999e3bf5e99735dd38eebd357e5af6aa158d67 in task-service has been cleanup successfully" May 14 23:54:03.882005 containerd[1476]: time="2025-05-14T23:54:03.881987327Z" level=info msg="Ensure that sandbox 3a1484484d44afdd2669f185a956b649fde8531338810e9aa7d30cdea29fafd3 in task-service has been cleanup successfully" May 14 23:54:03.882311 containerd[1476]: time="2025-05-14T23:54:03.882260702Z" level=info msg="TearDown network for sandbox \"3a1484484d44afdd2669f185a956b649fde8531338810e9aa7d30cdea29fafd3\" successfully" May 14 23:54:03.882311 containerd[1476]: time="2025-05-14T23:54:03.882280239Z" level=info msg="StopPodSandbox for \"3a1484484d44afdd2669f185a956b649fde8531338810e9aa7d30cdea29fafd3\" returns successfully" May 14 23:54:03.882368 containerd[1476]: time="2025-05-14T23:54:03.882345835Z" level=info msg="TearDown network for sandbox \"b830ece64ad1ce9e3279112f6c999e3bf5e99735dd38eebd357e5af6aa158d67\" successfully" May 14 23:54:03.882404 containerd[1476]: time="2025-05-14T23:54:03.882364732Z" level=info msg="StopPodSandbox for \"b830ece64ad1ce9e3279112f6c999e3bf5e99735dd38eebd357e5af6aa158d67\" returns successfully" May 14 23:54:03.882972 containerd[1476]: time="2025-05-14T23:54:03.882730395Z" level=info msg="StopPodSandbox for \"5838299e37b21b676e7fcfae99022d44d5f0f28676549724d6a13816c432b0a6\"" May 14 23:54:03.882972 containerd[1476]: time="2025-05-14T23:54:03.882875253Z" level=info msg="TearDown network for sandbox \"5838299e37b21b676e7fcfae99022d44d5f0f28676549724d6a13816c432b0a6\" successfully" May 14 23:54:03.882972 containerd[1476]: time="2025-05-14T23:54:03.882886825Z" level=info msg="StopPodSandbox for \"5838299e37b21b676e7fcfae99022d44d5f0f28676549724d6a13816c432b0a6\" returns successfully" May 14 23:54:03.883398 containerd[1476]: 
time="2025-05-14T23:54:03.883355907Z" level=info msg="StopPodSandbox for \"429ffddfb89439b25885158427c0d54b54e7fa1fe4b1f966085969fb4716bd55\"" May 14 23:54:03.883506 containerd[1476]: time="2025-05-14T23:54:03.883488201Z" level=info msg="TearDown network for sandbox \"429ffddfb89439b25885158427c0d54b54e7fa1fe4b1f966085969fb4716bd55\" successfully" May 14 23:54:03.883538 containerd[1476]: time="2025-05-14T23:54:03.883503931Z" level=info msg="StopPodSandbox for \"429ffddfb89439b25885158427c0d54b54e7fa1fe4b1f966085969fb4716bd55\" returns successfully" May 14 23:54:03.883591 containerd[1476]: time="2025-05-14T23:54:03.883556151Z" level=info msg="StopPodSandbox for \"a330a5be0695369a65f8b5b50efcb94668dbb316ccf670873233ab324a9cf62a\"" May 14 23:54:03.883655 containerd[1476]: time="2025-05-14T23:54:03.883638179Z" level=info msg="TearDown network for sandbox \"a330a5be0695369a65f8b5b50efcb94668dbb316ccf670873233ab324a9cf62a\" successfully" May 14 23:54:03.883688 containerd[1476]: time="2025-05-14T23:54:03.883653248Z" level=info msg="StopPodSandbox for \"a330a5be0695369a65f8b5b50efcb94668dbb316ccf670873233ab324a9cf62a\" returns successfully" May 14 23:54:03.885558 containerd[1476]: time="2025-05-14T23:54:03.884761558Z" level=info msg="StopPodSandbox for \"342f0da20d9f4dc5bad7675220b463dec5464d9133866fc2665994e8551463d5\"" May 14 23:54:03.885558 containerd[1476]: time="2025-05-14T23:54:03.884861190Z" level=info msg="TearDown network for sandbox \"342f0da20d9f4dc5bad7675220b463dec5464d9133866fc2665994e8551463d5\" successfully" May 14 23:54:03.885558 containerd[1476]: time="2025-05-14T23:54:03.884871620Z" level=info msg="StopPodSandbox for \"342f0da20d9f4dc5bad7675220b463dec5464d9133866fc2665994e8551463d5\" returns successfully" May 14 23:54:03.885558 containerd[1476]: time="2025-05-14T23:54:03.884944631Z" level=info msg="StopPodSandbox for \"3fa3b1003a8b79389e551d954f0b3529a472a25717ba42e128b08e907973f4ea\"" May 14 23:54:03.885558 containerd[1476]: time="2025-05-14T23:54:03.885010227Z" level=info msg="TearDown network for sandbox \"3fa3b1003a8b79389e551d954f0b3529a472a25717ba42e128b08e907973f4ea\" successfully" May 14 23:54:03.885558 containerd[1476]: time="2025-05-14T23:54:03.885024223Z" level=info msg="StopPodSandbox for \"3fa3b1003a8b79389e551d954f0b3529a472a25717ba42e128b08e907973f4ea\" returns successfully" May 14 23:54:03.885558 containerd[1476]: time="2025-05-14T23:54:03.885247493Z" level=info msg="StopPodSandbox for \"f33e4905296fbb90c952de0b4260a7869197385f32c1c9eb9736309b38de54e9\"" May 14 23:54:03.885558 containerd[1476]: time="2025-05-14T23:54:03.885333478Z" level=info msg="TearDown network for sandbox \"f33e4905296fbb90c952de0b4260a7869197385f32c1c9eb9736309b38de54e9\" successfully" May 14 23:54:03.885558 containerd[1476]: time="2025-05-14T23:54:03.885344008Z" level=info msg="StopPodSandbox for \"f33e4905296fbb90c952de0b4260a7869197385f32c1c9eb9736309b38de54e9\" returns successfully" May 14 23:54:03.885558 containerd[1476]: time="2025-05-14T23:54:03.885411016Z" level=info msg="StopPodSandbox for \"33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb\"" May 14 23:54:03.885558 containerd[1476]: time="2025-05-14T23:54:03.885510748Z" level=info msg="TearDown network for sandbox \"33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb\" successfully" May 14 23:54:03.885558 containerd[1476]: time="2025-05-14T23:54:03.885523352Z" level=info msg="StopPodSandbox for \"33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb\" returns successfully" May 14 
23:54:03.885974 containerd[1476]: time="2025-05-14T23:54:03.885889346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k6rt6,Uid:ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b,Namespace:kube-system,Attempt:5,}" May 14 23:54:03.886181 containerd[1476]: time="2025-05-14T23:54:03.886155588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6997bdb66f-xr6kr,Uid:0bd041e0-42d3-43db-a483-12474ebbedc9,Namespace:calico-system,Attempt:4,}" May 14 23:54:04.228530 systemd[1]: run-netns-cni\x2d5d724038\x2dd162\x2dc25a\x2dbe5d\x2d959a854e7df8.mount: Deactivated successfully. May 14 23:54:04.228663 systemd[1]: run-netns-cni\x2d776f048e\x2d4065\x2d5a88\x2dc162\x2db18bd41907ab.mount: Deactivated successfully. May 14 23:54:04.710463 systemd-networkd[1412]: calie68679b7031: Link UP May 14 23:54:04.711622 systemd-networkd[1412]: calie68679b7031: Gained carrier May 14 23:54:05.171597 kubelet[2635]: I0514 23:54:05.171173 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hznl7" podStartSLOduration=3.481628391 podStartE2EDuration="26.171147634s" podCreationTimestamp="2025-05-14 23:53:39 +0000 UTC" firstStartedPulling="2025-05-14 23:53:40.089081272 +0000 UTC m=+11.868311807" lastFinishedPulling="2025-05-14 23:54:02.778600515 +0000 UTC m=+34.557831050" observedRunningTime="2025-05-14 23:54:03.915005516 +0000 UTC m=+35.694236051" watchObservedRunningTime="2025-05-14 23:54:05.171147634 +0000 UTC m=+36.950378169" May 14 23:54:05.174766 containerd[1476]: 2025-05-14 23:54:04.032 [INFO][4429] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 23:54:05.174766 containerd[1476]: 2025-05-14 23:54:04.065 [INFO][4429] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--564c88fc57--7zxh5-eth0 calico-apiserver-564c88fc57- calico-apiserver 9046d3b9-bfcc-40d1-a2ed-7f3e2193399a 679 0 2025-05-14 23:53:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:564c88fc57 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-564c88fc57-7zxh5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie68679b7031 [] []}} ContainerID="8959dc395cce929be2480e5b4f58d73522c1416f7bad1c936ef7efc5492f6f07" Namespace="calico-apiserver" Pod="calico-apiserver-564c88fc57-7zxh5" WorkloadEndpoint="localhost-k8s-calico--apiserver--564c88fc57--7zxh5-" May 14 23:54:05.174766 containerd[1476]: 2025-05-14 23:54:04.066 [INFO][4429] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8959dc395cce929be2480e5b4f58d73522c1416f7bad1c936ef7efc5492f6f07" Namespace="calico-apiserver" Pod="calico-apiserver-564c88fc57-7zxh5" WorkloadEndpoint="localhost-k8s-calico--apiserver--564c88fc57--7zxh5-eth0" May 14 23:54:05.174766 containerd[1476]: 2025-05-14 23:54:04.277 [INFO][4445] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8959dc395cce929be2480e5b4f58d73522c1416f7bad1c936ef7efc5492f6f07" HandleID="k8s-pod-network.8959dc395cce929be2480e5b4f58d73522c1416f7bad1c936ef7efc5492f6f07" Workload="localhost-k8s-calico--apiserver--564c88fc57--7zxh5-eth0" May 14 23:54:05.174766 containerd[1476]: 2025-05-14 23:54:04.289 [INFO][4445] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="8959dc395cce929be2480e5b4f58d73522c1416f7bad1c936ef7efc5492f6f07" HandleID="k8s-pod-network.8959dc395cce929be2480e5b4f58d73522c1416f7bad1c936ef7efc5492f6f07" Workload="localhost-k8s-calico--apiserver--564c88fc57--7zxh5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004ac3d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-564c88fc57-7zxh5", "timestamp":"2025-05-14 23:54:04.277585059 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 23:54:05.174766 containerd[1476]: 2025-05-14 23:54:04.289 [INFO][4445] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 23:54:05.174766 containerd[1476]: 2025-05-14 23:54:04.289 [INFO][4445] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 23:54:05.174766 containerd[1476]: 2025-05-14 23:54:04.289 [INFO][4445] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 23:54:05.174766 containerd[1476]: 2025-05-14 23:54:04.291 [INFO][4445] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8959dc395cce929be2480e5b4f58d73522c1416f7bad1c936ef7efc5492f6f07" host="localhost" May 14 23:54:05.174766 containerd[1476]: 2025-05-14 23:54:04.297 [INFO][4445] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 23:54:05.174766 containerd[1476]: 2025-05-14 23:54:04.313 [INFO][4445] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 23:54:05.174766 containerd[1476]: 2025-05-14 23:54:04.315 [INFO][4445] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 23:54:05.174766 containerd[1476]: 2025-05-14 23:54:04.318 [INFO][4445] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 23:54:05.174766 containerd[1476]: 2025-05-14 23:54:04.318 [INFO][4445] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8959dc395cce929be2480e5b4f58d73522c1416f7bad1c936ef7efc5492f6f07" host="localhost" May 14 23:54:05.174766 containerd[1476]: 2025-05-14 23:54:04.321 [INFO][4445] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8959dc395cce929be2480e5b4f58d73522c1416f7bad1c936ef7efc5492f6f07 May 14 23:54:05.174766 containerd[1476]: 2025-05-14 23:54:04.426 [INFO][4445] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8959dc395cce929be2480e5b4f58d73522c1416f7bad1c936ef7efc5492f6f07" host="localhost" May 14 23:54:05.174766 containerd[1476]: 2025-05-14 23:54:04.608 [INFO][4445] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.8959dc395cce929be2480e5b4f58d73522c1416f7bad1c936ef7efc5492f6f07" host="localhost" May 14 23:54:05.174766 containerd[1476]: 2025-05-14 23:54:04.608 [INFO][4445] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.8959dc395cce929be2480e5b4f58d73522c1416f7bad1c936ef7efc5492f6f07" host="localhost" May 14 23:54:05.174766 containerd[1476]: 2025-05-14 23:54:04.608 [INFO][4445] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 23:54:05.174766 containerd[1476]: 2025-05-14 23:54:04.608 [INFO][4445] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="8959dc395cce929be2480e5b4f58d73522c1416f7bad1c936ef7efc5492f6f07" HandleID="k8s-pod-network.8959dc395cce929be2480e5b4f58d73522c1416f7bad1c936ef7efc5492f6f07" Workload="localhost-k8s-calico--apiserver--564c88fc57--7zxh5-eth0" May 14 23:54:05.175563 containerd[1476]: 2025-05-14 23:54:04.612 [INFO][4429] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8959dc395cce929be2480e5b4f58d73522c1416f7bad1c936ef7efc5492f6f07" Namespace="calico-apiserver" Pod="calico-apiserver-564c88fc57-7zxh5" WorkloadEndpoint="localhost-k8s-calico--apiserver--564c88fc57--7zxh5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--564c88fc57--7zxh5-eth0", GenerateName:"calico-apiserver-564c88fc57-", Namespace:"calico-apiserver", SelfLink:"", UID:"9046d3b9-bfcc-40d1-a2ed-7f3e2193399a", ResourceVersion:"679", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 53, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"564c88fc57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-564c88fc57-7zxh5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie68679b7031", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:54:05.175563 containerd[1476]: 2025-05-14 23:54:04.613 [INFO][4429] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="8959dc395cce929be2480e5b4f58d73522c1416f7bad1c936ef7efc5492f6f07" Namespace="calico-apiserver" Pod="calico-apiserver-564c88fc57-7zxh5" WorkloadEndpoint="localhost-k8s-calico--apiserver--564c88fc57--7zxh5-eth0" May 14 23:54:05.175563 containerd[1476]: 2025-05-14 23:54:04.613 [INFO][4429] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie68679b7031 ContainerID="8959dc395cce929be2480e5b4f58d73522c1416f7bad1c936ef7efc5492f6f07" Namespace="calico-apiserver" Pod="calico-apiserver-564c88fc57-7zxh5" WorkloadEndpoint="localhost-k8s-calico--apiserver--564c88fc57--7zxh5-eth0" May 14 23:54:05.175563 containerd[1476]: 2025-05-14 23:54:04.709 [INFO][4429] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8959dc395cce929be2480e5b4f58d73522c1416f7bad1c936ef7efc5492f6f07" Namespace="calico-apiserver" Pod="calico-apiserver-564c88fc57-7zxh5" WorkloadEndpoint="localhost-k8s-calico--apiserver--564c88fc57--7zxh5-eth0" May 14 23:54:05.175563 containerd[1476]: 2025-05-14 23:54:04.710 [INFO][4429] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8959dc395cce929be2480e5b4f58d73522c1416f7bad1c936ef7efc5492f6f07" Namespace="calico-apiserver" Pod="calico-apiserver-564c88fc57-7zxh5" WorkloadEndpoint="localhost-k8s-calico--apiserver--564c88fc57--7zxh5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--564c88fc57--7zxh5-eth0", GenerateName:"calico-apiserver-564c88fc57-", Namespace:"calico-apiserver", SelfLink:"", UID:"9046d3b9-bfcc-40d1-a2ed-7f3e2193399a", ResourceVersion:"679", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 53, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"564c88fc57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8959dc395cce929be2480e5b4f58d73522c1416f7bad1c936ef7efc5492f6f07", Pod:"calico-apiserver-564c88fc57-7zxh5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie68679b7031", MAC:"7a:95:c4:21:13:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:54:05.175563 containerd[1476]: 2025-05-14 23:54:05.172 [INFO][4429] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8959dc395cce929be2480e5b4f58d73522c1416f7bad1c936ef7efc5492f6f07" Namespace="calico-apiserver" Pod="calico-apiserver-564c88fc57-7zxh5" WorkloadEndpoint="localhost-k8s-calico--apiserver--564c88fc57--7zxh5-eth0" May 14 23:54:05.684455 kernel: bpftool[4617]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 14 23:54:05.935279 systemd-networkd[1412]: vxlan.calico: Link UP May 14 23:54:05.935289 systemd-networkd[1412]: vxlan.calico: Gained carrier May 14 23:54:06.141523 systemd-networkd[1412]: calie68679b7031: Gained IPv6LL May 14 23:54:06.371373 containerd[1476]: time="2025-05-14T23:54:06.371190671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:54:06.371373 containerd[1476]: time="2025-05-14T23:54:06.371264172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:54:06.371373 containerd[1476]: time="2025-05-14T23:54:06.371279551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:06.373332 containerd[1476]: time="2025-05-14T23:54:06.371410732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:06.399597 systemd[1]: Started cri-containerd-8959dc395cce929be2480e5b4f58d73522c1416f7bad1c936ef7efc5492f6f07.scope - libcontainer container 8959dc395cce929be2480e5b4f58d73522c1416f7bad1c936ef7efc5492f6f07. 
May 14 23:54:06.414178 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 23:54:06.461814 containerd[1476]: time="2025-05-14T23:54:06.461752041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564c88fc57-7zxh5,Uid:9046d3b9-bfcc-40d1-a2ed-7f3e2193399a,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"8959dc395cce929be2480e5b4f58d73522c1416f7bad1c936ef7efc5492f6f07\"" May 14 23:54:06.463951 containerd[1476]: time="2025-05-14T23:54:06.463857621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 23:54:07.572305 systemd[1]: Started sshd@9-10.0.0.25:22-10.0.0.1:54912.service - OpenSSH per-connection server daemon (10.0.0.1:54912). May 14 23:54:07.631794 systemd-networkd[1412]: cali8869f5640bc: Link UP May 14 23:54:07.632397 systemd-networkd[1412]: cali8869f5640bc: Gained carrier May 14 23:54:07.697156 sshd[4848]: Accepted publickey for core from 10.0.0.1 port 54912 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 14 23:54:07.699249 sshd-session[4848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:07.704662 systemd-logind[1460]: New session 10 of user core. May 14 23:54:07.711711 systemd[1]: Started session-10.scope - Session 10 of User core. May 14 23:54:07.838940 containerd[1476]: 2025-05-14 23:54:06.375 [INFO][4692] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--mcmz8-eth0 coredns-668d6bf9bc- kube-system 78747a92-dcde-4a68-97b9-39a31a2ff2f2 680 0 2025-05-14 23:53:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-mcmz8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8869f5640bc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="58b2823e5526d3a5d9f0cca20b92690b48dee6a5d60c686a02ddf660ca35fed3" Namespace="kube-system" Pod="coredns-668d6bf9bc-mcmz8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mcmz8-" May 14 23:54:07.838940 containerd[1476]: 2025-05-14 23:54:06.375 [INFO][4692] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="58b2823e5526d3a5d9f0cca20b92690b48dee6a5d60c686a02ddf660ca35fed3" Namespace="kube-system" Pod="coredns-668d6bf9bc-mcmz8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mcmz8-eth0" May 14 23:54:07.838940 containerd[1476]: 2025-05-14 23:54:06.417 [INFO][4731] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="58b2823e5526d3a5d9f0cca20b92690b48dee6a5d60c686a02ddf660ca35fed3" HandleID="k8s-pod-network.58b2823e5526d3a5d9f0cca20b92690b48dee6a5d60c686a02ddf660ca35fed3" Workload="localhost-k8s-coredns--668d6bf9bc--mcmz8-eth0" May 14 23:54:07.838940 containerd[1476]: 2025-05-14 23:54:06.838 [INFO][4731] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="58b2823e5526d3a5d9f0cca20b92690b48dee6a5d60c686a02ddf660ca35fed3" HandleID="k8s-pod-network.58b2823e5526d3a5d9f0cca20b92690b48dee6a5d60c686a02ddf660ca35fed3" Workload="localhost-k8s-coredns--668d6bf9bc--mcmz8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042f0c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-mcmz8", "timestamp":"2025-05-14 23:54:06.417699753 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 23:54:07.838940 containerd[1476]: 2025-05-14 23:54:06.838 [INFO][4731] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 23:54:07.838940 containerd[1476]: 2025-05-14 23:54:06.838 [INFO][4731] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 23:54:07.838940 containerd[1476]: 2025-05-14 23:54:06.838 [INFO][4731] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 23:54:07.838940 containerd[1476]: 2025-05-14 23:54:06.906 [INFO][4731] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.58b2823e5526d3a5d9f0cca20b92690b48dee6a5d60c686a02ddf660ca35fed3" host="localhost" May 14 23:54:07.838940 containerd[1476]: 2025-05-14 23:54:07.345 [INFO][4731] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 23:54:07.838940 containerd[1476]: 2025-05-14 23:54:07.352 [INFO][4731] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 23:54:07.838940 containerd[1476]: 2025-05-14 23:54:07.353 [INFO][4731] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 23:54:07.838940 containerd[1476]: 2025-05-14 23:54:07.355 [INFO][4731] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 23:54:07.838940 containerd[1476]: 2025-05-14 23:54:07.355 [INFO][4731] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.58b2823e5526d3a5d9f0cca20b92690b48dee6a5d60c686a02ddf660ca35fed3" host="localhost" May 14 23:54:07.838940 containerd[1476]: 2025-05-14 23:54:07.357 [INFO][4731] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.58b2823e5526d3a5d9f0cca20b92690b48dee6a5d60c686a02ddf660ca35fed3 May 14 23:54:07.838940 containerd[1476]: 2025-05-14 23:54:07.579 [INFO][4731] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.58b2823e5526d3a5d9f0cca20b92690b48dee6a5d60c686a02ddf660ca35fed3" host="localhost" May 14 23:54:07.838940 containerd[1476]: 2025-05-14 23:54:07.625 [INFO][4731] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.58b2823e5526d3a5d9f0cca20b92690b48dee6a5d60c686a02ddf660ca35fed3" host="localhost" May 14 23:54:07.838940 containerd[1476]: 2025-05-14 23:54:07.625 [INFO][4731] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.58b2823e5526d3a5d9f0cca20b92690b48dee6a5d60c686a02ddf660ca35fed3" host="localhost" May 14 23:54:07.838940 containerd[1476]: 2025-05-14 23:54:07.625 [INFO][4731] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 23:54:07.838940 containerd[1476]: 2025-05-14 23:54:07.625 [INFO][4731] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="58b2823e5526d3a5d9f0cca20b92690b48dee6a5d60c686a02ddf660ca35fed3" HandleID="k8s-pod-network.58b2823e5526d3a5d9f0cca20b92690b48dee6a5d60c686a02ddf660ca35fed3" Workload="localhost-k8s-coredns--668d6bf9bc--mcmz8-eth0" May 14 23:54:07.841888 containerd[1476]: 2025-05-14 23:54:07.629 [INFO][4692] cni-plugin/k8s.go 386: Populated endpoint ContainerID="58b2823e5526d3a5d9f0cca20b92690b48dee6a5d60c686a02ddf660ca35fed3" Namespace="kube-system" Pod="coredns-668d6bf9bc-mcmz8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mcmz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--mcmz8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"78747a92-dcde-4a68-97b9-39a31a2ff2f2", ResourceVersion:"680", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 53, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-mcmz8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8869f5640bc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:54:07.841888 containerd[1476]: 2025-05-14 23:54:07.629 [INFO][4692] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="58b2823e5526d3a5d9f0cca20b92690b48dee6a5d60c686a02ddf660ca35fed3" Namespace="kube-system" Pod="coredns-668d6bf9bc-mcmz8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mcmz8-eth0" May 14 23:54:07.841888 containerd[1476]: 2025-05-14 23:54:07.629 [INFO][4692] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8869f5640bc ContainerID="58b2823e5526d3a5d9f0cca20b92690b48dee6a5d60c686a02ddf660ca35fed3" Namespace="kube-system" Pod="coredns-668d6bf9bc-mcmz8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mcmz8-eth0" May 14 23:54:07.841888 containerd[1476]: 2025-05-14 23:54:07.632 [INFO][4692] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="58b2823e5526d3a5d9f0cca20b92690b48dee6a5d60c686a02ddf660ca35fed3" Namespace="kube-system" Pod="coredns-668d6bf9bc-mcmz8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mcmz8-eth0" May 14 23:54:07.841888 containerd[1476]: 2025-05-14 23:54:07.632 
[INFO][4692] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="58b2823e5526d3a5d9f0cca20b92690b48dee6a5d60c686a02ddf660ca35fed3" Namespace="kube-system" Pod="coredns-668d6bf9bc-mcmz8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mcmz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--mcmz8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"78747a92-dcde-4a68-97b9-39a31a2ff2f2", ResourceVersion:"680", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 53, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"58b2823e5526d3a5d9f0cca20b92690b48dee6a5d60c686a02ddf660ca35fed3", Pod:"coredns-668d6bf9bc-mcmz8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8869f5640bc", MAC:"7a:58:a3:50:c3:45", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:54:07.841888 containerd[1476]: 2025-05-14 23:54:07.831 [INFO][4692] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="58b2823e5526d3a5d9f0cca20b92690b48dee6a5d60c686a02ddf660ca35fed3" Namespace="kube-system" Pod="coredns-668d6bf9bc-mcmz8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mcmz8-eth0" May 14 23:54:07.917988 sshd[4851]: Connection closed by 10.0.0.1 port 54912 May 14 23:54:07.918443 sshd-session[4848]: pam_unix(sshd:session): session closed for user core May 14 23:54:07.924396 systemd[1]: sshd@9-10.0.0.25:22-10.0.0.1:54912.service: Deactivated successfully. May 14 23:54:07.926729 systemd[1]: session-10.scope: Deactivated successfully. May 14 23:54:07.927860 systemd-logind[1460]: Session 10 logged out. Waiting for processes to exit. May 14 23:54:07.928892 systemd-logind[1460]: Removed session 10. May 14 23:54:07.932598 systemd-networkd[1412]: vxlan.calico: Gained IPv6LL May 14 23:54:08.037109 containerd[1476]: time="2025-05-14T23:54:08.036993214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:54:08.037109 containerd[1476]: time="2025-05-14T23:54:08.037066655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:54:08.037109 containerd[1476]: time="2025-05-14T23:54:08.037077667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:08.037295 containerd[1476]: time="2025-05-14T23:54:08.037162358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:08.059686 systemd[1]: Started cri-containerd-58b2823e5526d3a5d9f0cca20b92690b48dee6a5d60c686a02ddf660ca35fed3.scope - libcontainer container 58b2823e5526d3a5d9f0cca20b92690b48dee6a5d60c686a02ddf660ca35fed3. May 14 23:54:08.074116 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 23:54:08.103700 containerd[1476]: time="2025-05-14T23:54:08.103550997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mcmz8,Uid:78747a92-dcde-4a68-97b9-39a31a2ff2f2,Namespace:kube-system,Attempt:3,} returns sandbox id \"58b2823e5526d3a5d9f0cca20b92690b48dee6a5d60c686a02ddf660ca35fed3\"" May 14 23:54:08.106923 containerd[1476]: time="2025-05-14T23:54:08.106882513Z" level=info msg="CreateContainer within sandbox \"58b2823e5526d3a5d9f0cca20b92690b48dee6a5d60c686a02ddf660ca35fed3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 23:54:08.970589 systemd-networkd[1412]: cali7a59f17bd99: Link UP May 14 23:54:08.970857 systemd-networkd[1412]: cali7a59f17bd99: Gained carrier May 14 23:54:09.158796 containerd[1476]: 2025-05-14 23:54:06.650 [INFO][4770] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--zx5hz-eth0 csi-node-driver- calico-system c3b238b4-7acc-401a-8dae-17e6c81aeb42 580 0 2025-05-14 23:53:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-zx5hz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7a59f17bd99 [] []}} ContainerID="367a5a31dfa10fe70eb08458cfa69ed8432351c28f195591507e1bba0d326c15" Namespace="calico-system" Pod="csi-node-driver-zx5hz" WorkloadEndpoint="localhost-k8s-csi--node--driver--zx5hz-" May 14 23:54:09.158796 containerd[1476]: 2025-05-14 23:54:06.650 [INFO][4770] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="367a5a31dfa10fe70eb08458cfa69ed8432351c28f195591507e1bba0d326c15" Namespace="calico-system" Pod="csi-node-driver-zx5hz" WorkloadEndpoint="localhost-k8s-csi--node--driver--zx5hz-eth0" May 14 23:54:09.158796 containerd[1476]: 2025-05-14 23:54:06.691 [INFO][4791] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="367a5a31dfa10fe70eb08458cfa69ed8432351c28f195591507e1bba0d326c15" HandleID="k8s-pod-network.367a5a31dfa10fe70eb08458cfa69ed8432351c28f195591507e1bba0d326c15" Workload="localhost-k8s-csi--node--driver--zx5hz-eth0" May 14 23:54:09.158796 containerd[1476]: 2025-05-14 23:54:06.906 [INFO][4791] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="367a5a31dfa10fe70eb08458cfa69ed8432351c28f195591507e1bba0d326c15" HandleID="k8s-pod-network.367a5a31dfa10fe70eb08458cfa69ed8432351c28f195591507e1bba0d326c15" 
Workload="localhost-k8s-csi--node--driver--zx5hz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000365e20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-zx5hz", "timestamp":"2025-05-14 23:54:06.691276464 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 23:54:09.158796 containerd[1476]: 2025-05-14 23:54:06.906 [INFO][4791] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 23:54:09.158796 containerd[1476]: 2025-05-14 23:54:07.626 [INFO][4791] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 23:54:09.158796 containerd[1476]: 2025-05-14 23:54:07.626 [INFO][4791] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 23:54:09.158796 containerd[1476]: 2025-05-14 23:54:07.865 [INFO][4791] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.367a5a31dfa10fe70eb08458cfa69ed8432351c28f195591507e1bba0d326c15" host="localhost" May 14 23:54:09.158796 containerd[1476]: 2025-05-14 23:54:08.108 [INFO][4791] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 23:54:09.158796 containerd[1476]: 2025-05-14 23:54:08.511 [INFO][4791] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 23:54:09.158796 containerd[1476]: 2025-05-14 23:54:08.599 [INFO][4791] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 23:54:09.158796 containerd[1476]: 2025-05-14 23:54:08.601 [INFO][4791] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 23:54:09.158796 containerd[1476]: 2025-05-14 23:54:08.601 [INFO][4791] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.367a5a31dfa10fe70eb08458cfa69ed8432351c28f195591507e1bba0d326c15" host="localhost" May 14 23:54:09.158796 containerd[1476]: 2025-05-14 23:54:08.603 [INFO][4791] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.367a5a31dfa10fe70eb08458cfa69ed8432351c28f195591507e1bba0d326c15 May 14 23:54:09.158796 containerd[1476]: 2025-05-14 23:54:08.774 [INFO][4791] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.367a5a31dfa10fe70eb08458cfa69ed8432351c28f195591507e1bba0d326c15" host="localhost" May 14 23:54:09.158796 containerd[1476]: 2025-05-14 23:54:08.963 [INFO][4791] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.367a5a31dfa10fe70eb08458cfa69ed8432351c28f195591507e1bba0d326c15" host="localhost" May 14 23:54:09.158796 containerd[1476]: 2025-05-14 23:54:08.963 [INFO][4791] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.367a5a31dfa10fe70eb08458cfa69ed8432351c28f195591507e1bba0d326c15" host="localhost" May 14 23:54:09.158796 containerd[1476]: 2025-05-14 23:54:08.963 [INFO][4791] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 23:54:09.158796 containerd[1476]: 2025-05-14 23:54:08.963 [INFO][4791] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="367a5a31dfa10fe70eb08458cfa69ed8432351c28f195591507e1bba0d326c15" HandleID="k8s-pod-network.367a5a31dfa10fe70eb08458cfa69ed8432351c28f195591507e1bba0d326c15" Workload="localhost-k8s-csi--node--driver--zx5hz-eth0" May 14 23:54:09.159702 containerd[1476]: 2025-05-14 23:54:08.967 [INFO][4770] cni-plugin/k8s.go 386: Populated endpoint ContainerID="367a5a31dfa10fe70eb08458cfa69ed8432351c28f195591507e1bba0d326c15" Namespace="calico-system" Pod="csi-node-driver-zx5hz" WorkloadEndpoint="localhost-k8s-csi--node--driver--zx5hz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zx5hz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c3b238b4-7acc-401a-8dae-17e6c81aeb42", ResourceVersion:"580", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 53, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-zx5hz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7a59f17bd99", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:54:09.159702 containerd[1476]: 2025-05-14 23:54:08.967 [INFO][4770] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="367a5a31dfa10fe70eb08458cfa69ed8432351c28f195591507e1bba0d326c15" Namespace="calico-system" Pod="csi-node-driver-zx5hz" WorkloadEndpoint="localhost-k8s-csi--node--driver--zx5hz-eth0" May 14 23:54:09.159702 containerd[1476]: 2025-05-14 23:54:08.967 [INFO][4770] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a59f17bd99 ContainerID="367a5a31dfa10fe70eb08458cfa69ed8432351c28f195591507e1bba0d326c15" Namespace="calico-system" Pod="csi-node-driver-zx5hz" WorkloadEndpoint="localhost-k8s-csi--node--driver--zx5hz-eth0" May 14 23:54:09.159702 containerd[1476]: 2025-05-14 23:54:08.970 [INFO][4770] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="367a5a31dfa10fe70eb08458cfa69ed8432351c28f195591507e1bba0d326c15" Namespace="calico-system" Pod="csi-node-driver-zx5hz" WorkloadEndpoint="localhost-k8s-csi--node--driver--zx5hz-eth0" May 14 23:54:09.159702 containerd[1476]: 2025-05-14 23:54:08.972 [INFO][4770] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="367a5a31dfa10fe70eb08458cfa69ed8432351c28f195591507e1bba0d326c15" Namespace="calico-system" Pod="csi-node-driver-zx5hz" WorkloadEndpoint="localhost-k8s-csi--node--driver--zx5hz-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zx5hz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c3b238b4-7acc-401a-8dae-17e6c81aeb42", ResourceVersion:"580", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 53, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"367a5a31dfa10fe70eb08458cfa69ed8432351c28f195591507e1bba0d326c15", Pod:"csi-node-driver-zx5hz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7a59f17bd99", MAC:"e6:31:70:d7:fd:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:54:09.159702 containerd[1476]: 2025-05-14 23:54:09.154 [INFO][4770] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="367a5a31dfa10fe70eb08458cfa69ed8432351c28f195591507e1bba0d326c15" Namespace="calico-system" Pod="csi-node-driver-zx5hz" WorkloadEndpoint="localhost-k8s-csi--node--driver--zx5hz-eth0" May 14 23:54:09.276679 systemd-networkd[1412]: cali8869f5640bc: Gained IPv6LL May 14 23:54:09.341463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3328614062.mount: Deactivated successfully. May 14 23:54:09.424613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount123035809.mount: Deactivated successfully. May 14 23:54:09.514487 systemd-networkd[1412]: califb06a3600c1: Link UP May 14 23:54:09.514733 systemd-networkd[1412]: califb06a3600c1: Gained carrier May 14 23:54:09.543006 containerd[1476]: time="2025-05-14T23:54:09.542100103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:54:09.543006 containerd[1476]: time="2025-05-14T23:54:09.542854428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:54:09.543006 containerd[1476]: time="2025-05-14T23:54:09.542872733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:09.543195 containerd[1476]: time="2025-05-14T23:54:09.542995709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:09.559557 systemd[1]: Started cri-containerd-367a5a31dfa10fe70eb08458cfa69ed8432351c28f195591507e1bba0d326c15.scope - libcontainer container 367a5a31dfa10fe70eb08458cfa69ed8432351c28f195591507e1bba0d326c15. 
May 14 23:54:09.572665 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 23:54:09.584835 containerd[1476]: time="2025-05-14T23:54:09.584790987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zx5hz,Uid:c3b238b4-7acc-401a-8dae-17e6c81aeb42,Namespace:calico-system,Attempt:4,} returns sandbox id \"367a5a31dfa10fe70eb08458cfa69ed8432351c28f195591507e1bba0d326c15\"" May 14 23:54:09.851955 containerd[1476]: 2025-05-14 23:54:06.649 [INFO][4749] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--564c88fc57--zsf99-eth0 calico-apiserver-564c88fc57- calico-apiserver 274c1c1a-50ff-4e53-bdf7-547b26e013ec 677 0 2025-05-14 23:53:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:564c88fc57 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-564c88fc57-zsf99 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califb06a3600c1 [] []}} ContainerID="540302443e16946b9c4fabfd6fc9012a9cb47e2f42d5ff37b9f2917ad14028eb" Namespace="calico-apiserver" Pod="calico-apiserver-564c88fc57-zsf99" WorkloadEndpoint="localhost-k8s-calico--apiserver--564c88fc57--zsf99-" May 14 23:54:09.851955 containerd[1476]: 2025-05-14 23:54:06.649 [INFO][4749] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="540302443e16946b9c4fabfd6fc9012a9cb47e2f42d5ff37b9f2917ad14028eb" Namespace="calico-apiserver" Pod="calico-apiserver-564c88fc57-zsf99" WorkloadEndpoint="localhost-k8s-calico--apiserver--564c88fc57--zsf99-eth0" May 14 23:54:09.851955 containerd[1476]: 2025-05-14 23:54:06.688 [INFO][4784] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="540302443e16946b9c4fabfd6fc9012a9cb47e2f42d5ff37b9f2917ad14028eb" HandleID="k8s-pod-network.540302443e16946b9c4fabfd6fc9012a9cb47e2f42d5ff37b9f2917ad14028eb" Workload="localhost-k8s-calico--apiserver--564c88fc57--zsf99-eth0" May 14 23:54:09.851955 containerd[1476]: 2025-05-14 23:54:06.906 [INFO][4784] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="540302443e16946b9c4fabfd6fc9012a9cb47e2f42d5ff37b9f2917ad14028eb" HandleID="k8s-pod-network.540302443e16946b9c4fabfd6fc9012a9cb47e2f42d5ff37b9f2917ad14028eb" Workload="localhost-k8s-calico--apiserver--564c88fc57--zsf99-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e9810), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-564c88fc57-zsf99", "timestamp":"2025-05-14 23:54:06.688377833 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 23:54:09.851955 containerd[1476]: 2025-05-14 23:54:06.906 [INFO][4784] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 23:54:09.851955 containerd[1476]: 2025-05-14 23:54:08.963 [INFO][4784] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 23:54:09.851955 containerd[1476]: 2025-05-14 23:54:08.963 [INFO][4784] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 23:54:09.851955 containerd[1476]: 2025-05-14 23:54:08.966 [INFO][4784] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.540302443e16946b9c4fabfd6fc9012a9cb47e2f42d5ff37b9f2917ad14028eb" host="localhost" May 14 23:54:09.851955 containerd[1476]: 2025-05-14 23:54:08.971 [INFO][4784] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 23:54:09.851955 containerd[1476]: 2025-05-14 23:54:08.977 [INFO][4784] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 23:54:09.851955 containerd[1476]: 2025-05-14 23:54:09.152 [INFO][4784] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 23:54:09.851955 containerd[1476]: 2025-05-14 23:54:09.157 [INFO][4784] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 23:54:09.851955 containerd[1476]: 2025-05-14 23:54:09.157 [INFO][4784] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.540302443e16946b9c4fabfd6fc9012a9cb47e2f42d5ff37b9f2917ad14028eb" host="localhost" May 14 23:54:09.851955 containerd[1476]: 2025-05-14 23:54:09.203 [INFO][4784] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.540302443e16946b9c4fabfd6fc9012a9cb47e2f42d5ff37b9f2917ad14028eb May 14 23:54:09.851955 containerd[1476]: 2025-05-14 23:54:09.226 [INFO][4784] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.540302443e16946b9c4fabfd6fc9012a9cb47e2f42d5ff37b9f2917ad14028eb" host="localhost" May 14 23:54:09.851955 containerd[1476]: 2025-05-14 23:54:09.509 [INFO][4784] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.540302443e16946b9c4fabfd6fc9012a9cb47e2f42d5ff37b9f2917ad14028eb" host="localhost" May 14 23:54:09.851955 containerd[1476]: 2025-05-14 23:54:09.509 [INFO][4784] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.540302443e16946b9c4fabfd6fc9012a9cb47e2f42d5ff37b9f2917ad14028eb" host="localhost" May 14 23:54:09.851955 containerd[1476]: 2025-05-14 23:54:09.509 [INFO][4784] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
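Each of the four allocations so far follows the same recorded sequence: acquire the host-wide lock, confirm the block affinity, load the block, hand out the next free address against a handle, write the block back, release the lock. The Go sketch below is a deliberately simplified, generic stand-in for that sequence; the type and the handle labels are invented for the illustration and this is not Calico's IPAM implementation:

    package main

    import (
    	"fmt"
    	"net/netip"
    	"sync"
    )

    // blockAllocator illustrates the pattern the log records: one host-wide
    // lock, one affine block, next-free assignment recorded per handle.
    type blockAllocator struct {
    	mu    sync.Mutex // plays the role of the "host-wide IPAM lock"
    	block netip.Prefix
    	next  netip.Addr
    	used  map[netip.Addr]string // address -> handle
    }

    func newBlockAllocator(block netip.Prefix) *blockAllocator {
    	return &blockAllocator{
    		block: block,
    		next:  block.Addr().Next(), // skip the network address itself
    		used:  map[netip.Addr]string{},
    	}
    }

    func (a *blockAllocator) autoAssign(handle string) (netip.Addr, error) {
    	a.mu.Lock()         // "About to acquire host-wide IPAM lock."
    	defer a.mu.Unlock() // "Released host-wide IPAM lock."
    	for addr := a.next; a.block.Contains(addr); addr = addr.Next() {
    		if _, taken := a.used[addr]; !taken {
    			a.used[addr] = handle // "Writing block in order to claim IPs"
    			a.next = addr.Next()
    			return addr, nil
    		}
    	}
    	return netip.Addr{}, fmt.Errorf("block %s exhausted", a.block)
    }

    func main() {
    	alloc := newBlockAllocator(netip.MustParsePrefix("192.168.88.128/26"))
    	// Shortened pod names, used here only as handle labels for the demo.
    	for _, handle := range []string{"apiserver-7zxh5", "coredns-mcmz8", "csi-node-driver-zx5hz"} {
    		addr, _ := alloc.autoAssign(handle)
    		fmt.Println(handle, "->", addr) // .129, .130, .131, matching the order in the log
    	}
    }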
May 14 23:54:09.851955 containerd[1476]: 2025-05-14 23:54:09.509 [INFO][4784] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="540302443e16946b9c4fabfd6fc9012a9cb47e2f42d5ff37b9f2917ad14028eb" HandleID="k8s-pod-network.540302443e16946b9c4fabfd6fc9012a9cb47e2f42d5ff37b9f2917ad14028eb" Workload="localhost-k8s-calico--apiserver--564c88fc57--zsf99-eth0" May 14 23:54:09.852684 containerd[1476]: 2025-05-14 23:54:09.512 [INFO][4749] cni-plugin/k8s.go 386: Populated endpoint ContainerID="540302443e16946b9c4fabfd6fc9012a9cb47e2f42d5ff37b9f2917ad14028eb" Namespace="calico-apiserver" Pod="calico-apiserver-564c88fc57-zsf99" WorkloadEndpoint="localhost-k8s-calico--apiserver--564c88fc57--zsf99-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--564c88fc57--zsf99-eth0", GenerateName:"calico-apiserver-564c88fc57-", Namespace:"calico-apiserver", SelfLink:"", UID:"274c1c1a-50ff-4e53-bdf7-547b26e013ec", ResourceVersion:"677", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 53, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"564c88fc57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-564c88fc57-zsf99", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb06a3600c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:54:09.852684 containerd[1476]: 2025-05-14 23:54:09.512 [INFO][4749] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="540302443e16946b9c4fabfd6fc9012a9cb47e2f42d5ff37b9f2917ad14028eb" Namespace="calico-apiserver" Pod="calico-apiserver-564c88fc57-zsf99" WorkloadEndpoint="localhost-k8s-calico--apiserver--564c88fc57--zsf99-eth0" May 14 23:54:09.852684 containerd[1476]: 2025-05-14 23:54:09.512 [INFO][4749] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb06a3600c1 ContainerID="540302443e16946b9c4fabfd6fc9012a9cb47e2f42d5ff37b9f2917ad14028eb" Namespace="calico-apiserver" Pod="calico-apiserver-564c88fc57-zsf99" WorkloadEndpoint="localhost-k8s-calico--apiserver--564c88fc57--zsf99-eth0" May 14 23:54:09.852684 containerd[1476]: 2025-05-14 23:54:09.514 [INFO][4749] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="540302443e16946b9c4fabfd6fc9012a9cb47e2f42d5ff37b9f2917ad14028eb" Namespace="calico-apiserver" Pod="calico-apiserver-564c88fc57-zsf99" WorkloadEndpoint="localhost-k8s-calico--apiserver--564c88fc57--zsf99-eth0" May 14 23:54:09.852684 containerd[1476]: 2025-05-14 23:54:09.515 [INFO][4749] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="540302443e16946b9c4fabfd6fc9012a9cb47e2f42d5ff37b9f2917ad14028eb" Namespace="calico-apiserver" Pod="calico-apiserver-564c88fc57-zsf99" WorkloadEndpoint="localhost-k8s-calico--apiserver--564c88fc57--zsf99-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--564c88fc57--zsf99-eth0", GenerateName:"calico-apiserver-564c88fc57-", Namespace:"calico-apiserver", SelfLink:"", UID:"274c1c1a-50ff-4e53-bdf7-547b26e013ec", ResourceVersion:"677", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 53, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"564c88fc57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"540302443e16946b9c4fabfd6fc9012a9cb47e2f42d5ff37b9f2917ad14028eb", Pod:"calico-apiserver-564c88fc57-zsf99", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb06a3600c1", MAC:"02:dc:22:aa:91:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:54:09.852684 containerd[1476]: 2025-05-14 23:54:09.846 [INFO][4749] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="540302443e16946b9c4fabfd6fc9012a9cb47e2f42d5ff37b9f2917ad14028eb" Namespace="calico-apiserver" Pod="calico-apiserver-564c88fc57-zsf99" WorkloadEndpoint="localhost-k8s-calico--apiserver--564c88fc57--zsf99-eth0" May 14 23:54:10.111262 containerd[1476]: time="2025-05-14T23:54:10.111107854Z" level=info msg="CreateContainer within sandbox \"58b2823e5526d3a5d9f0cca20b92690b48dee6a5d60c686a02ddf660ca35fed3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1178cfb2bc8fc58e93f5c9d4ff530d822837b9a1866a0b8325776b1dff3883b6\"" May 14 23:54:10.111824 containerd[1476]: time="2025-05-14T23:54:10.111779580Z" level=info msg="StartContainer for \"1178cfb2bc8fc58e93f5c9d4ff530d822837b9a1866a0b8325776b1dff3883b6\"" May 14 23:54:10.140566 systemd[1]: Started cri-containerd-1178cfb2bc8fc58e93f5c9d4ff530d822837b9a1866a0b8325776b1dff3883b6.scope - libcontainer container 1178cfb2bc8fc58e93f5c9d4ff530d822837b9a1866a0b8325776b1dff3883b6. May 14 23:54:10.184472 systemd-networkd[1412]: caliad3ac9830f1: Link UP May 14 23:54:10.186335 systemd-networkd[1412]: caliad3ac9830f1: Gained carrier May 14 23:54:10.275326 containerd[1476]: time="2025-05-14T23:54:10.275232591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:54:10.275326 containerd[1476]: time="2025-05-14T23:54:10.275287717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:54:10.275326 containerd[1476]: time="2025-05-14T23:54:10.275298047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:10.276457 containerd[1476]: time="2025-05-14T23:54:10.276392683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:10.313601 systemd[1]: Started cri-containerd-540302443e16946b9c4fabfd6fc9012a9cb47e2f42d5ff37b9f2917ad14028eb.scope - libcontainer container 540302443e16946b9c4fabfd6fc9012a9cb47e2f42d5ff37b9f2917ad14028eb. May 14 23:54:10.328557 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 23:54:10.363828 containerd[1476]: 2025-05-14 23:54:07.349 [INFO][4801] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--k6rt6-eth0 coredns-668d6bf9bc- kube-system ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b 674 0 2025-05-14 23:53:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-k6rt6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliad3ac9830f1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4d7413f9ae84150dc17926770f5fe1083e68cd7a5bdfa2c09d222564bf29ab4d" Namespace="kube-system" Pod="coredns-668d6bf9bc-k6rt6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k6rt6-" May 14 23:54:10.363828 containerd[1476]: 2025-05-14 23:54:07.349 [INFO][4801] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4d7413f9ae84150dc17926770f5fe1083e68cd7a5bdfa2c09d222564bf29ab4d" Namespace="kube-system" Pod="coredns-668d6bf9bc-k6rt6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k6rt6-eth0" May 14 23:54:10.363828 containerd[1476]: 2025-05-14 23:54:07.383 [INFO][4831] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d7413f9ae84150dc17926770f5fe1083e68cd7a5bdfa2c09d222564bf29ab4d" HandleID="k8s-pod-network.4d7413f9ae84150dc17926770f5fe1083e68cd7a5bdfa2c09d222564bf29ab4d" Workload="localhost-k8s-coredns--668d6bf9bc--k6rt6-eth0" May 14 23:54:10.363828 containerd[1476]: 2025-05-14 23:54:07.582 [INFO][4831] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4d7413f9ae84150dc17926770f5fe1083e68cd7a5bdfa2c09d222564bf29ab4d" HandleID="k8s-pod-network.4d7413f9ae84150dc17926770f5fe1083e68cd7a5bdfa2c09d222564bf29ab4d" Workload="localhost-k8s-coredns--668d6bf9bc--k6rt6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ab640), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-k6rt6", "timestamp":"2025-05-14 23:54:07.383814346 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 23:54:10.363828 containerd[1476]: 2025-05-14 23:54:07.582 [INFO][4831] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 23:54:10.363828 containerd[1476]: 2025-05-14 23:54:09.509 [INFO][4831] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 23:54:10.363828 containerd[1476]: 2025-05-14 23:54:09.509 [INFO][4831] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 23:54:10.363828 containerd[1476]: 2025-05-14 23:54:09.847 [INFO][4831] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4d7413f9ae84150dc17926770f5fe1083e68cd7a5bdfa2c09d222564bf29ab4d" host="localhost" May 14 23:54:10.363828 containerd[1476]: 2025-05-14 23:54:09.904 [INFO][4831] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 23:54:10.363828 containerd[1476]: 2025-05-14 23:54:09.920 [INFO][4831] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 23:54:10.363828 containerd[1476]: 2025-05-14 23:54:09.921 [INFO][4831] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 23:54:10.363828 containerd[1476]: 2025-05-14 23:54:09.923 [INFO][4831] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 23:54:10.363828 containerd[1476]: 2025-05-14 23:54:09.923 [INFO][4831] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4d7413f9ae84150dc17926770f5fe1083e68cd7a5bdfa2c09d222564bf29ab4d" host="localhost" May 14 23:54:10.363828 containerd[1476]: 2025-05-14 23:54:09.925 [INFO][4831] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4d7413f9ae84150dc17926770f5fe1083e68cd7a5bdfa2c09d222564bf29ab4d May 14 23:54:10.363828 containerd[1476]: 2025-05-14 23:54:10.105 [INFO][4831] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4d7413f9ae84150dc17926770f5fe1083e68cd7a5bdfa2c09d222564bf29ab4d" host="localhost" May 14 23:54:10.363828 containerd[1476]: 2025-05-14 23:54:10.171 [INFO][4831] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.4d7413f9ae84150dc17926770f5fe1083e68cd7a5bdfa2c09d222564bf29ab4d" host="localhost" May 14 23:54:10.363828 containerd[1476]: 2025-05-14 23:54:10.171 [INFO][4831] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.4d7413f9ae84150dc17926770f5fe1083e68cd7a5bdfa2c09d222564bf29ab4d" host="localhost" May 14 23:54:10.363828 containerd[1476]: 2025-05-14 23:54:10.171 [INFO][4831] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 23:54:10.363828 containerd[1476]: 2025-05-14 23:54:10.171 [INFO][4831] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="4d7413f9ae84150dc17926770f5fe1083e68cd7a5bdfa2c09d222564bf29ab4d" HandleID="k8s-pod-network.4d7413f9ae84150dc17926770f5fe1083e68cd7a5bdfa2c09d222564bf29ab4d" Workload="localhost-k8s-coredns--668d6bf9bc--k6rt6-eth0" May 14 23:54:10.365397 containerd[1476]: 2025-05-14 23:54:10.177 [INFO][4801] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4d7413f9ae84150dc17926770f5fe1083e68cd7a5bdfa2c09d222564bf29ab4d" Namespace="kube-system" Pod="coredns-668d6bf9bc-k6rt6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k6rt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--k6rt6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b", ResourceVersion:"674", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 53, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-k6rt6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad3ac9830f1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:54:10.365397 containerd[1476]: 2025-05-14 23:54:10.177 [INFO][4801] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="4d7413f9ae84150dc17926770f5fe1083e68cd7a5bdfa2c09d222564bf29ab4d" Namespace="kube-system" Pod="coredns-668d6bf9bc-k6rt6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k6rt6-eth0" May 14 23:54:10.365397 containerd[1476]: 2025-05-14 23:54:10.177 [INFO][4801] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad3ac9830f1 ContainerID="4d7413f9ae84150dc17926770f5fe1083e68cd7a5bdfa2c09d222564bf29ab4d" Namespace="kube-system" Pod="coredns-668d6bf9bc-k6rt6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k6rt6-eth0" May 14 23:54:10.365397 containerd[1476]: 2025-05-14 23:54:10.184 [INFO][4801] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d7413f9ae84150dc17926770f5fe1083e68cd7a5bdfa2c09d222564bf29ab4d" Namespace="kube-system" Pod="coredns-668d6bf9bc-k6rt6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k6rt6-eth0" May 14 23:54:10.365397 containerd[1476]: 2025-05-14 23:54:10.188 
[INFO][4801] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4d7413f9ae84150dc17926770f5fe1083e68cd7a5bdfa2c09d222564bf29ab4d" Namespace="kube-system" Pod="coredns-668d6bf9bc-k6rt6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k6rt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--k6rt6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b", ResourceVersion:"674", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 53, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4d7413f9ae84150dc17926770f5fe1083e68cd7a5bdfa2c09d222564bf29ab4d", Pod:"coredns-668d6bf9bc-k6rt6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad3ac9830f1", MAC:"12:64:eb:7d:49:10", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:54:10.365397 containerd[1476]: 2025-05-14 23:54:10.357 [INFO][4801] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4d7413f9ae84150dc17926770f5fe1083e68cd7a5bdfa2c09d222564bf29ab4d" Namespace="kube-system" Pod="coredns-668d6bf9bc-k6rt6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k6rt6-eth0" May 14 23:54:10.555045 containerd[1476]: time="2025-05-14T23:54:10.554967809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564c88fc57-zsf99,Uid:274c1c1a-50ff-4e53-bdf7-547b26e013ec,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"540302443e16946b9c4fabfd6fc9012a9cb47e2f42d5ff37b9f2917ad14028eb\"" May 14 23:54:10.555227 containerd[1476]: time="2025-05-14T23:54:10.554992616Z" level=info msg="StartContainer for \"1178cfb2bc8fc58e93f5c9d4ff530d822837b9a1866a0b8325776b1dff3883b6\" returns successfully" May 14 23:54:10.581190 systemd-networkd[1412]: caliadc5f2acc17: Link UP May 14 23:54:10.581514 systemd-networkd[1412]: caliadc5f2acc17: Gained carrier May 14 23:54:10.623549 containerd[1476]: 2025-05-14 23:54:07.348 [INFO][4815] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6997bdb66f--xr6kr-eth0 calico-kube-controllers-6997bdb66f- calico-system 0bd041e0-42d3-43db-a483-12474ebbedc9 684 0 2025-05-14 23:53:39 +0000 UTC 
map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6997bdb66f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6997bdb66f-xr6kr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliadc5f2acc17 [] []}} ContainerID="85a424c4d64340a0ea2ec6917a95fbc9ff22a9d6e212e1831f3bacf2ff94b2cd" Namespace="calico-system" Pod="calico-kube-controllers-6997bdb66f-xr6kr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6997bdb66f--xr6kr-" May 14 23:54:10.623549 containerd[1476]: 2025-05-14 23:54:07.349 [INFO][4815] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="85a424c4d64340a0ea2ec6917a95fbc9ff22a9d6e212e1831f3bacf2ff94b2cd" Namespace="calico-system" Pod="calico-kube-controllers-6997bdb66f-xr6kr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6997bdb66f--xr6kr-eth0" May 14 23:54:10.623549 containerd[1476]: 2025-05-14 23:54:07.469 [INFO][4837] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="85a424c4d64340a0ea2ec6917a95fbc9ff22a9d6e212e1831f3bacf2ff94b2cd" HandleID="k8s-pod-network.85a424c4d64340a0ea2ec6917a95fbc9ff22a9d6e212e1831f3bacf2ff94b2cd" Workload="localhost-k8s-calico--kube--controllers--6997bdb66f--xr6kr-eth0" May 14 23:54:10.623549 containerd[1476]: 2025-05-14 23:54:08.106 [INFO][4837] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="85a424c4d64340a0ea2ec6917a95fbc9ff22a9d6e212e1831f3bacf2ff94b2cd" HandleID="k8s-pod-network.85a424c4d64340a0ea2ec6917a95fbc9ff22a9d6e212e1831f3bacf2ff94b2cd" Workload="localhost-k8s-calico--kube--controllers--6997bdb66f--xr6kr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027f840), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6997bdb66f-xr6kr", "timestamp":"2025-05-14 23:54:07.469454388 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 23:54:10.623549 containerd[1476]: 2025-05-14 23:54:08.106 [INFO][4837] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 23:54:10.623549 containerd[1476]: 2025-05-14 23:54:10.171 [INFO][4837] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 23:54:10.623549 containerd[1476]: 2025-05-14 23:54:10.172 [INFO][4837] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 23:54:10.623549 containerd[1476]: 2025-05-14 23:54:10.175 [INFO][4837] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.85a424c4d64340a0ea2ec6917a95fbc9ff22a9d6e212e1831f3bacf2ff94b2cd" host="localhost" May 14 23:54:10.623549 containerd[1476]: 2025-05-14 23:54:10.188 [INFO][4837] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 23:54:10.623549 containerd[1476]: 2025-05-14 23:54:10.352 [INFO][4837] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 23:54:10.623549 containerd[1476]: 2025-05-14 23:54:10.354 [INFO][4837] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 23:54:10.623549 containerd[1476]: 2025-05-14 23:54:10.357 [INFO][4837] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 23:54:10.623549 containerd[1476]: 2025-05-14 23:54:10.358 [INFO][4837] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.85a424c4d64340a0ea2ec6917a95fbc9ff22a9d6e212e1831f3bacf2ff94b2cd" host="localhost" May 14 23:54:10.623549 containerd[1476]: 2025-05-14 23:54:10.363 [INFO][4837] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.85a424c4d64340a0ea2ec6917a95fbc9ff22a9d6e212e1831f3bacf2ff94b2cd May 14 23:54:10.623549 containerd[1476]: 2025-05-14 23:54:10.399 [INFO][4837] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.85a424c4d64340a0ea2ec6917a95fbc9ff22a9d6e212e1831f3bacf2ff94b2cd" host="localhost" May 14 23:54:10.623549 containerd[1476]: 2025-05-14 23:54:10.574 [INFO][4837] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.85a424c4d64340a0ea2ec6917a95fbc9ff22a9d6e212e1831f3bacf2ff94b2cd" host="localhost" May 14 23:54:10.623549 containerd[1476]: 2025-05-14 23:54:10.574 [INFO][4837] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.85a424c4d64340a0ea2ec6917a95fbc9ff22a9d6e212e1831f3bacf2ff94b2cd" host="localhost" May 14 23:54:10.623549 containerd[1476]: 2025-05-14 23:54:10.574 [INFO][4837] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
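Note the serialization visible across the two IPAM requests: handler [4837] logged "About to acquire host-wide IPAM lock" at 23:54:08.106 but only acquired it at 23:54:10.171, right as handler [4831] released it after claiming .133; [4837] then claims the next free address, .134, from the same block. A small, illustrative sketch of the wait implied by those two timestamps:

    from datetime import datetime

    # Calico's own timestamps for the [4837] request, as printed in the log.
    fmt = "%Y-%m-%d %H:%M:%S.%f"
    requested = datetime.strptime("2025-05-14 23:54:08.106", fmt)
    acquired  = datetime.strptime("2025-05-14 23:54:10.171", fmt)

    print((acquired - requested).total_seconds())  # ~2.065 s spent waiting on the host-wide lock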
May 14 23:54:10.623549 containerd[1476]: 2025-05-14 23:54:10.574 [INFO][4837] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="85a424c4d64340a0ea2ec6917a95fbc9ff22a9d6e212e1831f3bacf2ff94b2cd" HandleID="k8s-pod-network.85a424c4d64340a0ea2ec6917a95fbc9ff22a9d6e212e1831f3bacf2ff94b2cd" Workload="localhost-k8s-calico--kube--controllers--6997bdb66f--xr6kr-eth0" May 14 23:54:10.626495 containerd[1476]: 2025-05-14 23:54:10.578 [INFO][4815] cni-plugin/k8s.go 386: Populated endpoint ContainerID="85a424c4d64340a0ea2ec6917a95fbc9ff22a9d6e212e1831f3bacf2ff94b2cd" Namespace="calico-system" Pod="calico-kube-controllers-6997bdb66f-xr6kr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6997bdb66f--xr6kr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6997bdb66f--xr6kr-eth0", GenerateName:"calico-kube-controllers-6997bdb66f-", Namespace:"calico-system", SelfLink:"", UID:"0bd041e0-42d3-43db-a483-12474ebbedc9", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 53, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6997bdb66f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6997bdb66f-xr6kr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliadc5f2acc17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:54:10.626495 containerd[1476]: 2025-05-14 23:54:10.578 [INFO][4815] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="85a424c4d64340a0ea2ec6917a95fbc9ff22a9d6e212e1831f3bacf2ff94b2cd" Namespace="calico-system" Pod="calico-kube-controllers-6997bdb66f-xr6kr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6997bdb66f--xr6kr-eth0" May 14 23:54:10.626495 containerd[1476]: 2025-05-14 23:54:10.578 [INFO][4815] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliadc5f2acc17 ContainerID="85a424c4d64340a0ea2ec6917a95fbc9ff22a9d6e212e1831f3bacf2ff94b2cd" Namespace="calico-system" Pod="calico-kube-controllers-6997bdb66f-xr6kr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6997bdb66f--xr6kr-eth0" May 14 23:54:10.626495 containerd[1476]: 2025-05-14 23:54:10.581 [INFO][4815] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="85a424c4d64340a0ea2ec6917a95fbc9ff22a9d6e212e1831f3bacf2ff94b2cd" Namespace="calico-system" Pod="calico-kube-controllers-6997bdb66f-xr6kr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6997bdb66f--xr6kr-eth0" May 14 23:54:10.626495 containerd[1476]: 2025-05-14 23:54:10.581 [INFO][4815] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID 
to endpoint ContainerID="85a424c4d64340a0ea2ec6917a95fbc9ff22a9d6e212e1831f3bacf2ff94b2cd" Namespace="calico-system" Pod="calico-kube-controllers-6997bdb66f-xr6kr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6997bdb66f--xr6kr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6997bdb66f--xr6kr-eth0", GenerateName:"calico-kube-controllers-6997bdb66f-", Namespace:"calico-system", SelfLink:"", UID:"0bd041e0-42d3-43db-a483-12474ebbedc9", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 53, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6997bdb66f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"85a424c4d64340a0ea2ec6917a95fbc9ff22a9d6e212e1831f3bacf2ff94b2cd", Pod:"calico-kube-controllers-6997bdb66f-xr6kr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliadc5f2acc17", MAC:"6a:f7:93:38:af:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:54:10.626495 containerd[1476]: 2025-05-14 23:54:10.618 [INFO][4815] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="85a424c4d64340a0ea2ec6917a95fbc9ff22a9d6e212e1831f3bacf2ff94b2cd" Namespace="calico-system" Pod="calico-kube-controllers-6997bdb66f-xr6kr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6997bdb66f--xr6kr-eth0" May 14 23:54:10.640200 containerd[1476]: time="2025-05-14T23:54:10.639769038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:54:10.640200 containerd[1476]: time="2025-05-14T23:54:10.640154556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:54:10.640200 containerd[1476]: time="2025-05-14T23:54:10.640177921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:10.641152 containerd[1476]: time="2025-05-14T23:54:10.640316987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:10.672611 systemd[1]: Started cri-containerd-4d7413f9ae84150dc17926770f5fe1083e68cd7a5bdfa2c09d222564bf29ab4d.scope - libcontainer container 4d7413f9ae84150dc17926770f5fe1083e68cd7a5bdfa2c09d222564bf29ab4d. May 14 23:54:10.676891 containerd[1476]: time="2025-05-14T23:54:10.676271382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:54:10.676891 containerd[1476]: time="2025-05-14T23:54:10.676354420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:54:10.676891 containerd[1476]: time="2025-05-14T23:54:10.676372956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:10.676891 containerd[1476]: time="2025-05-14T23:54:10.676502694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:10.685196 systemd-networkd[1412]: califb06a3600c1: Gained IPv6LL May 14 23:54:10.691279 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 23:54:10.708567 systemd[1]: Started cri-containerd-85a424c4d64340a0ea2ec6917a95fbc9ff22a9d6e212e1831f3bacf2ff94b2cd.scope - libcontainer container 85a424c4d64340a0ea2ec6917a95fbc9ff22a9d6e212e1831f3bacf2ff94b2cd. May 14 23:54:10.731125 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 23:54:10.731402 containerd[1476]: time="2025-05-14T23:54:10.731134919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k6rt6,Uid:ccaedbdf-74a7-4eb4-b5a0-f8e0530aad2b,Namespace:kube-system,Attempt:5,} returns sandbox id \"4d7413f9ae84150dc17926770f5fe1083e68cd7a5bdfa2c09d222564bf29ab4d\"" May 14 23:54:10.736000 containerd[1476]: time="2025-05-14T23:54:10.735943267Z" level=info msg="CreateContainer within sandbox \"4d7413f9ae84150dc17926770f5fe1083e68cd7a5bdfa2c09d222564bf29ab4d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 23:54:10.767589 containerd[1476]: time="2025-05-14T23:54:10.767542250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6997bdb66f-xr6kr,Uid:0bd041e0-42d3-43db-a483-12474ebbedc9,Namespace:calico-system,Attempt:4,} returns sandbox id \"85a424c4d64340a0ea2ec6917a95fbc9ff22a9d6e212e1831f3bacf2ff94b2cd\"" May 14 23:54:10.877624 systemd-networkd[1412]: cali7a59f17bd99: Gained IPv6LL May 14 23:54:11.260738 systemd-networkd[1412]: caliad3ac9830f1: Gained IPv6LL May 14 23:54:11.603751 containerd[1476]: time="2025-05-14T23:54:11.603566199Z" level=info msg="CreateContainer within sandbox \"4d7413f9ae84150dc17926770f5fe1083e68cd7a5bdfa2c09d222564bf29ab4d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e326a00175b01ee16bdc83323b10efd217c163c87876c93884fb901f3eb4c5b6\"" May 14 23:54:11.604328 containerd[1476]: time="2025-05-14T23:54:11.604281287Z" level=info msg="StartContainer for \"e326a00175b01ee16bdc83323b10efd217c163c87876c93884fb901f3eb4c5b6\"" May 14 23:54:11.646611 systemd[1]: Started cri-containerd-e326a00175b01ee16bdc83323b10efd217c163c87876c93884fb901f3eb4c5b6.scope - libcontainer container e326a00175b01ee16bdc83323b10efd217c163c87876c93884fb901f3eb4c5b6. 
May 14 23:54:11.956499 containerd[1476]: time="2025-05-14T23:54:11.956447860Z" level=info msg="StartContainer for \"e326a00175b01ee16bdc83323b10efd217c163c87876c93884fb901f3eb4c5b6\" returns successfully" May 14 23:54:12.033580 kubelet[2635]: I0514 23:54:12.033508 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mcmz8" podStartSLOduration=39.033484816 podStartE2EDuration="39.033484816s" podCreationTimestamp="2025-05-14 23:53:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:54:10.970087475 +0000 UTC m=+42.749318010" watchObservedRunningTime="2025-05-14 23:54:12.033484816 +0000 UTC m=+43.812715352" May 14 23:54:12.348680 systemd-networkd[1412]: caliadc5f2acc17: Gained IPv6LL May 14 23:54:12.936666 systemd[1]: Started sshd@10-10.0.0.25:22-10.0.0.1:54928.service - OpenSSH per-connection server daemon (10.0.0.1:54928). May 14 23:54:13.009409 sshd[5232]: Accepted publickey for core from 10.0.0.1 port 54928 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 14 23:54:13.011864 sshd-session[5232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:13.020491 systemd-logind[1460]: New session 11 of user core. May 14 23:54:13.026584 systemd[1]: Started session-11.scope - Session 11 of User core. May 14 23:54:13.240888 kubelet[2635]: I0514 23:54:13.240741 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-k6rt6" podStartSLOduration=40.24071531 podStartE2EDuration="40.24071531s" podCreationTimestamp="2025-05-14 23:53:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:54:12.991536149 +0000 UTC m=+44.770766684" watchObservedRunningTime="2025-05-14 23:54:13.24071531 +0000 UTC m=+45.019945845" May 14 23:54:13.250810 sshd[5236]: Connection closed by 10.0.0.1 port 54928 May 14 23:54:13.251687 sshd-session[5232]: pam_unix(sshd:session): session closed for user core May 14 23:54:13.256711 systemd[1]: sshd@10-10.0.0.25:22-10.0.0.1:54928.service: Deactivated successfully. May 14 23:54:13.259190 systemd[1]: session-11.scope: Deactivated successfully. May 14 23:54:13.260055 systemd-logind[1460]: Session 11 logged out. Waiting for processes to exit. May 14 23:54:13.261549 systemd-logind[1460]: Removed session 11. 
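The kubelet pod_startup_latency_tracker lines can be reproduced from their own fields: for coredns-668d6bf9bc-k6rt6 the reported 40.24 s matches watchObservedRunningTime minus podCreationTimestamp, and because firstStartedPulling/lastFinishedPulling are the zero time here (no image pull attributed to the pod), podStartSLOduration and podStartE2EDuration coincide. A minimal check of that arithmetic (timestamps truncated to microseconds so strptime can parse them):

    from datetime import datetime, timezone

    created = datetime(2025, 5, 14, 23, 53, 33, tzinfo=timezone.utc)
    running = datetime.strptime(
        "2025-05-14 23:54:13.240715", "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

    print((running - created).total_seconds())  # ~40.240715, matching podStartSLOduration=40.24071531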
May 14 23:54:15.646765 containerd[1476]: time="2025-05-14T23:54:15.646672239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:15.648734 containerd[1476]: time="2025-05-14T23:54:15.648650991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 14 23:54:15.654498 containerd[1476]: time="2025-05-14T23:54:15.654450233Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:15.659433 containerd[1476]: time="2025-05-14T23:54:15.659367951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:15.660526 containerd[1476]: time="2025-05-14T23:54:15.660490836Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 9.19527535s" May 14 23:54:15.660600 containerd[1476]: time="2025-05-14T23:54:15.660525052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 14 23:54:15.676359 containerd[1476]: time="2025-05-14T23:54:15.676297924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 14 23:54:15.677497 containerd[1476]: time="2025-05-14T23:54:15.677313705Z" level=info msg="CreateContainer within sandbox \"8959dc395cce929be2480e5b4f58d73522c1416f7bad1c936ef7efc5492f6f07\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 23:54:15.727827 containerd[1476]: time="2025-05-14T23:54:15.727750979Z" level=info msg="CreateContainer within sandbox \"8959dc395cce929be2480e5b4f58d73522c1416f7bad1c936ef7efc5492f6f07\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"96522cbffc8cfd5b8f682adb8ca1c6d5252bbc5acdb2adeb0206c4bab7e907cc\"" May 14 23:54:15.730132 containerd[1476]: time="2025-05-14T23:54:15.728554545Z" level=info msg="StartContainer for \"96522cbffc8cfd5b8f682adb8ca1c6d5252bbc5acdb2adeb0206c4bab7e907cc\"" May 14 23:54:15.769697 systemd[1]: Started cri-containerd-96522cbffc8cfd5b8f682adb8ca1c6d5252bbc5acdb2adeb0206c4bab7e907cc.scope - libcontainer container 96522cbffc8cfd5b8f682adb8ca1c6d5252bbc5acdb2adeb0206c4bab7e907cc. 
May 14 23:54:15.827999 containerd[1476]: time="2025-05-14T23:54:15.827921904Z" level=info msg="StartContainer for \"96522cbffc8cfd5b8f682adb8ca1c6d5252bbc5acdb2adeb0206c4bab7e907cc\" returns successfully" May 14 23:54:16.061409 kubelet[2635]: I0514 23:54:16.060415 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-564c88fc57-7zxh5" podStartSLOduration=27.847861788 podStartE2EDuration="37.060390002s" podCreationTimestamp="2025-05-14 23:53:39 +0000 UTC" firstStartedPulling="2025-05-14 23:54:06.463522367 +0000 UTC m=+38.242752902" lastFinishedPulling="2025-05-14 23:54:15.676050581 +0000 UTC m=+47.455281116" observedRunningTime="2025-05-14 23:54:16.060019404 +0000 UTC m=+47.839249939" watchObservedRunningTime="2025-05-14 23:54:16.060390002 +0000 UTC m=+47.839620537" May 14 23:54:16.983177 kubelet[2635]: I0514 23:54:16.983132 2635 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 23:54:18.107139 containerd[1476]: time="2025-05-14T23:54:18.107065178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:18.122940 containerd[1476]: time="2025-05-14T23:54:18.122852002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 14 23:54:18.125656 containerd[1476]: time="2025-05-14T23:54:18.125627189Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:18.133166 containerd[1476]: time="2025-05-14T23:54:18.133100896Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:18.133827 containerd[1476]: time="2025-05-14T23:54:18.133773730Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 2.457428607s" May 14 23:54:18.133827 containerd[1476]: time="2025-05-14T23:54:18.133817403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 14 23:54:18.134808 containerd[1476]: time="2025-05-14T23:54:18.134777526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 23:54:18.137282 containerd[1476]: time="2025-05-14T23:54:18.137246950Z" level=info msg="CreateContainer within sandbox \"367a5a31dfa10fe70eb08458cfa69ed8432351c28f195591507e1bba0d326c15\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 14 23:54:18.169285 containerd[1476]: time="2025-05-14T23:54:18.169230130Z" level=info msg="CreateContainer within sandbox \"367a5a31dfa10fe70eb08458cfa69ed8432351c28f195591507e1bba0d326c15\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2cdd06b5c64ce0ea1e1e4cbfad63ff58045b7f7ce75d73a8b753ec6e1df59fc4\"" May 14 23:54:18.169967 containerd[1476]: time="2025-05-14T23:54:18.169932871Z" level=info msg="StartContainer for \"2cdd06b5c64ce0ea1e1e4cbfad63ff58045b7f7ce75d73a8b753ec6e1df59fc4\"" May 14 23:54:18.208563 
systemd[1]: Started cri-containerd-2cdd06b5c64ce0ea1e1e4cbfad63ff58045b7f7ce75d73a8b753ec6e1df59fc4.scope - libcontainer container 2cdd06b5c64ce0ea1e1e4cbfad63ff58045b7f7ce75d73a8b753ec6e1df59fc4. May 14 23:54:18.256004 containerd[1476]: time="2025-05-14T23:54:18.255766674Z" level=info msg="StartContainer for \"2cdd06b5c64ce0ea1e1e4cbfad63ff58045b7f7ce75d73a8b753ec6e1df59fc4\" returns successfully" May 14 23:54:18.264888 systemd[1]: Started sshd@11-10.0.0.25:22-10.0.0.1:44830.service - OpenSSH per-connection server daemon (10.0.0.1:44830). May 14 23:54:18.323244 sshd[5346]: Accepted publickey for core from 10.0.0.1 port 44830 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 14 23:54:18.325226 sshd-session[5346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:18.330079 systemd-logind[1460]: New session 12 of user core. May 14 23:54:18.343702 systemd[1]: Started session-12.scope - Session 12 of User core. May 14 23:54:18.481034 sshd[5348]: Connection closed by 10.0.0.1 port 44830 May 14 23:54:18.481523 sshd-session[5346]: pam_unix(sshd:session): session closed for user core May 14 23:54:18.493673 systemd[1]: sshd@11-10.0.0.25:22-10.0.0.1:44830.service: Deactivated successfully. May 14 23:54:18.495911 systemd[1]: session-12.scope: Deactivated successfully. May 14 23:54:18.498216 systemd-logind[1460]: Session 12 logged out. Waiting for processes to exit. May 14 23:54:18.504861 systemd[1]: Started sshd@12-10.0.0.25:22-10.0.0.1:44842.service - OpenSSH per-connection server daemon (10.0.0.1:44842). May 14 23:54:18.507862 systemd-logind[1460]: Removed session 12. May 14 23:54:18.545495 sshd[5362]: Accepted publickey for core from 10.0.0.1 port 44842 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 14 23:54:18.547314 sshd-session[5362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:18.551784 systemd-logind[1460]: New session 13 of user core. May 14 23:54:18.561603 systemd[1]: Started session-13.scope - Session 13 of User core. 
May 14 23:54:18.620982 containerd[1476]: time="2025-05-14T23:54:18.620913907Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:18.628209 containerd[1476]: time="2025-05-14T23:54:18.628079094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 14 23:54:18.630071 containerd[1476]: time="2025-05-14T23:54:18.629995513Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 495.185675ms" May 14 23:54:18.630071 containerd[1476]: time="2025-05-14T23:54:18.630035890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 14 23:54:18.631446 containerd[1476]: time="2025-05-14T23:54:18.631307557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 14 23:54:18.633949 containerd[1476]: time="2025-05-14T23:54:18.633885909Z" level=info msg="CreateContainer within sandbox \"540302443e16946b9c4fabfd6fc9012a9cb47e2f42d5ff37b9f2917ad14028eb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 23:54:18.658437 containerd[1476]: time="2025-05-14T23:54:18.658349445Z" level=info msg="CreateContainer within sandbox \"540302443e16946b9c4fabfd6fc9012a9cb47e2f42d5ff37b9f2917ad14028eb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2ec189a9425a2b4e12b29d1c7ef2a060c7e6d09a449bb76acc85e6533ae514a7\"" May 14 23:54:18.659138 containerd[1476]: time="2025-05-14T23:54:18.659106209Z" level=info msg="StartContainer for \"2ec189a9425a2b4e12b29d1c7ef2a060c7e6d09a449bb76acc85e6533ae514a7\"" May 14 23:54:18.692593 systemd[1]: Started cri-containerd-2ec189a9425a2b4e12b29d1c7ef2a060c7e6d09a449bb76acc85e6533ae514a7.scope - libcontainer container 2ec189a9425a2b4e12b29d1c7ef2a060c7e6d09a449bb76acc85e6533ae514a7. May 14 23:54:18.809763 containerd[1476]: time="2025-05-14T23:54:18.809638070Z" level=info msg="StartContainer for \"2ec189a9425a2b4e12b29d1c7ef2a060c7e6d09a449bb76acc85e6533ae514a7\" returns successfully" May 14 23:54:18.862889 sshd[5367]: Connection closed by 10.0.0.1 port 44842 May 14 23:54:18.863821 sshd-session[5362]: pam_unix(sshd:session): session closed for user core May 14 23:54:18.883349 systemd[1]: Started sshd@13-10.0.0.25:22-10.0.0.1:44846.service - OpenSSH per-connection server daemon (10.0.0.1:44846). May 14 23:54:18.884799 systemd[1]: sshd@12-10.0.0.25:22-10.0.0.1:44842.service: Deactivated successfully. May 14 23:54:18.892306 systemd[1]: session-13.scope: Deactivated successfully. May 14 23:54:18.893984 systemd-logind[1460]: Session 13 logged out. Waiting for processes to exit. May 14 23:54:18.895635 systemd-logind[1460]: Removed session 13. May 14 23:54:18.943456 sshd[5415]: Accepted publickey for core from 10.0.0.1 port 44846 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 14 23:54:18.945235 sshd-session[5415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:18.954657 systemd-logind[1460]: New session 14 of user core. 
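The second pull of ghcr.io/flatcar/calico/apiserver:v3.29.3 returns in ~495 ms after reading only 77 bytes, versus ~9.2 s and ~43 MB for the first pull logged at 23:54:15; the ImageUpdate event suggests the content was already in containerd's store, so only the manifest had to be re-checked. A rough throughput figure for the initial pull, assuming "bytes read" approximates the data actually transferred (illustrative arithmetic only):

    # Numbers taken from the earlier apiserver pull log lines.
    first_bytes = 43_021_437          # bytes read for the initial pull
    first_secs  = 9.19527535          # reported pull duration in seconds

    print(first_bytes / first_secs / 1e6)   # ~4.7 MB/s effective transfer rate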
May 14 23:54:18.965700 systemd[1]: Started session-14.scope - Session 14 of User core. May 14 23:54:19.247182 kubelet[2635]: I0514 23:54:19.246836 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-564c88fc57-zsf99" podStartSLOduration=32.172361101999996 podStartE2EDuration="40.246811563s" podCreationTimestamp="2025-05-14 23:53:39 +0000 UTC" firstStartedPulling="2025-05-14 23:54:10.556634321 +0000 UTC m=+42.335864856" lastFinishedPulling="2025-05-14 23:54:18.631084762 +0000 UTC m=+50.410315317" observedRunningTime="2025-05-14 23:54:19.245953627 +0000 UTC m=+51.025184162" watchObservedRunningTime="2025-05-14 23:54:19.246811563 +0000 UTC m=+51.026042098" May 14 23:54:19.257186 sshd[5420]: Connection closed by 10.0.0.1 port 44846 May 14 23:54:19.257718 sshd-session[5415]: pam_unix(sshd:session): session closed for user core May 14 23:54:19.262815 systemd[1]: sshd@13-10.0.0.25:22-10.0.0.1:44846.service: Deactivated successfully. May 14 23:54:19.266963 systemd[1]: session-14.scope: Deactivated successfully. May 14 23:54:19.268247 systemd-logind[1460]: Session 14 logged out. Waiting for processes to exit. May 14 23:54:19.269731 systemd-logind[1460]: Removed session 14. May 14 23:54:20.504052 containerd[1476]: time="2025-05-14T23:54:20.503968358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:20.533396 containerd[1476]: time="2025-05-14T23:54:20.533284790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 14 23:54:20.558100 containerd[1476]: time="2025-05-14T23:54:20.558002619Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:20.569016 containerd[1476]: time="2025-05-14T23:54:20.568920604Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:20.569967 containerd[1476]: time="2025-05-14T23:54:20.569912867Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 1.938567559s" May 14 23:54:20.570063 containerd[1476]: time="2025-05-14T23:54:20.569973333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 14 23:54:20.571272 containerd[1476]: time="2025-05-14T23:54:20.571248255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 14 23:54:20.584275 containerd[1476]: time="2025-05-14T23:54:20.584220700Z" level=info msg="CreateContainer within sandbox \"85a424c4d64340a0ea2ec6917a95fbc9ff22a9d6e212e1831f3bacf2ff94b2cd\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 14 23:54:20.671378 containerd[1476]: time="2025-05-14T23:54:20.671291938Z" level=info msg="CreateContainer within sandbox 
\"85a424c4d64340a0ea2ec6917a95fbc9ff22a9d6e212e1831f3bacf2ff94b2cd\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"099df836a3ff2566b361d43c20bf9149f418f115e201fb44f86f5ddeed8841fd\"" May 14 23:54:20.673244 containerd[1476]: time="2025-05-14T23:54:20.671933763Z" level=info msg="StartContainer for \"099df836a3ff2566b361d43c20bf9149f418f115e201fb44f86f5ddeed8841fd\"" May 14 23:54:20.702627 systemd[1]: Started cri-containerd-099df836a3ff2566b361d43c20bf9149f418f115e201fb44f86f5ddeed8841fd.scope - libcontainer container 099df836a3ff2566b361d43c20bf9149f418f115e201fb44f86f5ddeed8841fd. May 14 23:54:21.635058 containerd[1476]: time="2025-05-14T23:54:21.634971592Z" level=info msg="StartContainer for \"099df836a3ff2566b361d43c20bf9149f418f115e201fb44f86f5ddeed8841fd\" returns successfully" May 14 23:54:22.768654 kubelet[2635]: I0514 23:54:22.768541 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6997bdb66f-xr6kr" podStartSLOduration=33.966701594 podStartE2EDuration="43.768513934s" podCreationTimestamp="2025-05-14 23:53:39 +0000 UTC" firstStartedPulling="2025-05-14 23:54:10.769247075 +0000 UTC m=+42.548477610" lastFinishedPulling="2025-05-14 23:54:20.571059415 +0000 UTC m=+52.350289950" observedRunningTime="2025-05-14 23:54:22.766776953 +0000 UTC m=+54.546007488" watchObservedRunningTime="2025-05-14 23:54:22.768513934 +0000 UTC m=+54.547744469" May 14 23:54:24.271062 systemd[1]: Started sshd@14-10.0.0.25:22-10.0.0.1:34618.service - OpenSSH per-connection server daemon (10.0.0.1:34618). May 14 23:54:24.793141 sshd[5504]: Accepted publickey for core from 10.0.0.1 port 34618 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 14 23:54:24.794982 sshd-session[5504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:24.802309 systemd-logind[1460]: New session 15 of user core. May 14 23:54:24.809097 systemd[1]: Started session-15.scope - Session 15 of User core. May 14 23:54:25.036716 sshd[5507]: Connection closed by 10.0.0.1 port 34618 May 14 23:54:25.039980 sshd-session[5504]: pam_unix(sshd:session): session closed for user core May 14 23:54:25.047382 systemd-logind[1460]: Session 15 logged out. Waiting for processes to exit. May 14 23:54:25.047579 systemd[1]: sshd@14-10.0.0.25:22-10.0.0.1:34618.service: Deactivated successfully. May 14 23:54:25.051003 systemd[1]: session-15.scope: Deactivated successfully. May 14 23:54:25.053326 systemd-logind[1460]: Removed session 15. 
May 14 23:54:25.543968 containerd[1476]: time="2025-05-14T23:54:25.543867593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:25.592100 containerd[1476]: time="2025-05-14T23:54:25.591938596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 14 23:54:25.651982 containerd[1476]: time="2025-05-14T23:54:25.651915705Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:25.679933 containerd[1476]: time="2025-05-14T23:54:25.679867139Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:25.680917 containerd[1476]: time="2025-05-14T23:54:25.680880469Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 5.109594623s" May 14 23:54:25.680917 containerd[1476]: time="2025-05-14T23:54:25.680922038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 14 23:54:25.683340 containerd[1476]: time="2025-05-14T23:54:25.683293394Z" level=info msg="CreateContainer within sandbox \"367a5a31dfa10fe70eb08458cfa69ed8432351c28f195591507e1bba0d326c15\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 14 23:54:25.767753 containerd[1476]: time="2025-05-14T23:54:25.767695471Z" level=info msg="CreateContainer within sandbox \"367a5a31dfa10fe70eb08458cfa69ed8432351c28f195591507e1bba0d326c15\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c99a128cabf20e82bef68e06609c2d7c0f7f90373841450013d771c6402fac51\"" May 14 23:54:25.768753 containerd[1476]: time="2025-05-14T23:54:25.768436303Z" level=info msg="StartContainer for \"c99a128cabf20e82bef68e06609c2d7c0f7f90373841450013d771c6402fac51\"" May 14 23:54:25.817704 systemd[1]: Started cri-containerd-c99a128cabf20e82bef68e06609c2d7c0f7f90373841450013d771c6402fac51.scope - libcontainer container c99a128cabf20e82bef68e06609c2d7c0f7f90373841450013d771c6402fac51. 
May 14 23:54:25.943906 containerd[1476]: time="2025-05-14T23:54:25.943831279Z" level=info msg="StartContainer for \"c99a128cabf20e82bef68e06609c2d7c0f7f90373841450013d771c6402fac51\" returns successfully" May 14 23:54:26.425475 kubelet[2635]: I0514 23:54:26.425413 2635 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 14 23:54:26.425475 kubelet[2635]: I0514 23:54:26.425476 2635 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 14 23:54:27.026195 kubelet[2635]: I0514 23:54:27.026115 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zx5hz" podStartSLOduration=31.930233213 podStartE2EDuration="48.026094506s" podCreationTimestamp="2025-05-14 23:53:39 +0000 UTC" firstStartedPulling="2025-05-14 23:54:09.586079075 +0000 UTC m=+41.365309600" lastFinishedPulling="2025-05-14 23:54:25.681940368 +0000 UTC m=+57.461170893" observedRunningTime="2025-05-14 23:54:27.025184163 +0000 UTC m=+58.804414708" watchObservedRunningTime="2025-05-14 23:54:27.026094506 +0000 UTC m=+58.805325051" May 14 23:54:28.319266 containerd[1476]: time="2025-05-14T23:54:28.319025813Z" level=info msg="StopPodSandbox for \"7fc43a55ee56babf2baaa12a3b6ad5441c0befe576446eea7db0237519c01722\"" May 14 23:54:28.319266 containerd[1476]: time="2025-05-14T23:54:28.319187632Z" level=info msg="TearDown network for sandbox \"7fc43a55ee56babf2baaa12a3b6ad5441c0befe576446eea7db0237519c01722\" successfully" May 14 23:54:28.319266 containerd[1476]: time="2025-05-14T23:54:28.319202740Z" level=info msg="StopPodSandbox for \"7fc43a55ee56babf2baaa12a3b6ad5441c0befe576446eea7db0237519c01722\" returns successfully" May 14 23:54:28.325969 containerd[1476]: time="2025-05-14T23:54:28.325814332Z" level=info msg="RemovePodSandbox for \"7fc43a55ee56babf2baaa12a3b6ad5441c0befe576446eea7db0237519c01722\"" May 14 23:54:28.338453 containerd[1476]: time="2025-05-14T23:54:28.338376297Z" level=info msg="Forcibly stopping sandbox \"7fc43a55ee56babf2baaa12a3b6ad5441c0befe576446eea7db0237519c01722\"" May 14 23:54:28.338639 containerd[1476]: time="2025-05-14T23:54:28.338573542Z" level=info msg="TearDown network for sandbox \"7fc43a55ee56babf2baaa12a3b6ad5441c0befe576446eea7db0237519c01722\" successfully" May 14 23:54:28.609160 containerd[1476]: time="2025-05-14T23:54:28.609007311Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7fc43a55ee56babf2baaa12a3b6ad5441c0befe576446eea7db0237519c01722\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
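For csi-node-driver-zx5hz the tracker line includes real pull timestamps, and the reported podStartSLOduration is consistent with the end-to-end duration minus the image-pull window: 48.026094506 s minus (23:54:25.681940368 − 23:54:09.586079075) gives back 31.930233213 s. A one-line check of that relationship, using the fractional-second values from the log (both pull timestamps fall within minute 23:54, so their seconds fields can be subtracted directly):

    # Seconds-within-minute values copied from the kubelet line above.
    e2e  = 48.026094506                       # watchObservedRunningTime - podCreationTimestamp
    pull = 25.681940368 - 9.586079075         # lastFinishedPulling - firstStartedPulling

    print(e2e - pull)                         # ~31.930233213, the reported podStartSLOduration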
May 14 23:54:28.609160 containerd[1476]: time="2025-05-14T23:54:28.609095539Z" level=info msg="RemovePodSandbox \"7fc43a55ee56babf2baaa12a3b6ad5441c0befe576446eea7db0237519c01722\" returns successfully" May 14 23:54:28.609783 containerd[1476]: time="2025-05-14T23:54:28.609754323Z" level=info msg="StopPodSandbox for \"3c3adac1404d312ded0a39e3b9174a60b3e9628ed4edfb2a16682f7e5b6a0b11\"" May 14 23:54:28.609887 containerd[1476]: time="2025-05-14T23:54:28.609867979Z" level=info msg="TearDown network for sandbox \"3c3adac1404d312ded0a39e3b9174a60b3e9628ed4edfb2a16682f7e5b6a0b11\" successfully" May 14 23:54:28.609921 containerd[1476]: time="2025-05-14T23:54:28.609883318Z" level=info msg="StopPodSandbox for \"3c3adac1404d312ded0a39e3b9174a60b3e9628ed4edfb2a16682f7e5b6a0b11\" returns successfully" May 14 23:54:28.610155 containerd[1476]: time="2025-05-14T23:54:28.610113125Z" level=info msg="RemovePodSandbox for \"3c3adac1404d312ded0a39e3b9174a60b3e9628ed4edfb2a16682f7e5b6a0b11\"" May 14 23:54:28.610155 containerd[1476]: time="2025-05-14T23:54:28.610150175Z" level=info msg="Forcibly stopping sandbox \"3c3adac1404d312ded0a39e3b9174a60b3e9628ed4edfb2a16682f7e5b6a0b11\"" May 14 23:54:28.610289 containerd[1476]: time="2025-05-14T23:54:28.610236390Z" level=info msg="TearDown network for sandbox \"3c3adac1404d312ded0a39e3b9174a60b3e9628ed4edfb2a16682f7e5b6a0b11\" successfully" May 14 23:54:28.925554 containerd[1476]: time="2025-05-14T23:54:28.925405338Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c3adac1404d312ded0a39e3b9174a60b3e9628ed4edfb2a16682f7e5b6a0b11\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 14 23:54:28.925554 containerd[1476]: time="2025-05-14T23:54:28.925562967Z" level=info msg="RemovePodSandbox \"3c3adac1404d312ded0a39e3b9174a60b3e9628ed4edfb2a16682f7e5b6a0b11\" returns successfully" May 14 23:54:28.926262 containerd[1476]: time="2025-05-14T23:54:28.926207404Z" level=info msg="StopPodSandbox for \"39d6ed701550c898029e120310ff79888bef85ab582ebf0e2e11dec445579c7d\"" May 14 23:54:28.926369 containerd[1476]: time="2025-05-14T23:54:28.926350727Z" level=info msg="TearDown network for sandbox \"39d6ed701550c898029e120310ff79888bef85ab582ebf0e2e11dec445579c7d\" successfully" May 14 23:54:28.926369 containerd[1476]: time="2025-05-14T23:54:28.926365635Z" level=info msg="StopPodSandbox for \"39d6ed701550c898029e120310ff79888bef85ab582ebf0e2e11dec445579c7d\" returns successfully" May 14 23:54:28.926845 containerd[1476]: time="2025-05-14T23:54:28.926814148Z" level=info msg="RemovePodSandbox for \"39d6ed701550c898029e120310ff79888bef85ab582ebf0e2e11dec445579c7d\"" May 14 23:54:28.926845 containerd[1476]: time="2025-05-14T23:54:28.926845718Z" level=info msg="Forcibly stopping sandbox \"39d6ed701550c898029e120310ff79888bef85ab582ebf0e2e11dec445579c7d\"" May 14 23:54:28.927018 containerd[1476]: time="2025-05-14T23:54:28.926938886Z" level=info msg="TearDown network for sandbox \"39d6ed701550c898029e120310ff79888bef85ab582ebf0e2e11dec445579c7d\" successfully" May 14 23:54:29.132030 containerd[1476]: time="2025-05-14T23:54:29.131975190Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"39d6ed701550c898029e120310ff79888bef85ab582ebf0e2e11dec445579c7d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 23:54:29.132199 containerd[1476]: time="2025-05-14T23:54:29.132059771Z" level=info msg="RemovePodSandbox \"39d6ed701550c898029e120310ff79888bef85ab582ebf0e2e11dec445579c7d\" returns successfully" May 14 23:54:29.132545 containerd[1476]: time="2025-05-14T23:54:29.132515217Z" level=info msg="StopPodSandbox for \"2aa79a17a2dc09da6e0e0bd5a46633f189d465f72fc111a6912600d9e079169b\"" May 14 23:54:29.132635 containerd[1476]: time="2025-05-14T23:54:29.132618724Z" level=info msg="TearDown network for sandbox \"2aa79a17a2dc09da6e0e0bd5a46633f189d465f72fc111a6912600d9e079169b\" successfully" May 14 23:54:29.132635 containerd[1476]: time="2025-05-14T23:54:29.132630947Z" level=info msg="StopPodSandbox for \"2aa79a17a2dc09da6e0e0bd5a46633f189d465f72fc111a6912600d9e079169b\" returns successfully" May 14 23:54:29.132905 containerd[1476]: time="2025-05-14T23:54:29.132867017Z" level=info msg="RemovePodSandbox for \"2aa79a17a2dc09da6e0e0bd5a46633f189d465f72fc111a6912600d9e079169b\"" May 14 23:54:29.132905 containerd[1476]: time="2025-05-14T23:54:29.132890581Z" level=info msg="Forcibly stopping sandbox \"2aa79a17a2dc09da6e0e0bd5a46633f189d465f72fc111a6912600d9e079169b\"" May 14 23:54:29.132991 containerd[1476]: time="2025-05-14T23:54:29.132957829Z" level=info msg="TearDown network for sandbox \"2aa79a17a2dc09da6e0e0bd5a46633f189d465f72fc111a6912600d9e079169b\" successfully" May 14 23:54:29.643457 containerd[1476]: time="2025-05-14T23:54:29.643376123Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2aa79a17a2dc09da6e0e0bd5a46633f189d465f72fc111a6912600d9e079169b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 14 23:54:29.644091 containerd[1476]: time="2025-05-14T23:54:29.643492193Z" level=info msg="RemovePodSandbox \"2aa79a17a2dc09da6e0e0bd5a46633f189d465f72fc111a6912600d9e079169b\" returns successfully" May 14 23:54:29.644091 containerd[1476]: time="2025-05-14T23:54:29.644016421Z" level=info msg="StopPodSandbox for \"33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb\"" May 14 23:54:29.644163 containerd[1476]: time="2025-05-14T23:54:29.644107334Z" level=info msg="TearDown network for sandbox \"33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb\" successfully" May 14 23:54:29.644163 containerd[1476]: time="2025-05-14T23:54:29.644116972Z" level=info msg="StopPodSandbox for \"33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb\" returns successfully" May 14 23:54:29.644399 containerd[1476]: time="2025-05-14T23:54:29.644367409Z" level=info msg="RemovePodSandbox for \"33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb\"" May 14 23:54:29.644399 containerd[1476]: time="2025-05-14T23:54:29.644394541Z" level=info msg="Forcibly stopping sandbox \"33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb\"" May 14 23:54:29.644541 containerd[1476]: time="2025-05-14T23:54:29.644489301Z" level=info msg="TearDown network for sandbox \"33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb\" successfully" May 14 23:54:29.696683 containerd[1476]: time="2025-05-14T23:54:29.696608206Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 23:54:29.696683 containerd[1476]: time="2025-05-14T23:54:29.696693749Z" level=info msg="RemovePodSandbox \"33057a585520de0bef2542176b7cbb81a8de082cb860fb2e70081344952b80fb\" returns successfully" May 14 23:54:29.697213 containerd[1476]: time="2025-05-14T23:54:29.697185664Z" level=info msg="StopPodSandbox for \"342f0da20d9f4dc5bad7675220b463dec5464d9133866fc2665994e8551463d5\"" May 14 23:54:29.697327 containerd[1476]: time="2025-05-14T23:54:29.697300654Z" level=info msg="TearDown network for sandbox \"342f0da20d9f4dc5bad7675220b463dec5464d9133866fc2665994e8551463d5\" successfully" May 14 23:54:29.697327 containerd[1476]: time="2025-05-14T23:54:29.697316944Z" level=info msg="StopPodSandbox for \"342f0da20d9f4dc5bad7675220b463dec5464d9133866fc2665994e8551463d5\" returns successfully" May 14 23:54:29.697640 containerd[1476]: time="2025-05-14T23:54:29.697614521Z" level=info msg="RemovePodSandbox for \"342f0da20d9f4dc5bad7675220b463dec5464d9133866fc2665994e8551463d5\"" May 14 23:54:29.697707 containerd[1476]: time="2025-05-14T23:54:29.697641332Z" level=info msg="Forcibly stopping sandbox \"342f0da20d9f4dc5bad7675220b463dec5464d9133866fc2665994e8551463d5\"" May 14 23:54:29.697797 containerd[1476]: time="2025-05-14T23:54:29.697721594Z" level=info msg="TearDown network for sandbox \"342f0da20d9f4dc5bad7675220b463dec5464d9133866fc2665994e8551463d5\" successfully" May 14 23:54:29.775668 containerd[1476]: time="2025-05-14T23:54:29.775595708Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"342f0da20d9f4dc5bad7675220b463dec5464d9133866fc2665994e8551463d5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 14 23:54:29.775826 containerd[1476]: time="2025-05-14T23:54:29.775688154Z" level=info msg="RemovePodSandbox \"342f0da20d9f4dc5bad7675220b463dec5464d9133866fc2665994e8551463d5\" returns successfully" May 14 23:54:29.776264 containerd[1476]: time="2025-05-14T23:54:29.776230225Z" level=info msg="StopPodSandbox for \"429ffddfb89439b25885158427c0d54b54e7fa1fe4b1f966085969fb4716bd55\"" May 14 23:54:29.776389 containerd[1476]: time="2025-05-14T23:54:29.776361435Z" level=info msg="TearDown network for sandbox \"429ffddfb89439b25885158427c0d54b54e7fa1fe4b1f966085969fb4716bd55\" successfully" May 14 23:54:29.776389 containerd[1476]: time="2025-05-14T23:54:29.776380752Z" level=info msg="StopPodSandbox for \"429ffddfb89439b25885158427c0d54b54e7fa1fe4b1f966085969fb4716bd55\" returns successfully" May 14 23:54:29.776739 containerd[1476]: time="2025-05-14T23:54:29.776712954Z" level=info msg="RemovePodSandbox for \"429ffddfb89439b25885158427c0d54b54e7fa1fe4b1f966085969fb4716bd55\"" May 14 23:54:29.776827 containerd[1476]: time="2025-05-14T23:54:29.776742179Z" level=info msg="Forcibly stopping sandbox \"429ffddfb89439b25885158427c0d54b54e7fa1fe4b1f966085969fb4716bd55\"" May 14 23:54:29.776863 containerd[1476]: time="2025-05-14T23:54:29.776818805Z" level=info msg="TearDown network for sandbox \"429ffddfb89439b25885158427c0d54b54e7fa1fe4b1f966085969fb4716bd55\" successfully" May 14 23:54:29.835842 containerd[1476]: time="2025-05-14T23:54:29.835796870Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"429ffddfb89439b25885158427c0d54b54e7fa1fe4b1f966085969fb4716bd55\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
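From 23:54:28 onward the log shows a sweep over stale pod sandboxes, presumably those left behind by the earlier failed attempts (the RunPodSandbox calls above are at Attempt 4 and 5): each one is stopped, torn down, and force-removed, containerd warns that it can no longer find the sandbox status, and the removal still returns successfully. To tally how many distinct sandboxes were cleaned up this way from a saved journal excerpt, a throwaway helper along these lines works (the file name journal.txt is just a placeholder):

    import re

    # Matches the warning emitted for each already-gone sandbox; \W* absorbs the
    # escaped quote before the 64-hex-character sandbox ID as it appears in the log.
    pattern = re.compile(
        r'Failed to get podSandbox status for container event for sandboxID \W*([0-9a-f]{64})')

    with open("journal.txt", encoding="utf-8") as fh:
        sandbox_ids = set(pattern.findall(fh.read()))

    print(f"{len(sandbox_ids)} stale sandboxes were removed without a live status")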
May 14 23:54:29.835963 containerd[1476]: time="2025-05-14T23:54:29.835880679Z" level=info msg="RemovePodSandbox \"429ffddfb89439b25885158427c0d54b54e7fa1fe4b1f966085969fb4716bd55\" returns successfully" May 14 23:54:29.836379 containerd[1476]: time="2025-05-14T23:54:29.836353309Z" level=info msg="StopPodSandbox for \"5838299e37b21b676e7fcfae99022d44d5f0f28676549724d6a13816c432b0a6\"" May 14 23:54:29.836538 containerd[1476]: time="2025-05-14T23:54:29.836505899Z" level=info msg="TearDown network for sandbox \"5838299e37b21b676e7fcfae99022d44d5f0f28676549724d6a13816c432b0a6\" successfully" May 14 23:54:29.836538 containerd[1476]: time="2025-05-14T23:54:29.836524786Z" level=info msg="StopPodSandbox for \"5838299e37b21b676e7fcfae99022d44d5f0f28676549724d6a13816c432b0a6\" returns successfully" May 14 23:54:29.836876 containerd[1476]: time="2025-05-14T23:54:29.836830767Z" level=info msg="RemovePodSandbox for \"5838299e37b21b676e7fcfae99022d44d5f0f28676549724d6a13816c432b0a6\"" May 14 23:54:29.836876 containerd[1476]: time="2025-05-14T23:54:29.836858831Z" level=info msg="Forcibly stopping sandbox \"5838299e37b21b676e7fcfae99022d44d5f0f28676549724d6a13816c432b0a6\"" May 14 23:54:29.837116 containerd[1476]: time="2025-05-14T23:54:29.836948461Z" level=info msg="TearDown network for sandbox \"5838299e37b21b676e7fcfae99022d44d5f0f28676549724d6a13816c432b0a6\" successfully" May 14 23:54:30.037634 containerd[1476]: time="2025-05-14T23:54:30.037447851Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5838299e37b21b676e7fcfae99022d44d5f0f28676549724d6a13816c432b0a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 14 23:54:30.037634 containerd[1476]: time="2025-05-14T23:54:30.037559373Z" level=info msg="RemovePodSandbox \"5838299e37b21b676e7fcfae99022d44d5f0f28676549724d6a13816c432b0a6\" returns successfully" May 14 23:54:30.038140 containerd[1476]: time="2025-05-14T23:54:30.038102826Z" level=info msg="StopPodSandbox for \"b830ece64ad1ce9e3279112f6c999e3bf5e99735dd38eebd357e5af6aa158d67\"" May 14 23:54:30.038295 containerd[1476]: time="2025-05-14T23:54:30.038254996Z" level=info msg="TearDown network for sandbox \"b830ece64ad1ce9e3279112f6c999e3bf5e99735dd38eebd357e5af6aa158d67\" successfully" May 14 23:54:30.038295 containerd[1476]: time="2025-05-14T23:54:30.038271367Z" level=info msg="StopPodSandbox for \"b830ece64ad1ce9e3279112f6c999e3bf5e99735dd38eebd357e5af6aa158d67\" returns successfully" May 14 23:54:30.039451 containerd[1476]: time="2025-05-14T23:54:30.038724680Z" level=info msg="RemovePodSandbox for \"b830ece64ad1ce9e3279112f6c999e3bf5e99735dd38eebd357e5af6aa158d67\"" May 14 23:54:30.039451 containerd[1476]: time="2025-05-14T23:54:30.038754226Z" level=info msg="Forcibly stopping sandbox \"b830ece64ad1ce9e3279112f6c999e3bf5e99735dd38eebd357e5af6aa158d67\"" May 14 23:54:30.039451 containerd[1476]: time="2025-05-14T23:54:30.038854326Z" level=info msg="TearDown network for sandbox \"b830ece64ad1ce9e3279112f6c999e3bf5e99735dd38eebd357e5af6aa158d67\" successfully" May 14 23:54:30.050838 systemd[1]: Started sshd@15-10.0.0.25:22-10.0.0.1:34620.service - OpenSSH per-connection server daemon (10.0.0.1:34620). 
May 14 23:54:30.122905 sshd[5575]: Accepted publickey for core from 10.0.0.1 port 34620 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 14 23:54:30.124925 sshd-session[5575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:30.130092 systemd-logind[1460]: New session 16 of user core. May 14 23:54:30.131839 containerd[1476]: time="2025-05-14T23:54:30.131645957Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b830ece64ad1ce9e3279112f6c999e3bf5e99735dd38eebd357e5af6aa158d67\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 14 23:54:30.131839 containerd[1476]: time="2025-05-14T23:54:30.131752940Z" level=info msg="RemovePodSandbox \"b830ece64ad1ce9e3279112f6c999e3bf5e99735dd38eebd357e5af6aa158d67\" returns successfully" May 14 23:54:30.132382 containerd[1476]: time="2025-05-14T23:54:30.132285022Z" level=info msg="StopPodSandbox for \"a500e5e0e5e581d11c138a050e95e4036d134b1201f8ea691935fed3c26f6323\"" May 14 23:54:30.132583 containerd[1476]: time="2025-05-14T23:54:30.132487437Z" level=info msg="TearDown network for sandbox \"a500e5e0e5e581d11c138a050e95e4036d134b1201f8ea691935fed3c26f6323\" successfully" May 14 23:54:30.132583 containerd[1476]: time="2025-05-14T23:54:30.132504370Z" level=info msg="StopPodSandbox for \"a500e5e0e5e581d11c138a050e95e4036d134b1201f8ea691935fed3c26f6323\" returns successfully" May 14 23:54:30.133023 containerd[1476]: time="2025-05-14T23:54:30.132982629Z" level=info msg="RemovePodSandbox for \"a500e5e0e5e581d11c138a050e95e4036d134b1201f8ea691935fed3c26f6323\"" May 14 23:54:30.133023 containerd[1476]: time="2025-05-14T23:54:30.133016333Z" level=info msg="Forcibly stopping sandbox \"a500e5e0e5e581d11c138a050e95e4036d134b1201f8ea691935fed3c26f6323\"" May 14 23:54:30.133197 containerd[1476]: time="2025-05-14T23:54:30.133127996Z" level=info msg="TearDown network for sandbox \"a500e5e0e5e581d11c138a050e95e4036d134b1201f8ea691935fed3c26f6323\" successfully" May 14 23:54:30.138976 systemd[1]: Started session-16.scope - Session 16 of User core. May 14 23:54:30.213711 containerd[1476]: time="2025-05-14T23:54:30.213637355Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a500e5e0e5e581d11c138a050e95e4036d134b1201f8ea691935fed3c26f6323\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 23:54:30.213882 containerd[1476]: time="2025-05-14T23:54:30.213729670Z" level=info msg="RemovePodSandbox \"a500e5e0e5e581d11c138a050e95e4036d134b1201f8ea691935fed3c26f6323\" returns successfully" May 14 23:54:30.214452 containerd[1476]: time="2025-05-14T23:54:30.214355882Z" level=info msg="StopPodSandbox for \"e17a5d59d30bbf026bcd8bb593ff64be844debc60baddd4f04ee814c91cd8450\"" May 14 23:54:30.214552 containerd[1476]: time="2025-05-14T23:54:30.214522959Z" level=info msg="TearDown network for sandbox \"e17a5d59d30bbf026bcd8bb593ff64be844debc60baddd4f04ee814c91cd8450\" successfully" May 14 23:54:30.214552 containerd[1476]: time="2025-05-14T23:54:30.214535213Z" level=info msg="StopPodSandbox for \"e17a5d59d30bbf026bcd8bb593ff64be844debc60baddd4f04ee814c91cd8450\" returns successfully" May 14 23:54:30.215077 containerd[1476]: time="2025-05-14T23:54:30.215045073Z" level=info msg="RemovePodSandbox for \"e17a5d59d30bbf026bcd8bb593ff64be844debc60baddd4f04ee814c91cd8450\"" May 14 23:54:30.215077 containerd[1476]: time="2025-05-14T23:54:30.215065752Z" level=info msg="Forcibly stopping sandbox \"e17a5d59d30bbf026bcd8bb593ff64be844debc60baddd4f04ee814c91cd8450\"" May 14 23:54:30.215194 containerd[1476]: time="2025-05-14T23:54:30.215139903Z" level=info msg="TearDown network for sandbox \"e17a5d59d30bbf026bcd8bb593ff64be844debc60baddd4f04ee814c91cd8450\" successfully" May 14 23:54:30.321283 containerd[1476]: time="2025-05-14T23:54:30.321090562Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e17a5d59d30bbf026bcd8bb593ff64be844debc60baddd4f04ee814c91cd8450\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 14 23:54:30.321283 containerd[1476]: time="2025-05-14T23:54:30.321200862Z" level=info msg="RemovePodSandbox \"e17a5d59d30bbf026bcd8bb593ff64be844debc60baddd4f04ee814c91cd8450\" returns successfully" May 14 23:54:30.321925 containerd[1476]: time="2025-05-14T23:54:30.321888951Z" level=info msg="StopPodSandbox for \"36467c1767a5e6e07d0af580f604522cb035c535a227d35c440d02c0d9815886\"" May 14 23:54:30.322084 containerd[1476]: time="2025-05-14T23:54:30.322061218Z" level=info msg="TearDown network for sandbox \"36467c1767a5e6e07d0af580f604522cb035c535a227d35c440d02c0d9815886\" successfully" May 14 23:54:30.322084 containerd[1476]: time="2025-05-14T23:54:30.322077019Z" level=info msg="StopPodSandbox for \"36467c1767a5e6e07d0af580f604522cb035c535a227d35c440d02c0d9815886\" returns successfully" May 14 23:54:30.322590 containerd[1476]: time="2025-05-14T23:54:30.322555698Z" level=info msg="RemovePodSandbox for \"36467c1767a5e6e07d0af580f604522cb035c535a227d35c440d02c0d9815886\"" May 14 23:54:30.322590 containerd[1476]: time="2025-05-14T23:54:30.322585225Z" level=info msg="Forcibly stopping sandbox \"36467c1767a5e6e07d0af580f604522cb035c535a227d35c440d02c0d9815886\"" May 14 23:54:30.322773 containerd[1476]: time="2025-05-14T23:54:30.322661740Z" level=info msg="TearDown network for sandbox \"36467c1767a5e6e07d0af580f604522cb035c535a227d35c440d02c0d9815886\" successfully" May 14 23:54:30.339384 sshd[5577]: Connection closed by 10.0.0.1 port 34620 May 14 23:54:30.339866 sshd-session[5575]: pam_unix(sshd:session): session closed for user core May 14 23:54:30.347940 systemd[1]: sshd@15-10.0.0.25:22-10.0.0.1:34620.service: Deactivated successfully. May 14 23:54:30.352717 systemd[1]: session-16.scope: Deactivated successfully. May 14 23:54:30.353650 systemd-logind[1460]: Session 16 logged out. 
Waiting for processes to exit. May 14 23:54:30.354766 systemd-logind[1460]: Removed session 16. May 14 23:54:30.559375 containerd[1476]: time="2025-05-14T23:54:30.559296793Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"36467c1767a5e6e07d0af580f604522cb035c535a227d35c440d02c0d9815886\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 14 23:54:30.559585 containerd[1476]: time="2025-05-14T23:54:30.559403175Z" level=info msg="RemovePodSandbox \"36467c1767a5e6e07d0af580f604522cb035c535a227d35c440d02c0d9815886\" returns successfully" May 14 23:54:30.559982 containerd[1476]: time="2025-05-14T23:54:30.559957269Z" level=info msg="StopPodSandbox for \"64638f12b1279b2b0ba9e81e29f292742c3d924613846bbe2cbf009965970c49\"" May 14 23:54:30.560175 containerd[1476]: time="2025-05-14T23:54:30.560088088Z" level=info msg="TearDown network for sandbox \"64638f12b1279b2b0ba9e81e29f292742c3d924613846bbe2cbf009965970c49\" successfully" May 14 23:54:30.560175 containerd[1476]: time="2025-05-14T23:54:30.560160706Z" level=info msg="StopPodSandbox for \"64638f12b1279b2b0ba9e81e29f292742c3d924613846bbe2cbf009965970c49\" returns successfully" May 14 23:54:30.560492 containerd[1476]: time="2025-05-14T23:54:30.560458743Z" level=info msg="RemovePodSandbox for \"64638f12b1279b2b0ba9e81e29f292742c3d924613846bbe2cbf009965970c49\"" May 14 23:54:30.560570 containerd[1476]: time="2025-05-14T23:54:30.560492948Z" level=info msg="Forcibly stopping sandbox \"64638f12b1279b2b0ba9e81e29f292742c3d924613846bbe2cbf009965970c49\"" May 14 23:54:30.560616 containerd[1476]: time="2025-05-14T23:54:30.560576767Z" level=info msg="TearDown network for sandbox \"64638f12b1279b2b0ba9e81e29f292742c3d924613846bbe2cbf009965970c49\" successfully" May 14 23:54:31.001628 containerd[1476]: time="2025-05-14T23:54:31.001548479Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"64638f12b1279b2b0ba9e81e29f292742c3d924613846bbe2cbf009965970c49\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 23:54:31.002177 containerd[1476]: time="2025-05-14T23:54:31.001669549Z" level=info msg="RemovePodSandbox \"64638f12b1279b2b0ba9e81e29f292742c3d924613846bbe2cbf009965970c49\" returns successfully" May 14 23:54:31.002463 containerd[1476]: time="2025-05-14T23:54:31.002406210Z" level=info msg="StopPodSandbox for \"268e6454561b6cd1e34cf6cd2a2b9073b3347d38c2ae50171214f1068d86d773\"" May 14 23:54:31.002600 containerd[1476]: time="2025-05-14T23:54:31.002575473Z" level=info msg="TearDown network for sandbox \"268e6454561b6cd1e34cf6cd2a2b9073b3347d38c2ae50171214f1068d86d773\" successfully" May 14 23:54:31.002600 containerd[1476]: time="2025-05-14T23:54:31.002590260Z" level=info msg="StopPodSandbox for \"268e6454561b6cd1e34cf6cd2a2b9073b3347d38c2ae50171214f1068d86d773\" returns successfully" May 14 23:54:31.003039 containerd[1476]: time="2025-05-14T23:54:31.002997064Z" level=info msg="RemovePodSandbox for \"268e6454561b6cd1e34cf6cd2a2b9073b3347d38c2ae50171214f1068d86d773\"" May 14 23:54:31.003097 containerd[1476]: time="2025-05-14T23:54:31.003050856Z" level=info msg="Forcibly stopping sandbox \"268e6454561b6cd1e34cf6cd2a2b9073b3347d38c2ae50171214f1068d86d773\"" May 14 23:54:31.003226 containerd[1476]: time="2025-05-14T23:54:31.003181755Z" level=info msg="TearDown network for sandbox \"268e6454561b6cd1e34cf6cd2a2b9073b3347d38c2ae50171214f1068d86d773\" successfully" May 14 23:54:31.149571 containerd[1476]: time="2025-05-14T23:54:31.149476692Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"268e6454561b6cd1e34cf6cd2a2b9073b3347d38c2ae50171214f1068d86d773\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 14 23:54:31.149721 containerd[1476]: time="2025-05-14T23:54:31.149599095Z" level=info msg="RemovePodSandbox \"268e6454561b6cd1e34cf6cd2a2b9073b3347d38c2ae50171214f1068d86d773\" returns successfully" May 14 23:54:31.150132 containerd[1476]: time="2025-05-14T23:54:31.150108905Z" level=info msg="StopPodSandbox for \"09fd2766c1b27c1110f563f47b829324d1d894079423388d07b7b46946f8c17b\"" May 14 23:54:31.150275 containerd[1476]: time="2025-05-14T23:54:31.150252527Z" level=info msg="TearDown network for sandbox \"09fd2766c1b27c1110f563f47b829324d1d894079423388d07b7b46946f8c17b\" successfully" May 14 23:54:31.150310 containerd[1476]: time="2025-05-14T23:54:31.150272716Z" level=info msg="StopPodSandbox for \"09fd2766c1b27c1110f563f47b829324d1d894079423388d07b7b46946f8c17b\" returns successfully" May 14 23:54:31.150599 containerd[1476]: time="2025-05-14T23:54:31.150557578Z" level=info msg="RemovePodSandbox for \"09fd2766c1b27c1110f563f47b829324d1d894079423388d07b7b46946f8c17b\"" May 14 23:54:31.150599 containerd[1476]: time="2025-05-14T23:54:31.150585982Z" level=info msg="Forcibly stopping sandbox \"09fd2766c1b27c1110f563f47b829324d1d894079423388d07b7b46946f8c17b\"" May 14 23:54:31.150720 containerd[1476]: time="2025-05-14T23:54:31.150670753Z" level=info msg="TearDown network for sandbox \"09fd2766c1b27c1110f563f47b829324d1d894079423388d07b7b46946f8c17b\" successfully" May 14 23:54:31.318558 containerd[1476]: time="2025-05-14T23:54:31.318363960Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"09fd2766c1b27c1110f563f47b829324d1d894079423388d07b7b46946f8c17b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 23:54:31.318558 containerd[1476]: time="2025-05-14T23:54:31.318488166Z" level=info msg="RemovePodSandbox \"09fd2766c1b27c1110f563f47b829324d1d894079423388d07b7b46946f8c17b\" returns successfully" May 14 23:54:31.319367 containerd[1476]: time="2025-05-14T23:54:31.319114507Z" level=info msg="StopPodSandbox for \"61dac3c3631ee43e6c3e3f6d5c7b058475cf2102ce6f86760394043da4ef92e9\"" May 14 23:54:31.319367 containerd[1476]: time="2025-05-14T23:54:31.319280644Z" level=info msg="TearDown network for sandbox \"61dac3c3631ee43e6c3e3f6d5c7b058475cf2102ce6f86760394043da4ef92e9\" successfully" May 14 23:54:31.319367 containerd[1476]: time="2025-05-14T23:54:31.319293558Z" level=info msg="StopPodSandbox for \"61dac3c3631ee43e6c3e3f6d5c7b058475cf2102ce6f86760394043da4ef92e9\" returns successfully" May 14 23:54:31.319687 containerd[1476]: time="2025-05-14T23:54:31.319617984Z" level=info msg="RemovePodSandbox for \"61dac3c3631ee43e6c3e3f6d5c7b058475cf2102ce6f86760394043da4ef92e9\"" May 14 23:54:31.319687 containerd[1476]: time="2025-05-14T23:54:31.319642571Z" level=info msg="Forcibly stopping sandbox \"61dac3c3631ee43e6c3e3f6d5c7b058475cf2102ce6f86760394043da4ef92e9\"" May 14 23:54:31.319779 containerd[1476]: time="2025-05-14T23:54:31.319731881Z" level=info msg="TearDown network for sandbox \"61dac3c3631ee43e6c3e3f6d5c7b058475cf2102ce6f86760394043da4ef92e9\" successfully" May 14 23:54:31.408112 containerd[1476]: time="2025-05-14T23:54:31.408035695Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"61dac3c3631ee43e6c3e3f6d5c7b058475cf2102ce6f86760394043da4ef92e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 14 23:54:31.408295 containerd[1476]: time="2025-05-14T23:54:31.408130145Z" level=info msg="RemovePodSandbox \"61dac3c3631ee43e6c3e3f6d5c7b058475cf2102ce6f86760394043da4ef92e9\" returns successfully" May 14 23:54:31.408980 containerd[1476]: time="2025-05-14T23:54:31.408874781Z" level=info msg="StopPodSandbox for \"f33e4905296fbb90c952de0b4260a7869197385f32c1c9eb9736309b38de54e9\"" May 14 23:54:31.409191 containerd[1476]: time="2025-05-14T23:54:31.409052939Z" level=info msg="TearDown network for sandbox \"f33e4905296fbb90c952de0b4260a7869197385f32c1c9eb9736309b38de54e9\" successfully" May 14 23:54:31.409191 containerd[1476]: time="2025-05-14T23:54:31.409112313Z" level=info msg="StopPodSandbox for \"f33e4905296fbb90c952de0b4260a7869197385f32c1c9eb9736309b38de54e9\" returns successfully" May 14 23:54:31.409501 containerd[1476]: time="2025-05-14T23:54:31.409473499Z" level=info msg="RemovePodSandbox for \"f33e4905296fbb90c952de0b4260a7869197385f32c1c9eb9736309b38de54e9\"" May 14 23:54:31.409592 containerd[1476]: time="2025-05-14T23:54:31.409502735Z" level=info msg="Forcibly stopping sandbox \"f33e4905296fbb90c952de0b4260a7869197385f32c1c9eb9736309b38de54e9\"" May 14 23:54:31.409655 containerd[1476]: time="2025-05-14T23:54:31.409605892Z" level=info msg="TearDown network for sandbox \"f33e4905296fbb90c952de0b4260a7869197385f32c1c9eb9736309b38de54e9\" successfully" May 14 23:54:31.493263 containerd[1476]: time="2025-05-14T23:54:31.493160816Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f33e4905296fbb90c952de0b4260a7869197385f32c1c9eb9736309b38de54e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 23:54:31.493512 containerd[1476]: time="2025-05-14T23:54:31.493301813Z" level=info msg="RemovePodSandbox \"f33e4905296fbb90c952de0b4260a7869197385f32c1c9eb9736309b38de54e9\" returns successfully" May 14 23:54:31.494026 containerd[1476]: time="2025-05-14T23:54:31.493973601Z" level=info msg="StopPodSandbox for \"3fa3b1003a8b79389e551d954f0b3529a472a25717ba42e128b08e907973f4ea\"" May 14 23:54:31.494206 containerd[1476]: time="2025-05-14T23:54:31.494144606Z" level=info msg="TearDown network for sandbox \"3fa3b1003a8b79389e551d954f0b3529a472a25717ba42e128b08e907973f4ea\" successfully" May 14 23:54:31.494206 containerd[1476]: time="2025-05-14T23:54:31.494171247Z" level=info msg="StopPodSandbox for \"3fa3b1003a8b79389e551d954f0b3529a472a25717ba42e128b08e907973f4ea\" returns successfully" May 14 23:54:31.494784 containerd[1476]: time="2025-05-14T23:54:31.494741291Z" level=info msg="RemovePodSandbox for \"3fa3b1003a8b79389e551d954f0b3529a472a25717ba42e128b08e907973f4ea\"" May 14 23:54:31.494838 containerd[1476]: time="2025-05-14T23:54:31.494791436Z" level=info msg="Forcibly stopping sandbox \"3fa3b1003a8b79389e551d954f0b3529a472a25717ba42e128b08e907973f4ea\"" May 14 23:54:31.494961 containerd[1476]: time="2025-05-14T23:54:31.494911805Z" level=info msg="TearDown network for sandbox \"3fa3b1003a8b79389e551d954f0b3529a472a25717ba42e128b08e907973f4ea\" successfully" May 14 23:54:31.604986 containerd[1476]: time="2025-05-14T23:54:31.604813258Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3fa3b1003a8b79389e551d954f0b3529a472a25717ba42e128b08e907973f4ea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 14 23:54:31.604986 containerd[1476]: time="2025-05-14T23:54:31.604923677Z" level=info msg="RemovePodSandbox \"3fa3b1003a8b79389e551d954f0b3529a472a25717ba42e128b08e907973f4ea\" returns successfully" May 14 23:54:31.605544 containerd[1476]: time="2025-05-14T23:54:31.605492830Z" level=info msg="StopPodSandbox for \"a330a5be0695369a65f8b5b50efcb94668dbb316ccf670873233ab324a9cf62a\"" May 14 23:54:31.605831 containerd[1476]: time="2025-05-14T23:54:31.605618208Z" level=info msg="TearDown network for sandbox \"a330a5be0695369a65f8b5b50efcb94668dbb316ccf670873233ab324a9cf62a\" successfully" May 14 23:54:31.605831 containerd[1476]: time="2025-05-14T23:54:31.605819552Z" level=info msg="StopPodSandbox for \"a330a5be0695369a65f8b5b50efcb94668dbb316ccf670873233ab324a9cf62a\" returns successfully" May 14 23:54:31.606400 containerd[1476]: time="2025-05-14T23:54:31.606356181Z" level=info msg="RemovePodSandbox for \"a330a5be0695369a65f8b5b50efcb94668dbb316ccf670873233ab324a9cf62a\"" May 14 23:54:31.606496 containerd[1476]: time="2025-05-14T23:54:31.606403833Z" level=info msg="Forcibly stopping sandbox \"a330a5be0695369a65f8b5b50efcb94668dbb316ccf670873233ab324a9cf62a\"" May 14 23:54:31.606612 containerd[1476]: time="2025-05-14T23:54:31.606560571Z" level=info msg="TearDown network for sandbox \"a330a5be0695369a65f8b5b50efcb94668dbb316ccf670873233ab324a9cf62a\" successfully" May 14 23:54:31.688833 containerd[1476]: time="2025-05-14T23:54:31.688766229Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a330a5be0695369a65f8b5b50efcb94668dbb316ccf670873233ab324a9cf62a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 23:54:31.688833 containerd[1476]: time="2025-05-14T23:54:31.688830952Z" level=info msg="RemovePodSandbox \"a330a5be0695369a65f8b5b50efcb94668dbb316ccf670873233ab324a9cf62a\" returns successfully" May 14 23:54:31.689393 containerd[1476]: time="2025-05-14T23:54:31.689328499Z" level=info msg="StopPodSandbox for \"3a1484484d44afdd2669f185a956b649fde8531338810e9aa7d30cdea29fafd3\"" May 14 23:54:31.689604 containerd[1476]: time="2025-05-14T23:54:31.689565940Z" level=info msg="TearDown network for sandbox \"3a1484484d44afdd2669f185a956b649fde8531338810e9aa7d30cdea29fafd3\" successfully" May 14 23:54:31.689604 containerd[1476]: time="2025-05-14T23:54:31.689585056Z" level=info msg="StopPodSandbox for \"3a1484484d44afdd2669f185a956b649fde8531338810e9aa7d30cdea29fafd3\" returns successfully" May 14 23:54:31.689987 containerd[1476]: time="2025-05-14T23:54:31.689957615Z" level=info msg="RemovePodSandbox for \"3a1484484d44afdd2669f185a956b649fde8531338810e9aa7d30cdea29fafd3\"" May 14 23:54:31.690050 containerd[1476]: time="2025-05-14T23:54:31.689988954Z" level=info msg="Forcibly stopping sandbox \"3a1484484d44afdd2669f185a956b649fde8531338810e9aa7d30cdea29fafd3\"" May 14 23:54:31.690122 containerd[1476]: time="2025-05-14T23:54:31.690067935Z" level=info msg="TearDown network for sandbox \"3a1484484d44afdd2669f185a956b649fde8531338810e9aa7d30cdea29fafd3\" successfully" May 14 23:54:31.758048 containerd[1476]: time="2025-05-14T23:54:31.757960776Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a1484484d44afdd2669f185a956b649fde8531338810e9aa7d30cdea29fafd3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 14 23:54:31.758209 containerd[1476]: time="2025-05-14T23:54:31.758058561Z" level=info msg="RemovePodSandbox \"3a1484484d44afdd2669f185a956b649fde8531338810e9aa7d30cdea29fafd3\" returns successfully" May 14 23:54:31.758747 containerd[1476]: time="2025-05-14T23:54:31.758701524Z" level=info msg="StopPodSandbox for \"c6980ceb1a80f1c499968711b2039cd46613df3ed8a462932bc9870dba7f89ec\"" May 14 23:54:31.758863 containerd[1476]: time="2025-05-14T23:54:31.758841080Z" level=info msg="TearDown network for sandbox \"c6980ceb1a80f1c499968711b2039cd46613df3ed8a462932bc9870dba7f89ec\" successfully" May 14 23:54:31.758863 containerd[1476]: time="2025-05-14T23:54:31.758856779Z" level=info msg="StopPodSandbox for \"c6980ceb1a80f1c499968711b2039cd46613df3ed8a462932bc9870dba7f89ec\" returns successfully" May 14 23:54:31.759286 containerd[1476]: time="2025-05-14T23:54:31.759257271Z" level=info msg="RemovePodSandbox for \"c6980ceb1a80f1c499968711b2039cd46613df3ed8a462932bc9870dba7f89ec\"" May 14 23:54:31.759286 containerd[1476]: time="2025-05-14T23:54:31.759282469Z" level=info msg="Forcibly stopping sandbox \"c6980ceb1a80f1c499968711b2039cd46613df3ed8a462932bc9870dba7f89ec\"" May 14 23:54:31.759433 containerd[1476]: time="2025-05-14T23:54:31.759370656Z" level=info msg="TearDown network for sandbox \"c6980ceb1a80f1c499968711b2039cd46613df3ed8a462932bc9870dba7f89ec\" successfully" May 14 23:54:31.838315 containerd[1476]: time="2025-05-14T23:54:31.838258175Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c6980ceb1a80f1c499968711b2039cd46613df3ed8a462932bc9870dba7f89ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 23:54:31.838513 containerd[1476]: time="2025-05-14T23:54:31.838359087Z" level=info msg="RemovePodSandbox \"c6980ceb1a80f1c499968711b2039cd46613df3ed8a462932bc9870dba7f89ec\" returns successfully" May 14 23:54:31.839033 containerd[1476]: time="2025-05-14T23:54:31.838970089Z" level=info msg="StopPodSandbox for \"5517df5d3ad135e3dc41f6cb51dce505f9f23f9aeb3c529d7190605dde52f2dc\"" May 14 23:54:31.839179 containerd[1476]: time="2025-05-14T23:54:31.839146494Z" level=info msg="TearDown network for sandbox \"5517df5d3ad135e3dc41f6cb51dce505f9f23f9aeb3c529d7190605dde52f2dc\" successfully" May 14 23:54:31.839205 containerd[1476]: time="2025-05-14T23:54:31.839173536Z" level=info msg="StopPodSandbox for \"5517df5d3ad135e3dc41f6cb51dce505f9f23f9aeb3c529d7190605dde52f2dc\" returns successfully" May 14 23:54:31.839624 containerd[1476]: time="2025-05-14T23:54:31.839598804Z" level=info msg="RemovePodSandbox for \"5517df5d3ad135e3dc41f6cb51dce505f9f23f9aeb3c529d7190605dde52f2dc\"" May 14 23:54:31.839691 containerd[1476]: time="2025-05-14T23:54:31.839627539Z" level=info msg="Forcibly stopping sandbox \"5517df5d3ad135e3dc41f6cb51dce505f9f23f9aeb3c529d7190605dde52f2dc\"" May 14 23:54:31.839737 containerd[1476]: time="2025-05-14T23:54:31.839692262Z" level=info msg="TearDown network for sandbox \"5517df5d3ad135e3dc41f6cb51dce505f9f23f9aeb3c529d7190605dde52f2dc\" successfully" May 14 23:54:31.934661 containerd[1476]: time="2025-05-14T23:54:31.934573715Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5517df5d3ad135e3dc41f6cb51dce505f9f23f9aeb3c529d7190605dde52f2dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 14 23:54:31.934897 containerd[1476]: time="2025-05-14T23:54:31.934671580Z" level=info msg="RemovePodSandbox \"5517df5d3ad135e3dc41f6cb51dce505f9f23f9aeb3c529d7190605dde52f2dc\" returns successfully" May 14 23:54:31.935359 containerd[1476]: time="2025-05-14T23:54:31.935327307Z" level=info msg="StopPodSandbox for \"459650d24e79534c698ee1049fadba62cda28dbecdff15bfbd2c4af39e8fc2b2\"" May 14 23:54:31.935507 containerd[1476]: time="2025-05-14T23:54:31.935484696Z" level=info msg="TearDown network for sandbox \"459650d24e79534c698ee1049fadba62cda28dbecdff15bfbd2c4af39e8fc2b2\" successfully" May 14 23:54:31.935507 containerd[1476]: time="2025-05-14T23:54:31.935502521Z" level=info msg="StopPodSandbox for \"459650d24e79534c698ee1049fadba62cda28dbecdff15bfbd2c4af39e8fc2b2\" returns successfully" May 14 23:54:31.935836 containerd[1476]: time="2025-05-14T23:54:31.935806309Z" level=info msg="RemovePodSandbox for \"459650d24e79534c698ee1049fadba62cda28dbecdff15bfbd2c4af39e8fc2b2\"" May 14 23:54:31.935836 containerd[1476]: time="2025-05-14T23:54:31.935839822Z" level=info msg="Forcibly stopping sandbox \"459650d24e79534c698ee1049fadba62cda28dbecdff15bfbd2c4af39e8fc2b2\"" May 14 23:54:31.936019 containerd[1476]: time="2025-05-14T23:54:31.935940754Z" level=info msg="TearDown network for sandbox \"459650d24e79534c698ee1049fadba62cda28dbecdff15bfbd2c4af39e8fc2b2\" successfully" May 14 23:54:32.005646 containerd[1476]: time="2025-05-14T23:54:32.005566478Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"459650d24e79534c698ee1049fadba62cda28dbecdff15bfbd2c4af39e8fc2b2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 23:54:32.006092 containerd[1476]: time="2025-05-14T23:54:32.005678031Z" level=info msg="RemovePodSandbox \"459650d24e79534c698ee1049fadba62cda28dbecdff15bfbd2c4af39e8fc2b2\" returns successfully" May 14 23:54:32.006398 containerd[1476]: time="2025-05-14T23:54:32.006366038Z" level=info msg="StopPodSandbox for \"57c35071fe58ac096b0c967850d8252b2174dadf77f55a63371cff8c41b57454\"" May 14 23:54:32.006566 containerd[1476]: time="2025-05-14T23:54:32.006537745Z" level=info msg="TearDown network for sandbox \"57c35071fe58ac096b0c967850d8252b2174dadf77f55a63371cff8c41b57454\" successfully" May 14 23:54:32.006566 containerd[1476]: time="2025-05-14T23:54:32.006556761Z" level=info msg="StopPodSandbox for \"57c35071fe58ac096b0c967850d8252b2174dadf77f55a63371cff8c41b57454\" returns successfully" May 14 23:54:32.007002 containerd[1476]: time="2025-05-14T23:54:32.006967892Z" level=info msg="RemovePodSandbox for \"57c35071fe58ac096b0c967850d8252b2174dadf77f55a63371cff8c41b57454\"" May 14 23:54:32.007035 containerd[1476]: time="2025-05-14T23:54:32.007017948Z" level=info msg="Forcibly stopping sandbox \"57c35071fe58ac096b0c967850d8252b2174dadf77f55a63371cff8c41b57454\"" May 14 23:54:32.007187 containerd[1476]: time="2025-05-14T23:54:32.007123708Z" level=info msg="TearDown network for sandbox \"57c35071fe58ac096b0c967850d8252b2174dadf77f55a63371cff8c41b57454\" successfully" May 14 23:54:32.019501 containerd[1476]: time="2025-05-14T23:54:32.019359290Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"57c35071fe58ac096b0c967850d8252b2174dadf77f55a63371cff8c41b57454\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 14 23:54:32.019501 containerd[1476]: time="2025-05-14T23:54:32.019484177Z" level=info msg="RemovePodSandbox \"57c35071fe58ac096b0c967850d8252b2174dadf77f55a63371cff8c41b57454\" returns successfully" May 14 23:54:35.353572 systemd[1]: Started sshd@16-10.0.0.25:22-10.0.0.1:57184.service - OpenSSH per-connection server daemon (10.0.0.1:57184). May 14 23:54:35.418837 sshd[5617]: Accepted publickey for core from 10.0.0.1 port 57184 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 14 23:54:35.420844 sshd-session[5617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:35.425696 systemd-logind[1460]: New session 17 of user core. May 14 23:54:35.433539 systemd[1]: Started session-17.scope - Session 17 of User core. May 14 23:54:35.563955 sshd[5619]: Connection closed by 10.0.0.1 port 57184 May 14 23:54:35.564624 sshd-session[5617]: pam_unix(sshd:session): session closed for user core May 14 23:54:35.579133 systemd[1]: sshd@16-10.0.0.25:22-10.0.0.1:57184.service: Deactivated successfully. May 14 23:54:35.581998 systemd[1]: session-17.scope: Deactivated successfully. May 14 23:54:35.584262 systemd-logind[1460]: Session 17 logged out. Waiting for processes to exit. May 14 23:54:35.590961 systemd[1]: Started sshd@17-10.0.0.25:22-10.0.0.1:57190.service - OpenSSH per-connection server daemon (10.0.0.1:57190). May 14 23:54:35.592666 systemd-logind[1460]: Removed session 17. May 14 23:54:35.633445 sshd[5631]: Accepted publickey for core from 10.0.0.1 port 57190 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 14 23:54:35.635310 sshd-session[5631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:35.640937 systemd-logind[1460]: New session 18 of user core. 
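Editor's note: the long run of containerd entries above (StopPodSandbox, TearDown network, "Forcibly stopping sandbox", RemovePodSandbox, plus the recurring "not found ... Sending the event with nil podSandboxStatus" warning) is most likely kubelet's periodic garbage collection of old pod sandboxes over the CRI; the warning appears because the sandbox metadata is already gone when containerd tries to attach a status to the removal event. As a hedged illustration only, the sketch below shows what the same cleanup looks like when driven by hand with crictl. Nothing in it is taken from this log: the assumption is that crictl is installed on the node, its runtime endpoint points at this containerd instance, and the upstream `pods`, `stopp`, and `rmp` subcommands are available.

```python
#!/usr/bin/env python3
"""Minimal sketch: remove stopped (NotReady) pod sandboxes via crictl.

Assumptions (not taken from the log above): crictl is installed, its
runtime endpoint is configured for this node's containerd socket, and the
`pods`, `stopp`, and `rmp` subcommands behave as documented upstream.
"""
import subprocess


def stopped_sandbox_ids() -> list[str]:
    # `-q` prints only sandbox IDs; `--state NotReady` filters to stopped pods.
    out = subprocess.run(
        ["crictl", "pods", "-q", "--state", "NotReady"],
        check=True, capture_output=True, text=True,
    ).stdout
    return [line for line in out.splitlines() if line.strip()]


def remove_sandbox(sandbox_id: str) -> None:
    # Mirror the sequence visible in the log: stop the sandbox, then remove it.
    subprocess.run(["crictl", "stopp", sandbox_id], check=False)
    subprocess.run(["crictl", "rmp", sandbox_id], check=False)


if __name__ == "__main__":
    for sid in stopped_sandbox_ids():
        print(f"removing stopped sandbox {sid}")
        remove_sandbox(sid)
```

In the log itself this is done automatically; the sketch only maps the RemovePodSandbox entries above onto their CRI-level equivalents.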
May 14 23:54:35.651623 systemd[1]: Started session-18.scope - Session 18 of User core. May 14 23:54:36.033229 sshd[5634]: Connection closed by 10.0.0.1 port 57190 May 14 23:54:36.034257 sshd-session[5631]: pam_unix(sshd:session): session closed for user core May 14 23:54:36.044992 systemd[1]: sshd@17-10.0.0.25:22-10.0.0.1:57190.service: Deactivated successfully. May 14 23:54:36.047727 systemd[1]: session-18.scope: Deactivated successfully. May 14 23:54:36.048639 systemd-logind[1460]: Session 18 logged out. Waiting for processes to exit. May 14 23:54:36.054889 systemd[1]: Started sshd@18-10.0.0.25:22-10.0.0.1:57194.service - OpenSSH per-connection server daemon (10.0.0.1:57194). May 14 23:54:36.055701 systemd-logind[1460]: Removed session 18. May 14 23:54:36.106603 sshd[5645]: Accepted publickey for core from 10.0.0.1 port 57194 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 14 23:54:36.109094 sshd-session[5645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:36.115482 systemd-logind[1460]: New session 19 of user core. May 14 23:54:36.124662 systemd[1]: Started session-19.scope - Session 19 of User core. May 14 23:54:37.055821 sshd[5648]: Connection closed by 10.0.0.1 port 57194 May 14 23:54:37.057474 sshd-session[5645]: pam_unix(sshd:session): session closed for user core May 14 23:54:37.070071 systemd[1]: sshd@18-10.0.0.25:22-10.0.0.1:57194.service: Deactivated successfully. May 14 23:54:37.075156 systemd[1]: session-19.scope: Deactivated successfully. May 14 23:54:37.077223 systemd-logind[1460]: Session 19 logged out. Waiting for processes to exit. May 14 23:54:37.086888 systemd[1]: Started sshd@19-10.0.0.25:22-10.0.0.1:57206.service - OpenSSH per-connection server daemon (10.0.0.1:57206). May 14 23:54:37.088921 systemd-logind[1460]: Removed session 19. May 14 23:54:37.133046 sshd[5667]: Accepted publickey for core from 10.0.0.1 port 57206 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 14 23:54:37.135725 sshd-session[5667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:37.142490 systemd-logind[1460]: New session 20 of user core. May 14 23:54:37.152667 systemd[1]: Started session-20.scope - Session 20 of User core. May 14 23:54:37.491790 sshd[5670]: Connection closed by 10.0.0.1 port 57206 May 14 23:54:37.492218 sshd-session[5667]: pam_unix(sshd:session): session closed for user core May 14 23:54:37.506281 systemd[1]: sshd@19-10.0.0.25:22-10.0.0.1:57206.service: Deactivated successfully. May 14 23:54:37.508620 systemd[1]: session-20.scope: Deactivated successfully. May 14 23:54:37.510466 systemd-logind[1460]: Session 20 logged out. Waiting for processes to exit. May 14 23:54:37.517836 systemd[1]: Started sshd@20-10.0.0.25:22-10.0.0.1:57208.service - OpenSSH per-connection server daemon (10.0.0.1:57208). May 14 23:54:37.519147 systemd-logind[1460]: Removed session 20. May 14 23:54:37.562702 sshd[5681]: Accepted publickey for core from 10.0.0.1 port 57208 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 14 23:54:37.565463 sshd-session[5681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:37.571911 systemd-logind[1460]: New session 21 of user core. May 14 23:54:37.585746 systemd[1]: Started session-21.scope - Session 21 of User core. 
May 14 23:54:37.713662 sshd[5684]: Connection closed by 10.0.0.1 port 57208 May 14 23:54:37.714048 sshd-session[5681]: pam_unix(sshd:session): session closed for user core May 14 23:54:37.718078 systemd[1]: sshd@20-10.0.0.25:22-10.0.0.1:57208.service: Deactivated successfully. May 14 23:54:37.720348 systemd[1]: session-21.scope: Deactivated successfully. May 14 23:54:37.721346 systemd-logind[1460]: Session 21 logged out. Waiting for processes to exit. May 14 23:54:37.722818 systemd-logind[1460]: Removed session 21. May 14 23:54:39.763245 kubelet[2635]: I0514 23:54:39.763189 2635 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 23:54:42.728860 systemd[1]: Started sshd@21-10.0.0.25:22-10.0.0.1:57212.service - OpenSSH per-connection server daemon (10.0.0.1:57212). May 14 23:54:42.795989 sshd[5701]: Accepted publickey for core from 10.0.0.1 port 57212 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 14 23:54:42.798465 sshd-session[5701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:42.805863 systemd-logind[1460]: New session 22 of user core. May 14 23:54:42.811754 systemd[1]: Started session-22.scope - Session 22 of User core. May 14 23:54:42.956319 sshd[5703]: Connection closed by 10.0.0.1 port 57212 May 14 23:54:42.956843 sshd-session[5701]: pam_unix(sshd:session): session closed for user core May 14 23:54:42.962576 systemd[1]: sshd@21-10.0.0.25:22-10.0.0.1:57212.service: Deactivated successfully. May 14 23:54:42.965158 systemd[1]: session-22.scope: Deactivated successfully. May 14 23:54:42.966079 systemd-logind[1460]: Session 22 logged out. Waiting for processes to exit. May 14 23:54:42.967159 systemd-logind[1460]: Removed session 22. May 14 23:54:47.975838 systemd[1]: Started sshd@22-10.0.0.25:22-10.0.0.1:35544.service - OpenSSH per-connection server daemon (10.0.0.1:35544). May 14 23:54:48.034259 sshd[5723]: Accepted publickey for core from 10.0.0.1 port 35544 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 14 23:54:48.036117 sshd-session[5723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:48.042052 systemd-logind[1460]: New session 23 of user core. May 14 23:54:48.051562 systemd[1]: Started session-23.scope - Session 23 of User core. May 14 23:54:48.177745 sshd[5725]: Connection closed by 10.0.0.1 port 35544 May 14 23:54:48.178358 sshd-session[5723]: pam_unix(sshd:session): session closed for user core May 14 23:54:48.182834 systemd[1]: sshd@22-10.0.0.25:22-10.0.0.1:35544.service: Deactivated successfully. May 14 23:54:48.185610 systemd[1]: session-23.scope: Deactivated successfully. May 14 23:54:48.186401 systemd-logind[1460]: Session 23 logged out. Waiting for processes to exit. May 14 23:54:48.187410 systemd-logind[1460]: Removed session 23. May 14 23:54:52.697053 systemd[1]: run-containerd-runc-k8s.io-099df836a3ff2566b361d43c20bf9149f418f115e201fb44f86f5ddeed8841fd-runc.KirBC6.mount: Deactivated successfully. May 14 23:54:53.136579 kernel: hrtimer: interrupt took 8931400 ns May 14 23:54:53.213439 systemd[1]: Started sshd@23-10.0.0.25:22-10.0.0.1:35556.service - OpenSSH per-connection server daemon (10.0.0.1:35556). 
May 14 23:54:53.302283 sshd[5778]: Accepted publickey for core from 10.0.0.1 port 35556 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 14 23:54:53.303387 sshd-session[5778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:53.325522 systemd-logind[1460]: New session 24 of user core. May 14 23:54:53.332035 systemd[1]: Started session-24.scope - Session 24 of User core. May 14 23:54:53.622877 sshd[5780]: Connection closed by 10.0.0.1 port 35556 May 14 23:54:53.624446 sshd-session[5778]: pam_unix(sshd:session): session closed for user core May 14 23:54:53.636454 systemd[1]: sshd@23-10.0.0.25:22-10.0.0.1:35556.service: Deactivated successfully. May 14 23:54:53.645888 systemd[1]: session-24.scope: Deactivated successfully. May 14 23:54:53.651321 systemd-logind[1460]: Session 24 logged out. Waiting for processes to exit. May 14 23:54:53.653057 systemd-logind[1460]: Removed session 24. May 14 23:54:58.687928 systemd[1]: Started sshd@24-10.0.0.25:22-10.0.0.1:51714.service - OpenSSH per-connection server daemon (10.0.0.1:51714). May 14 23:54:58.758158 sshd[5795]: Accepted publickey for core from 10.0.0.1 port 51714 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 14 23:54:58.759899 sshd-session[5795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:58.789270 systemd-logind[1460]: New session 25 of user core. May 14 23:54:58.808922 systemd[1]: Started session-25.scope - Session 25 of User core. May 14 23:54:59.007655 sshd[5797]: Connection closed by 10.0.0.1 port 51714 May 14 23:54:59.007772 sshd-session[5795]: pam_unix(sshd:session): session closed for user core May 14 23:54:59.014923 systemd[1]: sshd@24-10.0.0.25:22-10.0.0.1:51714.service: Deactivated successfully. May 14 23:54:59.021612 systemd[1]: session-25.scope: Deactivated successfully. May 14 23:54:59.027390 systemd-logind[1460]: Session 25 logged out. Waiting for processes to exit. May 14 23:54:59.030541 systemd-logind[1460]: Removed session 25. May 14 23:55:04.024253 systemd[1]: Started sshd@25-10.0.0.25:22-10.0.0.1:42392.service - OpenSSH per-connection server daemon (10.0.0.1:42392). May 14 23:55:04.083501 sshd[5810]: Accepted publickey for core from 10.0.0.1 port 42392 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 14 23:55:04.085174 sshd-session[5810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:55:04.089797 systemd-logind[1460]: New session 26 of user core. May 14 23:55:04.096631 systemd[1]: Started session-26.scope - Session 26 of User core. May 14 23:55:04.230855 sshd[5814]: Connection closed by 10.0.0.1 port 42392 May 14 23:55:04.231705 sshd-session[5810]: pam_unix(sshd:session): session closed for user core May 14 23:55:04.235636 systemd[1]: sshd@25-10.0.0.25:22-10.0.0.1:42392.service: Deactivated successfully. May 14 23:55:04.237871 systemd[1]: session-26.scope: Deactivated successfully. May 14 23:55:04.238645 systemd-logind[1460]: Session 26 logged out. Waiting for processes to exit. May 14 23:55:04.239669 systemd-logind[1460]: Removed session 26. May 14 23:55:09.245694 systemd[1]: Started sshd@26-10.0.0.25:22-10.0.0.1:42398.service - OpenSSH per-connection server daemon (10.0.0.1:42398). 
May 14 23:55:09.321100 sshd[5847]: Accepted publickey for core from 10.0.0.1 port 42398 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 14 23:55:09.322251 sshd-session[5847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:55:09.327650 systemd-logind[1460]: New session 27 of user core. May 14 23:55:09.338675 systemd[1]: Started session-27.scope - Session 27 of User core. May 14 23:55:09.470934 sshd[5849]: Connection closed by 10.0.0.1 port 42398 May 14 23:55:09.472739 sshd-session[5847]: pam_unix(sshd:session): session closed for user core May 14 23:55:09.476726 systemd[1]: sshd@26-10.0.0.25:22-10.0.0.1:42398.service: Deactivated successfully. May 14 23:55:09.478748 systemd[1]: session-27.scope: Deactivated successfully. May 14 23:55:09.480738 systemd-logind[1460]: Session 27 logged out. Waiting for processes to exit. May 14 23:55:09.481707 systemd-logind[1460]: Removed session 27.
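Editor's note: the tail of the journal records a series of short-lived SSH sessions for user core (sessions 16 through 27), each accepted, opened by pam_unix, and closed within seconds. As a hedged sketch, the script below pairs the "New session N of user core" and "Removed session N." systemd-logind entries from a dump in exactly this format and prints how long each session lasted. The regexes are written against the line format visible above; reading the dump from stdin and assuming the year 2025 (taken from the ISO timestamps in the containerd entries) are assumptions, not facts from the log.

```python
#!/usr/bin/env python3
"""Minimal sketch: pair systemd-logind session open/close events from a
journal dump like the one above and print each session's duration.

Assumptions: the dump is fed on stdin, with one or more entries per line in
the "May 14 23:54:30.130092 systemd-logind[1460]: New session 16 of user
core." format seen above; only sessions for user `core` are of interest.
"""
import re
import sys
from datetime import datetime

# The journal timestamps carry no year; 2025 comes from the ISO timestamps
# embedded in the containerd entries above.
TS = r"(May \d{2} \d{2}:\d{2}:\d{2}\.\d+)"
NEW = re.compile(TS + r" systemd-logind\[\d+\]: New session (\d+) of user core")
GONE = re.compile(TS + r" systemd-logind\[\d+\]: Removed session (\d+)\.")


def parse_ts(raw: str) -> datetime:
    return datetime.strptime(f"2025 {raw}", "%Y %b %d %H:%M:%S.%f")


def main() -> None:
    text = sys.stdin.read()
    opened: dict[str, datetime] = {}
    for match in NEW.finditer(text):
        opened[match.group(2)] = parse_ts(match.group(1))
    for match in GONE.finditer(text):
        sid = match.group(2)
        if sid in opened:
            duration = parse_ts(match.group(1)) - opened[sid]
            print(f"session {sid}: {duration.total_seconds():.1f}s")


if __name__ == "__main__":
    main()
```

Run against the entries above, this would report roughly ten sessions, each lasting on the order of a few hundred milliseconds to a few seconds.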