May 8 00:07:45.059105 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed May 7 22:19:27 -00 2025
May 8 00:07:45.059129 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979
May 8 00:07:45.059141 kernel: BIOS-provided physical RAM map:
May 8 00:07:45.059149 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 8 00:07:45.059155 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 8 00:07:45.059162 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 8 00:07:45.059169 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 8 00:07:45.059176 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 8 00:07:45.059183 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 8 00:07:45.059192 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 8 00:07:45.059199 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 8 00:07:45.059206 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 8 00:07:45.059216 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 8 00:07:45.059223 kernel: NX (Execute Disable) protection: active
May 8 00:07:45.059231 kernel: APIC: Static calls initialized
May 8 00:07:45.059244 kernel: SMBIOS 2.8 present.
May 8 00:07:45.059252 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 8 00:07:45.059259 kernel: Hypervisor detected: KVM
May 8 00:07:45.059266 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 8 00:07:45.059273 kernel: kvm-clock: using sched offset of 3222428534 cycles
May 8 00:07:45.059280 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 8 00:07:45.059288 kernel: tsc: Detected 2794.748 MHz processor
May 8 00:07:45.059296 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 8 00:07:45.059303 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 8 00:07:45.059311 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 8 00:07:45.059321 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 8 00:07:45.059340 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 8 00:07:45.059351 kernel: Using GB pages for direct mapping
May 8 00:07:45.059359 kernel: ACPI: Early table checksum verification disabled
May 8 00:07:45.059366 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 8 00:07:45.059373 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:07:45.059381 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:07:45.059388 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:07:45.059395 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 8 00:07:45.059407 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:07:45.059415 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:07:45.059422 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:07:45.059429 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001)
May 8 00:07:45.059437 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
May 8 00:07:45.059444 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
May 8 00:07:45.059455 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 8 00:07:45.059465 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
May 8 00:07:45.059477 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
May 8 00:07:45.059492 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
May 8 00:07:45.059500 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
May 8 00:07:45.059510 kernel: No NUMA configuration found
May 8 00:07:45.059518 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 8 00:07:45.059526 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
May 8 00:07:45.059537 kernel: Zone ranges:
May 8 00:07:45.059545 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 8 00:07:45.059576 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
May 8 00:07:45.059584 kernel: Normal empty
May 8 00:07:45.059592 kernel: Movable zone start for each node
May 8 00:07:45.059600 kernel: Early memory node ranges
May 8 00:07:45.059607 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 8 00:07:45.059617 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 8 00:07:45.059625 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 8 00:07:45.059636 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 8 00:07:45.059660 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 8 00:07:45.059675 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 8 00:07:45.059688 kernel: ACPI: PM-Timer IO Port: 0x608
May 8 00:07:45.059696 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 8 00:07:45.059703 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 8 00:07:45.059711 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 8 00:07:45.059718 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 8 00:07:45.059726 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 8 00:07:45.059737 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 8 00:07:45.059745 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 8 00:07:45.059752 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 8 00:07:45.059760 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 8 00:07:45.059767 kernel: TSC deadline timer available
May 8 00:07:45.059775 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 8 00:07:45.059783 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 8 00:07:45.059790 kernel: kvm-guest: KVM setup pv remote TLB flush
May 8 00:07:45.059800 kernel: kvm-guest: setup PV sched yield
May 8 00:07:45.059808 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 8 00:07:45.059818 kernel: Booting paravirtualized kernel on KVM
May 8 00:07:45.059826 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 8 00:07:45.059834 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 8 00:07:45.059842 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
May 8 00:07:45.059849 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
May 8 00:07:45.059856 kernel: pcpu-alloc: [0] 0 1 2 3
May 8 00:07:45.059864 kernel: kvm-guest: PV spinlocks enabled
May 8 00:07:45.059871 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 8 00:07:45.059880 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979
May 8 00:07:45.059891 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 8 00:07:45.059898 kernel: random: crng init done
May 8 00:07:45.059906 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 8 00:07:45.059914 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 00:07:45.059921 kernel: Fallback order for Node 0: 0
May 8 00:07:45.059929 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
May 8 00:07:45.059936 kernel: Policy zone: DMA32
May 8 00:07:45.059944 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 8 00:07:45.059954 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 138948K reserved, 0K cma-reserved)
May 8 00:07:45.059962 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 8 00:07:45.059969 kernel: ftrace: allocating 37918 entries in 149 pages
May 8 00:07:45.059977 kernel: ftrace: allocated 149 pages with 4 groups
May 8 00:07:45.059984 kernel: Dynamic Preempt: voluntary
May 8 00:07:45.059992 kernel: rcu: Preemptible hierarchical RCU implementation.
May 8 00:07:45.060000 kernel: rcu: RCU event tracing is enabled.
May 8 00:07:45.060008 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 8 00:07:45.060016 kernel: Trampoline variant of Tasks RCU enabled.
May 8 00:07:45.060026 kernel: Rude variant of Tasks RCU enabled.
May 8 00:07:45.060034 kernel: Tracing variant of Tasks RCU enabled.
May 8 00:07:45.060041 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 8 00:07:45.060051 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 8 00:07:45.060059 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 8 00:07:45.060067 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 8 00:07:45.060074 kernel: Console: colour VGA+ 80x25
May 8 00:07:45.060082 kernel: printk: console [ttyS0] enabled
May 8 00:07:45.060089 kernel: ACPI: Core revision 20230628
May 8 00:07:45.060100 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 8 00:07:45.060107 kernel: APIC: Switch to symmetric I/O mode setup
May 8 00:07:45.060115 kernel: x2apic enabled
May 8 00:07:45.060122 kernel: APIC: Switched APIC routing to: physical x2apic
May 8 00:07:45.060130 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 8 00:07:45.060138 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 8 00:07:45.060146 kernel: kvm-guest: setup PV IPIs
May 8 00:07:45.060164 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 8 00:07:45.060172 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 8 00:07:45.060180 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 8 00:07:45.060188 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 8 00:07:45.060195 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 8 00:07:45.060206 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 8 00:07:45.060214 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 8 00:07:45.060221 kernel: Spectre V2 : Mitigation: Retpolines
May 8 00:07:45.060230 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 8 00:07:45.060240 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 8 00:07:45.060248 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 8 00:07:45.060258 kernel: RETBleed: Mitigation: untrained return thunk
May 8 00:07:45.060266 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 8 00:07:45.060274 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 8 00:07:45.060282 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 8 00:07:45.060291 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 8 00:07:45.060299 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 8 00:07:45.060307 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 8 00:07:45.060317 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 8 00:07:45.060325 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 8 00:07:45.060333 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 8 00:07:45.060341 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 8 00:07:45.060349 kernel: Freeing SMP alternatives memory: 32K
May 8 00:07:45.060357 kernel: pid_max: default: 32768 minimum: 301
May 8 00:07:45.060364 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 8 00:07:45.060372 kernel: landlock: Up and running.
May 8 00:07:45.060380 kernel: SELinux: Initializing.
May 8 00:07:45.060391 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:07:45.060399 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:07:45.060407 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 8 00:07:45.060415 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 00:07:45.060423 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 00:07:45.060431 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 00:07:45.060439 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 8 00:07:45.060449 kernel: ... version: 0
May 8 00:07:45.060460 kernel: ... bit width: 48
May 8 00:07:45.060467 kernel: ... generic registers: 6
May 8 00:07:45.060475 kernel: ... value mask: 0000ffffffffffff
May 8 00:07:45.060483 kernel: ... max period: 00007fffffffffff
May 8 00:07:45.060491 kernel: ... fixed-purpose events: 0
May 8 00:07:45.060499 kernel: ... event mask: 000000000000003f
May 8 00:07:45.060507 kernel: signal: max sigframe size: 1776
May 8 00:07:45.060514 kernel: rcu: Hierarchical SRCU implementation.
May 8 00:07:45.060522 kernel: rcu: Max phase no-delay instances is 400.
May 8 00:07:45.060530 kernel: smp: Bringing up secondary CPUs ...
May 8 00:07:45.060541 kernel: smpboot: x86: Booting SMP configuration:
May 8 00:07:45.060548 kernel: .... node #0, CPUs: #1 #2 #3
May 8 00:07:45.060569 kernel: smp: Brought up 1 node, 4 CPUs
May 8 00:07:45.060577 kernel: smpboot: Max logical packages: 1
May 8 00:07:45.060585 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 8 00:07:45.060593 kernel: devtmpfs: initialized
May 8 00:07:45.060601 kernel: x86/mm: Memory block size: 128MB
May 8 00:07:45.060609 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 8 00:07:45.060616 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 8 00:07:45.060628 kernel: pinctrl core: initialized pinctrl subsystem
May 8 00:07:45.060635 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 8 00:07:45.060643 kernel: audit: initializing netlink subsys (disabled)
May 8 00:07:45.060651 kernel: audit: type=2000 audit(1746662863.921:1): state=initialized audit_enabled=0 res=1
May 8 00:07:45.060665 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 8 00:07:45.060681 kernel: thermal_sys: Registered thermal governor 'user_space'
May 8 00:07:45.060689 kernel: cpuidle: using governor menu
May 8 00:07:45.060697 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 8 00:07:45.060705 kernel: dca service started, version 1.12.1
May 8 00:07:45.060717 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 8 00:07:45.060725 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 8 00:07:45.060733 kernel: PCI: Using configuration type 1 for base access
May 8 00:07:45.060747 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 8 00:07:45.060755 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 8 00:07:45.060763 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 8 00:07:45.060771 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 8 00:07:45.060779 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 8 00:07:45.060789 kernel: ACPI: Added _OSI(Module Device)
May 8 00:07:45.060800 kernel: ACPI: Added _OSI(Processor Device)
May 8 00:07:45.060808 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 8 00:07:45.060816 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 8 00:07:45.060824 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 8 00:07:45.060831 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 8 00:07:45.060839 kernel: ACPI: Interpreter enabled
May 8 00:07:45.060847 kernel: ACPI: PM: (supports S0 S3 S5)
May 8 00:07:45.060855 kernel: ACPI: Using IOAPIC for interrupt routing
May 8 00:07:45.060863 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 8 00:07:45.060873 kernel: PCI: Using E820 reservations for host bridge windows
May 8 00:07:45.060881 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 8 00:07:45.060889 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 8 00:07:45.061150 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:07:45.061361 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 8 00:07:45.061696 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 8 00:07:45.061721 kernel: PCI host bridge to bus 0000:00
May 8 00:07:45.061947 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 8 00:07:45.062146 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 8 00:07:45.062311 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 8 00:07:45.062470 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 8 00:07:45.062650 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 8 00:07:45.062823 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 8 00:07:45.063065 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 8 00:07:45.063463 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 8 00:07:45.063698 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 8 00:07:45.063892 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
May 8 00:07:45.064086 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
May 8 00:07:45.064381 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
May 8 00:07:45.064752 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 8 00:07:45.065013 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 8 00:07:45.065168 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
May 8 00:07:45.065354 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
May 8 00:07:45.065634 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
May 8 00:07:45.065802 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 8 00:07:45.065939 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
May 8 00:07:45.066071 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
May 8 00:07:45.066211 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
May 8 00:07:45.066360 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 8 00:07:45.066494 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
May 8 00:07:45.066661 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
May 8 00:07:45.066819 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
May 8 00:07:45.067017 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
May 8 00:07:45.067188 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 8 00:07:45.067328 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 8 00:07:45.067474 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 8 00:07:45.067636 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
May 8 00:07:45.067783 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
May 8 00:07:45.067949 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 8 00:07:45.068084 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 8 00:07:45.068095 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 8 00:07:45.068109 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 8 00:07:45.068117 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 8 00:07:45.068125 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 8 00:07:45.068133 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 8 00:07:45.068141 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 8 00:07:45.068149 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 8 00:07:45.068157 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 8 00:07:45.068165 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 8 00:07:45.068173 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 8 00:07:45.068184 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 8 00:07:45.068193 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 8 00:07:45.068206 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 8 00:07:45.068223 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 8 00:07:45.068241 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 8 00:07:45.068253 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 8 00:07:45.068261 kernel: iommu: Default domain type: Translated
May 8 00:07:45.068269 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 8 00:07:45.068277 kernel: PCI: Using ACPI for IRQ routing
May 8 00:07:45.068310 kernel: PCI: pci_cache_line_size set to 64 bytes
May 8 00:07:45.068330 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 8 00:07:45.068338 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 8 00:07:45.068496 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 8 00:07:45.068703 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 8 00:07:45.068865 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 8 00:07:45.068879 kernel: vgaarb: loaded
May 8 00:07:45.068887 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 8 00:07:45.068901 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 8 00:07:45.068910 kernel: clocksource: Switched to clocksource kvm-clock
May 8 00:07:45.068918 kernel: VFS: Disk quotas dquot_6.6.0
May 8 00:07:45.068927 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 8 00:07:45.068935 kernel: pnp: PnP ACPI init
May 8 00:07:45.069095 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 8 00:07:45.069109 kernel: pnp: PnP ACPI: found 6 devices
May 8 00:07:45.069117 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 8 00:07:45.069128 kernel: NET: Registered PF_INET protocol family
May 8 00:07:45.069139 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 8 00:07:45.069149 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 8 00:07:45.069159 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 8 00:07:45.069170 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 8 00:07:45.069180 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 8 00:07:45.069192 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 8 00:07:45.069202 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:07:45.069213 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:07:45.069228 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 8 00:07:45.069238 kernel: NET: Registered PF_XDP protocol family
May 8 00:07:45.069391 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 8 00:07:45.069529 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 8 00:07:45.069677 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 8 00:07:45.069802 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 8 00:07:45.069924 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 8 00:07:45.070046 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 8 00:07:45.070062 kernel: PCI: CLS 0 bytes, default 64
May 8 00:07:45.070070 kernel: Initialise system trusted keyrings
May 8 00:07:45.070078 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 8 00:07:45.070087 kernel: Key type asymmetric registered
May 8 00:07:45.070094 kernel: Asymmetric key parser 'x509' registered
May 8 00:07:45.070102 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 8 00:07:45.070110 kernel: io scheduler mq-deadline registered
May 8 00:07:45.070118 kernel: io scheduler kyber registered
May 8 00:07:45.070126 kernel: io scheduler bfq registered
May 8 00:07:45.070134 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 8 00:07:45.070145 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 8 00:07:45.070154 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 8 00:07:45.070162 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 8 00:07:45.070170 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 00:07:45.070178 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 8 00:07:45.070187 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 8 00:07:45.070195 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 8 00:07:45.070203 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 8 00:07:45.070349 kernel: rtc_cmos 00:04: RTC can wake from S4
May 8 00:07:45.070366 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 8 00:07:45.070497 kernel: rtc_cmos 00:04: registered as rtc0
May 8 00:07:45.070640 kernel: rtc_cmos 00:04: setting system clock to 2025-05-08T00:07:44 UTC (1746662864)
May 8 00:07:45.070779 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 8 00:07:45.070790 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 8 00:07:45.070798 kernel: NET: Registered PF_INET6 protocol family
May 8 00:07:45.070806 kernel: Segment Routing with IPv6
May 8 00:07:45.070819 kernel: In-situ OAM (IOAM) with IPv6
May 8 00:07:45.070827 kernel: NET: Registered PF_PACKET protocol family
May 8 00:07:45.070835 kernel: Key type dns_resolver registered
May 8 00:07:45.070843 kernel: IPI shorthand broadcast: enabled
May 8 00:07:45.070851 kernel: sched_clock: Marking stable (853002868, 215546293)->(1357506294, -288957133)
May 8 00:07:45.070859 kernel: registered taskstats version 1
May 8 00:07:45.070867 kernel: Loading compiled-in X.509 certificates
May 8 00:07:45.070876 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: dac8423f6f9fa2fb5f636925d45d7c2572b3a9b6'
May 8 00:07:45.070883 kernel: Key type .fscrypt registered
May 8 00:07:45.070891 kernel: Key type fscrypt-provisioning registered
May 8 00:07:45.070902 kernel: ima: No TPM chip found, activating TPM-bypass!
May 8 00:07:45.070910 kernel: ima: Allocated hash algorithm: sha1
May 8 00:07:45.070918 kernel: ima: No architecture policies found
May 8 00:07:45.070926 kernel: clk: Disabling unused clocks
May 8 00:07:45.070934 kernel: Freeing unused kernel image (initmem) memory: 43484K
May 8 00:07:45.070942 kernel: Write protecting the kernel read-only data: 38912k
May 8 00:07:45.070950 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K
May 8 00:07:45.070958 kernel: Run /init as init process
May 8 00:07:45.070969 kernel: with arguments:
May 8 00:07:45.070977 kernel: /init
May 8 00:07:45.070984 kernel: with environment:
May 8 00:07:45.070992 kernel: HOME=/
May 8 00:07:45.071001 kernel: TERM=linux
May 8 00:07:45.071012 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 8 00:07:45.071025 systemd[1]: Successfully made /usr/ read-only.
May 8 00:07:45.071040 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 8 00:07:45.071056 systemd[1]: Detected virtualization kvm.
May 8 00:07:45.071065 systemd[1]: Detected architecture x86-64.
May 8 00:07:45.071074 systemd[1]: Running in initrd.
May 8 00:07:45.071086 systemd[1]: No hostname configured, using default hostname.
May 8 00:07:45.071098 systemd[1]: Hostname set to .
May 8 00:07:45.071107 systemd[1]: Initializing machine ID from VM UUID.
May 8 00:07:45.071116 systemd[1]: Queued start job for default target initrd.target.
May 8 00:07:45.071124 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:07:45.071137 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:07:45.071159 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 8 00:07:45.071170 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:07:45.071180 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 8 00:07:45.071190 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 8 00:07:45.071203 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 8 00:07:45.071212 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 8 00:07:45.071221 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:07:45.071230 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:07:45.071239 systemd[1]: Reached target paths.target - Path Units.
May 8 00:07:45.071247 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:07:45.071256 systemd[1]: Reached target swap.target - Swaps.
May 8 00:07:45.071265 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:07:45.071276 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:07:45.071285 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:07:45.071294 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 8 00:07:45.071303 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 8 00:07:45.071311 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:07:45.071320 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:07:45.071329 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:07:45.071338 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:07:45.071347 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 8 00:07:45.071358 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:07:45.071366 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 8 00:07:45.071375 systemd[1]: Starting systemd-fsck-usr.service...
May 8 00:07:45.071384 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:07:45.071392 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:07:45.071401 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:07:45.071410 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 8 00:07:45.071419 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:07:45.071431 systemd[1]: Finished systemd-fsck-usr.service.
May 8 00:07:45.071468 systemd-journald[194]: Collecting audit messages is disabled.
May 8 00:07:45.071494 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 00:07:45.071504 systemd-journald[194]: Journal started
May 8 00:07:45.071526 systemd-journald[194]: Runtime Journal (/run/log/journal/392a432c0fc0465d96451ed1cbfdc8d5) is 6M, max 48.4M, 42.3M free.
May 8 00:07:45.050016 systemd-modules-load[195]: Inserted module 'overlay'
May 8 00:07:45.089718 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 00:07:45.089753 kernel: Bridge firewalling registered
May 8 00:07:45.080724 systemd-modules-load[195]: Inserted module 'br_netfilter'
May 8 00:07:45.092735 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:07:45.093584 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:07:45.096096 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:07:45.098777 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:07:45.114929 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:07:45.117744 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:07:45.119143 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:07:45.123748 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:07:45.133495 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:07:45.140462 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:07:45.145314 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:07:45.157893 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 8 00:07:45.160465 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:07:45.165335 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:07:45.172980 dracut-cmdline[228]: dracut-dracut-053
May 8 00:07:45.178205 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979
May 8 00:07:45.215596 systemd-resolved[236]: Positive Trust Anchors:
May 8 00:07:45.215621 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:07:45.215661 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:07:45.219099 systemd-resolved[236]: Defaulting to hostname 'linux'.
May 8 00:07:45.220769 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:07:45.226935 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:07:45.294607 kernel: SCSI subsystem initialized
May 8 00:07:45.306593 kernel: Loading iSCSI transport class v2.0-870.
May 8 00:07:45.319626 kernel: iscsi: registered transport (tcp)
May 8 00:07:45.345607 kernel: iscsi: registered transport (qla4xxx)
May 8 00:07:45.345708 kernel: QLogic iSCSI HBA Driver
May 8 00:07:45.408268 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 8 00:07:45.422726 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 8 00:07:45.448023 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 00:07:45.448071 kernel: device-mapper: uevent: version 1.0.3
May 8 00:07:45.449121 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 8 00:07:45.493598 kernel: raid6: avx2x4 gen() 29006 MB/s
May 8 00:07:45.510602 kernel: raid6: avx2x2 gen() 30333 MB/s
May 8 00:07:45.527985 kernel: raid6: avx2x1 gen() 23500 MB/s
May 8 00:07:45.528063 kernel: raid6: using algorithm avx2x2 gen() 30333 MB/s
May 8 00:07:45.546089 kernel: raid6: .... xor() 15109 MB/s, rmw enabled
May 8 00:07:45.546198 kernel: raid6: using avx2x2 recovery algorithm
May 8 00:07:45.569610 kernel: xor: automatically using best checksumming function avx
May 8 00:07:45.725617 kernel: Btrfs loaded, zoned=no, fsverity=no
May 8 00:07:45.741019 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:07:45.804716 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:07:45.821354 systemd-udevd[418]: Using default interface naming scheme 'v255'.
May 8 00:07:45.826994 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:07:45.882716 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 8 00:07:45.899102 dracut-pre-trigger[431]: rd.md=0: removing MD RAID activation
May 8 00:07:45.934942 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:07:45.955831 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:07:46.035306 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:07:46.049573 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 8 00:07:46.065498 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 8 00:07:46.066605 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:07:46.069109 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:07:46.070348 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:07:46.085056 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 8 00:07:46.137987 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 8 00:07:46.138148 kernel: cryptd: max_cpu_qlen set to 1000
May 8 00:07:46.138161 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 8 00:07:46.138178 kernel: GPT:9289727 != 19775487
May 8 00:07:46.138189 kernel: GPT:Alternate GPT header not at the end of the disk.
May 8 00:07:46.138200 kernel: GPT:9289727 != 19775487
May 8 00:07:46.138210 kernel: GPT: Use GNU Parted to correct GPT errors.
May 8 00:07:46.138220 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:07:46.085322 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 8 00:07:46.136475 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:07:46.136710 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:07:46.170528 kernel: libata version 3.00 loaded.
May 8 00:07:46.165250 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:07:46.169577 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:07:46.169857 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:07:46.175284 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:07:46.199616 kernel: ahci 0000:00:1f.2: version 3.0
May 8 00:07:46.243001 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 8 00:07:46.243025 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 8 00:07:46.243235 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 8 00:07:46.243429 kernel: scsi host0: ahci
May 8 00:07:46.243668 kernel: AVX2 version of gcm_enc/dec engaged.
May 8 00:07:46.243700 kernel: AES CTR mode by8 optimization enabled
May 8 00:07:46.243715 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (480)
May 8 00:07:46.243731 kernel: scsi host1: ahci
May 8 00:07:46.243990 kernel: BTRFS: device fsid 1c9931ea-0995-4065-8a57-32743027822a devid 1 transid 42 /dev/vda3 scanned by (udev-worker) (475)
May 8 00:07:46.244008 kernel: scsi host2: ahci
May 8 00:07:46.244203 kernel: scsi host3: ahci
May 8 00:07:46.244393 kernel: scsi host4: ahci
May 8 00:07:46.244714 kernel: scsi host5: ahci
May 8 00:07:46.244913 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
May 8 00:07:46.244930 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
May 8 00:07:46.244945 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
May 8 00:07:46.244959 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
May 8 00:07:46.244974 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
May 8 00:07:46.244989 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
May 8 00:07:46.211976 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:07:46.213908 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 8 00:07:46.215019 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:07:46.258728 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 8 00:07:46.295655 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:07:46.313944 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 8 00:07:46.337502 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 8 00:07:46.365472 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 8 00:07:46.369158 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 8 00:07:46.384758 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 8 00:07:46.388504 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:07:46.400574 disk-uuid[560]: Primary Header is updated.
May 8 00:07:46.400574 disk-uuid[560]: Secondary Entries is updated.
May 8 00:07:46.400574 disk-uuid[560]: Secondary Header is updated.
May 8 00:07:46.406642 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:07:46.412585 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:07:46.414101 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:07:46.551587 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 8 00:07:46.551662 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 8 00:07:46.552598 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 8 00:07:46.553575 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 8 00:07:46.553593 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 8 00:07:46.554845 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 8 00:07:46.554862 kernel: ata3.00: applying bridge limits
May 8 00:07:46.555576 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 8 00:07:46.556580 kernel: ata3.00: configured for UDMA/100
May 8 00:07:46.557584 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 8 00:07:46.614595 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 8 00:07:46.628205 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 8 00:07:46.628221 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 8 00:07:47.430104 disk-uuid[561]: The operation has completed successfully.
May 8 00:07:47.431652 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:07:47.460249 systemd[1]: disk-uuid.service: Deactivated successfully.
May 8 00:07:47.460377 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 8 00:07:47.518744 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 8 00:07:47.522746 sh[596]: Success
May 8 00:07:47.536600 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 8 00:07:47.579377 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 8 00:07:47.593959 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 8 00:07:47.597533 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 8 00:07:47.611643 kernel: BTRFS info (device dm-0): first mount of filesystem 1c9931ea-0995-4065-8a57-32743027822a
May 8 00:07:47.611715 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 8 00:07:47.611727 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 8 00:07:47.613832 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 8 00:07:47.613856 kernel: BTRFS info (device dm-0): using free space tree
May 8 00:07:47.620086 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 8 00:07:47.622145 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 8 00:07:47.632913 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 8 00:07:47.635205 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 8 00:07:47.654054 kernel: BTRFS info (device vda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 00:07:47.654121 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:07:47.654133 kernel: BTRFS info (device vda6): using free space tree
May 8 00:07:47.657597 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:07:47.662617 kernel: BTRFS info (device vda6): last unmount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 00:07:47.669387 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 8 00:07:47.675853 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 8 00:07:47.737446 ignition[685]: Ignition 2.20.0
May 8 00:07:47.737459 ignition[685]: Stage: fetch-offline
May 8 00:07:47.737505 ignition[685]: no configs at "/usr/lib/ignition/base.d"
May 8 00:07:47.737516 ignition[685]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:07:47.737640 ignition[685]: parsed url from cmdline: ""
May 8 00:07:47.737644 ignition[685]: no config URL provided
May 8 00:07:47.737652 ignition[685]: reading system config file "/usr/lib/ignition/user.ign"
May 8 00:07:47.737662 ignition[685]: no config at "/usr/lib/ignition/user.ign"
May 8 00:07:47.737690 ignition[685]: op(1): [started] loading QEMU firmware config module
May 8 00:07:47.737696 ignition[685]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 8 00:07:47.747498 ignition[685]: op(1): [finished] loading QEMU firmware config module
May 8 00:07:47.780065 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:07:47.788735 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:07:47.793814 ignition[685]: parsing config with SHA512: d6004760e870dddf59b03fd592dd4163025fda4c1cfaa495e2847077310f158de2e181a2ac6c3fe39120b5a8e7bc752a099bdd4e0c6a7c188657b1a7034a0bd6
May 8 00:07:47.803029 unknown[685]: fetched base config from "system"
May 8 00:07:47.804051 unknown[685]: fetched user config from "qemu"
May 8 00:07:47.804437 ignition[685]: fetch-offline: fetch-offline passed
May 8 00:07:47.807374 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:07:47.804510 ignition[685]: Ignition finished successfully
May 8 00:07:47.823694 systemd-networkd[782]: lo: Link UP
May 8 00:07:47.823708 systemd-networkd[782]: lo: Gained carrier
May 8 00:07:47.825713 systemd-networkd[782]: Enumeration completed
May 8 00:07:47.825863 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 00:07:47.826125 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:07:47.826131 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:07:47.826982 systemd-networkd[782]: eth0: Link UP
May 8 00:07:47.826986 systemd-networkd[782]: eth0: Gained carrier
May 8 00:07:47.826995 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:07:47.828256 systemd[1]: Reached target network.target - Network.
May 8 00:07:47.830188 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 8 00:07:47.837728 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 8 00:07:47.847635 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 8 00:07:47.883720 ignition[786]: Ignition 2.20.0
May 8 00:07:47.883732 ignition[786]: Stage: kargs
May 8 00:07:47.883899 ignition[786]: no configs at "/usr/lib/ignition/base.d"
May 8 00:07:47.883912 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:07:47.884733 ignition[786]: kargs: kargs passed
May 8 00:07:47.884784 ignition[786]: Ignition finished successfully
May 8 00:07:47.888938 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 8 00:07:47.897757 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 8 00:07:47.912739 ignition[796]: Ignition 2.20.0
May 8 00:07:47.912753 ignition[796]: Stage: disks
May 8 00:07:47.912965 ignition[796]: no configs at "/usr/lib/ignition/base.d"
May 8 00:07:47.912980 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:07:47.916379 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 8 00:07:47.913988 ignition[796]: disks: disks passed
May 8 00:07:47.918470 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 8 00:07:47.914039 ignition[796]: Ignition finished successfully
May 8 00:07:47.920642 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 8 00:07:47.922843 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 00:07:47.925198 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 00:07:47.925921 systemd[1]: Reached target basic.target - Basic System.
May 8 00:07:47.934817 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 8 00:07:47.948591 systemd-resolved[236]: Detected conflict on linux IN A 10.0.0.79
May 8 00:07:47.948611 systemd-resolved[236]: Hostname conflict, changing published hostname from 'linux' to 'linux5'.
May 8 00:07:47.950143 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 8 00:07:47.957548 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 8 00:07:47.962818 systemd[1]: Mounting sysroot.mount - /sysroot...
May 8 00:07:48.052593 kernel: EXT4-fs (vda9): mounted filesystem 369e2962-701e-4244-8c1c-27f8fa83bc64 r/w with ordered data mode. Quota mode: none.
May 8 00:07:48.053757 systemd[1]: Mounted sysroot.mount - /sysroot.
May 8 00:07:48.056129 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 8 00:07:48.076778 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:07:48.080294 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 8 00:07:48.083250 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 8 00:07:48.083317 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 8 00:07:48.093316 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (816)
May 8 00:07:48.093345 kernel: BTRFS info (device vda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 00:07:48.093361 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:07:48.093375 kernel: BTRFS info (device vda6): using free space tree
May 8 00:07:48.085606 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:07:48.095570 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:07:48.098459 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:07:48.100489 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 8 00:07:48.118771 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 8 00:07:48.158301 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
May 8 00:07:48.163107 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
May 8 00:07:48.167437 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
May 8 00:07:48.173479 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
May 8 00:07:48.282320 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 8 00:07:48.295786 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 8 00:07:48.299937 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 8 00:07:48.306664 kernel: BTRFS info (device vda6): last unmount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 00:07:48.328594 ignition[929]: INFO : Ignition 2.20.0
May 8 00:07:48.329961 ignition[929]: INFO : Stage: mount
May 8 00:07:48.329961 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:07:48.329961 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:07:48.329961 ignition[929]: INFO : mount: mount passed
May 8 00:07:48.329961 ignition[929]: INFO : Ignition finished successfully
May 8 00:07:48.336452 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 8 00:07:48.339320 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 8 00:07:48.364851 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 8 00:07:48.610921 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 8 00:07:48.652068 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:07:48.665599 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (942)
May 8 00:07:48.668437 kernel: BTRFS info (device vda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 00:07:48.668464 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:07:48.668476 kernel: BTRFS info (device vda6): using free space tree
May 8 00:07:48.704648 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:07:48.709364 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:07:48.743808 ignition[959]: INFO : Ignition 2.20.0
May 8 00:07:48.743808 ignition[959]: INFO : Stage: files
May 8 00:07:48.745940 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:07:48.745940 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:07:48.745940 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
May 8 00:07:48.755276 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 8 00:07:48.755276 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 8 00:07:48.758415 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 8 00:07:48.760005 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 8 00:07:48.761598 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 8 00:07:48.760619 unknown[959]: wrote ssh authorized keys file for user: core
May 8 00:07:48.764156 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 8 00:07:48.764156 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 8 00:07:48.975212 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 8 00:07:49.264821 systemd-networkd[782]: eth0: Gained IPv6LL
May 8 00:07:49.361195 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 8 00:07:49.361195 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 8 00:07:49.365094 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 8 00:07:49.365094 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:07:49.368676 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:07:49.370599 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:07:49.372400 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:07:49.374151 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:07:49.375977 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:07:49.377951 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:07:49.379862 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:07:49.381676 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 8 00:07:49.384233 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 8 00:07:49.386725 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 8 00:07:49.388889 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
May 8 00:07:49.886122 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 8 00:07:50.725754 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 8 00:07:50.725754 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 8 00:07:50.730845 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:07:50.730845 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:07:50.730845 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 8 00:07:50.730845 ignition[959]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 8 00:07:50.730845 ignition[959]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:07:50.730845 ignition[959]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:07:50.730845 ignition[959]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 8 00:07:50.730845 ignition[959]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 8 00:07:50.757134 ignition[959]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:07:50.762809 ignition[959]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:07:50.764840 ignition[959]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 8 00:07:50.764840 ignition[959]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 8 00:07:50.764840 ignition[959]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 8 00:07:50.764840 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:07:50.764840 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:07:50.764840 ignition[959]: INFO : files: files passed
May 8 00:07:50.764840 ignition[959]: INFO : Ignition finished successfully
May 8 00:07:50.779086 systemd[1]: Finished ignition-files.service - Ignition (files).
May 8 00:07:50.785456 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 8 00:07:50.790514 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 8 00:07:50.795909 systemd[1]: ignition-quench.service: Deactivated successfully.
May 8 00:07:50.796067 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 8 00:07:50.804809 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory
May 8 00:07:50.810106 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:07:50.810106 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:07:50.814304 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:07:50.818716 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 00:07:50.819599 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 8 00:07:50.836833 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 8 00:07:50.870746 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 8 00:07:50.872130 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 8 00:07:50.875579 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 8 00:07:50.877707 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 8 00:07:50.879836 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 8 00:07:50.894848 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 8 00:07:50.911282 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 00:07:50.923693 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 8 00:07:50.936095 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 8 00:07:50.936445 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:07:50.939100 systemd[1]: Stopped target timers.target - Timer Units.
May 8 00:07:50.939527 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 8 00:07:50.939693 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 00:07:50.940512 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 8 00:07:50.941122 systemd[1]: Stopped target basic.target - Basic System.
May 8 00:07:50.941547 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 8 00:07:50.942103 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:07:50.942480 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 8 00:07:50.943095 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 8 00:07:50.943462 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:07:50.944045 systemd[1]: Stopped target sysinit.target - System Initialization.
May 8 00:07:50.944453 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 8 00:07:50.945046 systemd[1]: Stopped target swap.target - Swaps.
May 8 00:07:50.945398 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 8 00:07:50.945606 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:07:50.970319 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 8 00:07:50.973008 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:07:50.975426 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 8 00:07:50.977978 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:07:50.979420 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 8 00:07:50.979632 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 8 00:07:50.984756 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 8 00:07:50.984915 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:07:50.985455 systemd[1]: Stopped target paths.target - Path Units.
May 8 00:07:50.988481 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 8 00:07:50.990069 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:07:50.990975 systemd[1]: Stopped target slices.target - Slice Units.
May 8 00:07:50.991315 systemd[1]: Stopped target sockets.target - Socket Units.
May 8 00:07:50.991901 systemd[1]: iscsid.socket: Deactivated successfully.
May 8 00:07:50.992023 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:07:50.998323 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 8 00:07:50.998464 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:07:51.001058 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 8 00:07:51.001256 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 00:07:51.003027 systemd[1]: ignition-files.service: Deactivated successfully.
May 8 00:07:51.003175 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 8 00:07:51.023936 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 8 00:07:51.024377 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 8 00:07:51.024616 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:07:51.028939 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 8 00:07:51.030543 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 8 00:07:51.030916 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:07:51.032748 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 8 00:07:51.032959 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:07:51.043766 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 8 00:07:51.045078 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 8 00:07:51.098055 ignition[1013]: INFO : Ignition 2.20.0
May 8 00:07:51.098055 ignition[1013]: INFO : Stage: umount
May 8 00:07:51.100101 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:07:51.100101 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:07:51.103085 ignition[1013]: INFO : umount: umount passed
May 8 00:07:51.104068 ignition[1013]: INFO : Ignition finished successfully
May 8 00:07:51.106689 systemd[1]: ignition-mount.service: Deactivated successfully.
May 8 00:07:51.107843 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 8 00:07:51.110806 systemd[1]: Stopped target network.target - Network.
May 8 00:07:51.112786 systemd[1]: ignition-disks.service: Deactivated successfully.
May 8 00:07:51.113805 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 8 00:07:51.116220 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 8 00:07:51.116280 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 8 00:07:51.119674 systemd[1]: ignition-setup.service: Deactivated successfully.
May 8 00:07:51.119730 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 8 00:07:51.122691 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 8 00:07:51.122750 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 8 00:07:51.126523 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 8 00:07:51.129104 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 8 00:07:51.133024 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 8 00:07:51.136485 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 8 00:07:51.136724 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 8 00:07:51.141622 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 8 00:07:51.141917 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 8 00:07:51.142081 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 8 00:07:51.147315 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 8 00:07:51.148436 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 8 00:07:51.148620 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:07:51.158995 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 8 00:07:51.161133 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 8 00:07:51.162267 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:07:51.165457 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 8 00:07:51.166803 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 8 00:07:51.169764 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 8 00:07:51.170898 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 8 00:07:51.173181 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 8 00:07:51.174235 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:07:51.177115 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:07:51.181805 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 8 00:07:51.183133 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 8 00:07:51.200168 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 8 00:07:51.200455 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:07:51.203896 systemd[1]: network-cleanup.service: Deactivated successfully.
May 8 00:07:51.204050 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 8 00:07:51.208334 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 8 00:07:51.208423 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 8 00:07:51.210185 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 8 00:07:51.210244 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:07:51.212936 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 8 00:07:51.213017 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:07:51.215883 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 8 00:07:51.215954 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 8 00:07:51.218131 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:07:51.218209 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:07:51.246831 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 8 00:07:51.248244 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 8 00:07:51.248323 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:07:51.251102 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:07:51.251160 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:07:51.254521 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 8 00:07:51.254614 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 8 00:07:51.259859 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 8 00:07:51.259985 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 8 00:07:51.438900 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 8 00:07:51.439095 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 8 00:07:51.441454 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 8 00:07:51.453364 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 8 00:07:51.453507 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 8 00:07:51.470789 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 8 00:07:51.482918 systemd[1]: Switching root.
May 8 00:07:51.524703 systemd-journald[194]: Journal stopped
May 8 00:07:53.292894 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
May 8 00:07:53.292970 kernel: SELinux: policy capability network_peer_controls=1
May 8 00:07:53.292990 kernel: SELinux: policy capability open_perms=1
May 8 00:07:53.293002 kernel: SELinux: policy capability extended_socket_class=1
May 8 00:07:53.293020 kernel: SELinux: policy capability always_check_network=0
May 8 00:07:53.293038 kernel: SELinux: policy capability cgroup_seclabel=1
May 8 00:07:53.293051 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 8 00:07:53.293063 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 8 00:07:53.293074 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 8 00:07:53.293086 kernel: audit: type=1403 audit(1746662872.330:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 8 00:07:53.293099 systemd[1]: Successfully loaded SELinux policy in 54.902ms.
May 8 00:07:53.293126 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 17.386ms.
May 8 00:07:53.293140 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 8 00:07:53.293164 systemd[1]: Detected virtualization kvm.
May 8 00:07:53.293189 systemd[1]: Detected architecture x86-64.
May 8 00:07:53.293206 systemd[1]: Detected first boot.
May 8 00:07:53.293223 systemd[1]: Initializing machine ID from VM UUID.
May 8 00:07:53.293243 zram_generator::config[1059]: No configuration found.
May 8 00:07:53.293258 kernel: Guest personality initialized and is inactive
May 8 00:07:53.293270 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 8 00:07:53.293281 kernel: Initialized host personality
May 8 00:07:53.293300 kernel: NET: Registered PF_VSOCK protocol family
May 8 00:07:53.293313 systemd[1]: Populated /etc with preset unit settings.
May 8 00:07:53.293327 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 8 00:07:53.293340 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 8 00:07:53.293352 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 8 00:07:53.293371 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 8 00:07:53.293388 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 8 00:07:53.293405 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 8 00:07:53.293422 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 8 00:07:53.293477 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 8 00:07:53.293492 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 8 00:07:53.293508 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 8 00:07:53.293524 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 8 00:07:53.293538 systemd[1]: Created slice user.slice - User and Session Slice.
May 8 00:07:53.293698 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:07:53.293715 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:07:53.293731 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 8 00:07:53.293755 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 8 00:07:53.293781 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 8 00:07:53.293794 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:07:53.293807 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 8 00:07:53.293820 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:07:53.293832 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 8 00:07:53.293845 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 8 00:07:53.293860 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 8 00:07:53.293878 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 8 00:07:53.293891 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:07:53.293904 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:07:53.293917 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:07:53.293929 systemd[1]: Reached target swap.target - Swaps.
May 8 00:07:53.293942 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 8 00:07:53.293957 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 8 00:07:53.293974 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 8 00:07:53.293987 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:07:53.294006 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:07:53.294021 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:07:53.294036 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 8 00:07:53.294052 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 8 00:07:53.294072 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 8 00:07:53.294084 systemd[1]: Mounting media.mount - External Media Directory...
May 8 00:07:53.294097 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:07:53.294110 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 8 00:07:53.294122 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 8 00:07:53.294141 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 8 00:07:53.294154 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 8 00:07:53.294166 systemd[1]: Reached target machines.target - Containers.
May 8 00:07:53.294179 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 8 00:07:53.294192 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:07:53.294205 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:07:53.294218 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 8 00:07:53.294230 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:07:53.294248 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 00:07:53.294261 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:07:53.294274 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 8 00:07:53.294286 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:07:53.294299 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 8 00:07:53.294313 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 8 00:07:53.294329 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 8 00:07:53.294346 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 8 00:07:53.294365 systemd[1]: Stopped systemd-fsck-usr.service.
May 8 00:07:53.294388 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 8 00:07:53.294403 kernel: fuse: init (API version 7.39)
May 8 00:07:53.294418 kernel: loop: module loaded
May 8 00:07:53.294444 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:07:53.294462 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:07:53.294477 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 8 00:07:53.294490 kernel: ACPI: bus type drm_connector registered
May 8 00:07:53.294502 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 8 00:07:53.294515 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 8 00:07:53.294534 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:07:53.294547 systemd[1]: verity-setup.service: Deactivated successfully.
May 8 00:07:53.294574 systemd[1]: Stopped verity-setup.service.
May 8 00:07:53.294609 systemd-journald[1137]: Collecting audit messages is disabled.
May 8 00:07:53.294640 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:07:53.294654 systemd-journald[1137]: Journal started
May 8 00:07:53.294681 systemd-journald[1137]: Runtime Journal (/run/log/journal/392a432c0fc0465d96451ed1cbfdc8d5) is 6M, max 48.4M, 42.3M free.
May 8 00:07:53.022646 systemd[1]: Queued start job for default target multi-user.target.
May 8 00:07:53.035828 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 8 00:07:53.036403 systemd[1]: systemd-journald.service: Deactivated successfully.
May 8 00:07:53.299967 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:07:53.301370 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 8 00:07:53.302736 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 8 00:07:53.304208 systemd[1]: Mounted media.mount - External Media Directory.
May 8 00:07:53.305510 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 8 00:07:53.306927 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 8 00:07:53.308276 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 8 00:07:53.309943 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 8 00:07:53.311749 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:07:53.313788 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 8 00:07:53.314128 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 8 00:07:53.315940 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:07:53.316167 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:07:53.317979 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 00:07:53.318210 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 00:07:53.319876 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:07:53.320120 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:07:53.322155 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 8 00:07:53.322534 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 8 00:07:53.324496 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:07:53.324815 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:07:53.327776 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:07:53.330343 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 8 00:07:53.332294 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 8 00:07:53.334536 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 8 00:07:53.382521 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 8 00:07:53.391780 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 8 00:07:53.394769 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 8 00:07:53.396234 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 8 00:07:53.396272 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 00:07:53.398678 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 8 00:07:53.402350 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 8 00:07:53.406381 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 8 00:07:53.408278 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:07:53.410583 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 8 00:07:53.414266 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 8 00:07:53.415748 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:07:53.421812 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 8 00:07:53.423308 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 00:07:53.433976 systemd-journald[1137]: Time spent on flushing to /var/log/journal/392a432c0fc0465d96451ed1cbfdc8d5 is 14.460ms for 964 entries.
May 8 00:07:53.433976 systemd-journald[1137]: System Journal (/var/log/journal/392a432c0fc0465d96451ed1cbfdc8d5) is 8M, max 195.6M, 187.6M free.
May 8 00:07:53.723751 systemd-journald[1137]: Received client request to flush runtime journal.
May 8 00:07:53.723820 kernel: loop0: detected capacity change from 0 to 147912
May 8 00:07:53.723898 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 8 00:07:53.723994 kernel: loop1: detected capacity change from 0 to 205544
May 8 00:07:53.429724 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:07:53.432301 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 8 00:07:53.436113 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 8 00:07:53.440620 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 8 00:07:53.442030 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 8 00:07:53.443644 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 8 00:07:53.494851 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:07:53.501030 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:07:53.534829 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 8 00:07:53.546978 udevadm[1188]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 8 00:07:53.576886 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 8 00:07:53.584858 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:07:53.707232 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
May 8 00:07:53.707262 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
May 8 00:07:53.711842 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 8 00:07:53.715674 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 8 00:07:53.727815 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 8 00:07:53.730472 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 8 00:07:53.733105 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:07:53.770608 kernel: loop2: detected capacity change from 0 to 138176
May 8 00:07:53.861779 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 8 00:07:53.879580 kernel: loop3: detected capacity change from 0 to 147912
May 8 00:07:53.916584 kernel: loop4: detected capacity change from 0 to 205544
May 8 00:07:53.931603 kernel: loop5: detected capacity change from 0 to 138176
May 8 00:07:53.943488 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 8 00:07:53.945090 (sd-merge)[1203]: Merged extensions into '/usr'.
May 8 00:07:53.979426 systemd[1]: Reload requested from client PID 1179 ('systemd-sysext') (unit systemd-sysext.service)...
May 8 00:07:53.979450 systemd[1]: Reloading...
May 8 00:07:54.100580 zram_generator::config[1232]: No configuration found.
May 8 00:07:54.240456 ldconfig[1174]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 8 00:07:54.276026 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:07:54.351164 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 8 00:07:54.351418 systemd[1]: Reloading finished in 371 ms.
May 8 00:07:54.381416 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 8 00:07:54.383087 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 8 00:07:54.400499 systemd[1]: Starting ensure-sysext.service...
May 8 00:07:54.403286 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:07:54.416935 systemd[1]: Reload requested from client PID 1268 ('systemctl') (unit ensure-sysext.service)...
May 8 00:07:54.416956 systemd[1]: Reloading...
May 8 00:07:54.539366 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 8 00:07:54.539730 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 8 00:07:54.540794 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 8 00:07:54.541105 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
May 8 00:07:54.541367 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
May 8 00:07:54.547029 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot.
May 8 00:07:54.547176 systemd-tmpfiles[1269]: Skipping /boot
May 8 00:07:54.573175 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot.
May 8 00:07:54.573354 systemd-tmpfiles[1269]: Skipping /boot
May 8 00:07:54.612661 zram_generator::config[1301]: No configuration found.
May 8 00:07:54.744873 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:07:54.813546 systemd[1]: Reloading finished in 396 ms.
May 8 00:07:54.828953 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 8 00:07:54.848305 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:07:54.870854 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 8 00:07:54.873800 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 8 00:07:54.876423 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 8 00:07:54.883355 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:07:54.888208 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:07:54.892094 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 8 00:07:54.898123 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:07:54.898301 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:07:54.901963 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:07:54.907046 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:07:54.912923 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:07:54.914214 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:07:54.914710 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 8 00:07:54.919379 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 8 00:07:54.920795 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:07:54.922475 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:07:54.922755 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:07:54.924847 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:07:54.925211 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:07:54.927290 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:07:54.927528 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:07:54.933457 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 8 00:07:54.942955 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:07:54.943192 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:07:54.944051 systemd-udevd[1347]: Using default interface naming scheme 'v255'.
May 8 00:07:54.944968 augenrules[1370]: No rules
May 8 00:07:54.948898 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:07:54.951953 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:07:54.955875 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:07:54.957163 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:07:54.957276 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 8 00:07:54.959890 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 8 00:07:54.961128 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:07:54.963824 systemd[1]: audit-rules.service: Deactivated successfully.
May 8 00:07:54.964187 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 8 00:07:54.966718 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 8 00:07:54.969225 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:07:54.969541 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:07:54.972057 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:07:54.980511 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:07:54.982838 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:07:54.985787 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 8 00:07:54.988024 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:07:54.988790 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:07:54.998545 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 8 00:07:55.019456 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 8 00:07:55.031448 systemd[1]: Finished ensure-sysext.service.
May 8 00:07:55.038851 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:07:55.046825 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 8 00:07:55.048233 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:07:55.055817 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:07:55.059853 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 00:07:55.063772 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:07:55.072600 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1393)
May 8 00:07:55.069777 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:07:55.071586 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:07:55.071635 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 8 00:07:55.080097 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:07:55.089998 augenrules[1411]: /sbin/augenrules: No change
May 8 00:07:55.092844 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 8 00:07:55.095331 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 8 00:07:55.095377 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:07:55.096545 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:07:55.096922 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:07:55.099424 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 00:07:55.099951 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 00:07:55.102132 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:07:55.102475 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:07:55.111091 systemd-resolved[1341]: Positive Trust Anchors:
May 8 00:07:55.111111 systemd-resolved[1341]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:07:55.111156 systemd-resolved[1341]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:07:55.116839 systemd-resolved[1341]: Defaulting to hostname 'linux'.
May 8 00:07:55.119155 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:07:55.122858 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 8 00:07:55.142059 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:07:55.142849 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:07:55.147881 augenrules[1440]: No rules
May 8 00:07:55.160942 systemd[1]: audit-rules.service: Deactivated successfully.
May 8 00:07:55.161394 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 8 00:07:55.170109 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:07:55.171796 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:07:55.171877 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 00:07:55.179189 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 8 00:07:55.193831 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
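For reference, the sixteen `N.172.in-addr.arpa` entries in the negative trust anchor list above are the reverse-DNS zones covering the private range 172.16.0.0/12. A minimal illustrative sketch (Python, not part of systemd-resolved) that derives those zone names:

```python
# Generate the reverse-DNS zone names for the RFC 1918 range 172.16.0.0/12,
# matching the sixteen "N.172.in-addr.arpa" negative trust anchors in the log.
def rfc1918_172_reverse_zones():
    # 172.16.0.0/12 spans the sixteen /16 networks 172.16.x.x .. 172.31.x.x,
    # each of which corresponds to one in-addr.arpa zone.
    return ["%d.172.in-addr.arpa" % second_octet for second_octet in range(16, 32)]

zones = rfc1918_172_reverse_zones()
print(len(zones))            # 16
print(zones[0], zones[-1])   # 16.172.in-addr.arpa 31.172.in-addr.arpa
```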
May 8 00:07:55.197571 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 8 00:07:55.203619 kernel: ACPI: button: Power Button [PWRF]
May 8 00:07:55.211863 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 8 00:07:55.225869 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 8 00:07:55.226831 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 8 00:07:55.227042 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 8 00:07:55.222226 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 8 00:07:55.224862 systemd[1]: Reached target time-set.target - System Time Set.
May 8 00:07:55.229515 systemd-networkd[1423]: lo: Link UP
May 8 00:07:55.229532 systemd-networkd[1423]: lo: Gained carrier
May 8 00:07:55.231955 systemd-networkd[1423]: Enumeration completed
May 8 00:07:55.232033 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 00:07:55.233410 systemd[1]: Reached target network.target - Network.
May 8 00:07:55.233928 systemd-networkd[1423]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:07:55.233932 systemd-networkd[1423]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:07:55.236313 systemd-networkd[1423]: eth0: Link UP
May 8 00:07:55.236326 systemd-networkd[1423]: eth0: Gained carrier
May 8 00:07:55.236343 systemd-networkd[1423]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:07:55.241592 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 8 00:07:55.246802 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 8 00:07:55.254759 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 8 00:07:55.260671 systemd-networkd[1423]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 8 00:07:55.261633 systemd-timesyncd[1427]: Network configuration changed, trying to establish connection.
May 8 00:07:55.879051 systemd-timesyncd[1427]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 8 00:07:55.879113 systemd-timesyncd[1427]: Initial clock synchronization to Thu 2025-05-08 00:07:55.878954 UTC.
May 8 00:07:55.879157 systemd-resolved[1341]: Clock change detected. Flushing caches.
May 8 00:07:55.951624 kernel: mousedev: PS/2 mouse device common for all mice
May 8 00:07:55.968095 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:07:55.969721 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 8 00:07:55.980781 kernel: kvm_amd: TSC scaling supported
May 8 00:07:55.980831 kernel: kvm_amd: Nested Virtualization enabled
May 8 00:07:55.980851 kernel: kvm_amd: Nested Paging enabled
May 8 00:07:55.980868 kernel: kvm_amd: LBR virtualization supported
May 8 00:07:55.981976 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 8 00:07:55.982005 kernel: kvm_amd: Virtual GIF supported
May 8 00:07:56.006616 kernel: EDAC MC: Ver: 3.0.0
May 8 00:07:56.043854 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 8 00:07:56.077744 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 8 00:07:56.079754 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:07:56.086630 lvm[1470]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 00:07:56.125190 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
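The jump from the `00:07:55.26` timestamps to `00:07:55.87` above is the initial clock step applied by systemd-timesyncd, which is also why systemd-resolved reports a clock change and flushes its caches. The size of that step can be checked from the two log timestamps (illustrative arithmetic only, not systemd code):

```python
from datetime import datetime

# Last pre-sync journal timestamp vs. the synchronized wall-clock time
# reported by systemd-timesyncd, both taken from the log above.
fmt = "%Y-%m-%d %H:%M:%S.%f"
before_sync = datetime.strptime("2025-05-08 00:07:55.261633", fmt)
after_sync = datetime.strptime("2025-05-08 00:07:55.878954", fmt)

step = (after_sync - before_sync).total_seconds()
print(f"clock stepped forward by {step:.6f} s")  # 0.617321 s
```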
May 8 00:07:56.127043 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:07:56.128215 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 00:07:56.129483 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 8 00:07:56.130855 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 8 00:07:56.132646 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 8 00:07:56.133971 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 8 00:07:56.135417 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 8 00:07:56.136849 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 8 00:07:56.136886 systemd[1]: Reached target paths.target - Path Units.
May 8 00:07:56.137887 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:07:56.139938 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 8 00:07:56.143142 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 8 00:07:56.147773 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 8 00:07:56.149429 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 8 00:07:56.150877 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 8 00:07:56.158772 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 8 00:07:56.160347 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 8 00:07:56.163016 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 8 00:07:56.165013 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 8 00:07:56.166451 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:07:56.167627 systemd[1]: Reached target basic.target - Basic System.
May 8 00:07:56.168718 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 8 00:07:56.168764 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 8 00:07:56.170145 systemd[1]: Starting containerd.service - containerd container runtime...
May 8 00:07:56.172671 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 8 00:07:56.177745 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 8 00:07:56.180857 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 8 00:07:56.182294 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 8 00:07:56.184077 lvm[1475]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 00:07:56.186830 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 8 00:07:56.187362 jq[1478]: false
May 8 00:07:56.191760 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 8 00:07:56.196256 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 8 00:07:56.201865 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 8 00:07:56.206186 extend-filesystems[1479]: Found loop3
May 8 00:07:56.206186 extend-filesystems[1479]: Found loop4
May 8 00:07:56.206186 extend-filesystems[1479]: Found loop5
May 8 00:07:56.206186 extend-filesystems[1479]: Found sr0
May 8 00:07:56.206186 extend-filesystems[1479]: Found vda
May 8 00:07:56.206186 extend-filesystems[1479]: Found vda1
May 8 00:07:56.217636 extend-filesystems[1479]: Found vda2
May 8 00:07:56.217636 extend-filesystems[1479]: Found vda3
May 8 00:07:56.217636 extend-filesystems[1479]: Found usr
May 8 00:07:56.217636 extend-filesystems[1479]: Found vda4
May 8 00:07:56.217636 extend-filesystems[1479]: Found vda6
May 8 00:07:56.217636 extend-filesystems[1479]: Found vda7
May 8 00:07:56.217636 extend-filesystems[1479]: Found vda9
May 8 00:07:56.217636 extend-filesystems[1479]: Checking size of /dev/vda9
May 8 00:07:56.210682 dbus-daemon[1477]: [system] SELinux support is enabled
May 8 00:07:56.226939 systemd[1]: Starting systemd-logind.service - User Login Management...
May 8 00:07:56.227095 extend-filesystems[1479]: Resized partition /dev/vda9
May 8 00:07:56.231808 extend-filesystems[1495]: resize2fs 1.47.1 (20-May-2024)
May 8 00:07:56.234433 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 8 00:07:56.235267 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 8 00:07:56.237638 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 8 00:07:56.236286 systemd[1]: Starting update-engine.service - Update Engine...
May 8 00:07:56.240647 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1392)
May 8 00:07:56.249786 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 8 00:07:56.253861 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 8 00:07:56.256095 jq[1499]: true
May 8 00:07:56.260669 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 8 00:07:56.275129 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 8 00:07:56.298792 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 8 00:07:56.298860 update_engine[1498]: I20250508 00:07:56.283737 1498 main.cc:92] Flatcar Update Engine starting
May 8 00:07:56.298860 update_engine[1498]: I20250508 00:07:56.286178 1498 update_check_scheduler.cc:74] Next update check in 10m34s
May 8 00:07:56.275540 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 8 00:07:56.276116 systemd[1]: motdgen.service: Deactivated successfully.
May 8 00:07:56.276472 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 8 00:07:56.281387 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 8 00:07:56.283071 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 8 00:07:56.293012 (ntainerd)[1505]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 8 00:07:56.303869 extend-filesystems[1495]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 8 00:07:56.303869 extend-filesystems[1495]: old_desc_blocks = 1, new_desc_blocks = 1
May 8 00:07:56.303869 extend-filesystems[1495]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 8 00:07:56.303353 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 8 00:07:56.305225 extend-filesystems[1479]: Resized filesystem in /dev/vda9
May 8 00:07:56.303864 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
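The resize messages above record the root ext4 filesystem on /dev/vda9 growing online from 553472 to 1864699 blocks at a 4 KiB block size. The byte sizes this implies can be checked with some quick arithmetic (illustrative only, not part of resize2fs):

```python
# Convert the ext4 block counts reported by resize2fs into byte sizes.
BLOCK_SIZE = 4096  # "1864699 (4k) blocks" in the log

old_blocks, new_blocks = 553472, 1864699
old_bytes = old_blocks * BLOCK_SIZE
new_bytes = new_blocks * BLOCK_SIZE

print(old_bytes / 2**30)  # ~2.11 GiB before the resize
print(new_bytes / 2**30)  # ~7.11 GiB after the resize
```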
May 8 00:07:56.305549 jq[1504]: true
May 8 00:07:56.321321 systemd-logind[1492]: Watching system buttons on /dev/input/event1 (Power Button)
May 8 00:07:56.321363 systemd-logind[1492]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 8 00:07:56.324629 systemd-logind[1492]: New seat seat0.
May 8 00:07:56.332244 systemd[1]: Started systemd-logind.service - User Login Management.
May 8 00:07:56.348851 tar[1503]: linux-amd64/helm
May 8 00:07:56.362608 systemd[1]: Started update-engine.service - Update Engine.
May 8 00:07:56.366570 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 8 00:07:56.366834 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 8 00:07:56.369416 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 8 00:07:56.369565 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 8 00:07:56.377862 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 8 00:07:56.411740 bash[1533]: Updated "/home/core/.ssh/authorized_keys"
May 8 00:07:56.414555 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 8 00:07:56.418560 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 8 00:07:56.444023 locksmithd[1529]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 8 00:07:56.480955 sshd_keygen[1500]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 8 00:07:56.510263 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 8 00:07:56.519289 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 8 00:07:56.531057 systemd[1]: issuegen.service: Deactivated successfully.
May 8 00:07:56.531614 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 8 00:07:56.535700 containerd[1505]: time="2025-05-08T00:07:56.535545280Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
May 8 00:07:56.538874 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 8 00:07:56.564087 containerd[1505]: time="2025-05-08T00:07:56.564028457Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 8 00:07:56.567917 containerd[1505]: time="2025-05-08T00:07:56.566101565Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 8 00:07:56.567917 containerd[1505]: time="2025-05-08T00:07:56.566149034Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 8 00:07:56.567917 containerd[1505]: time="2025-05-08T00:07:56.566169964Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 8 00:07:56.567917 containerd[1505]: time="2025-05-08T00:07:56.566369648Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 8 00:07:56.567917 containerd[1505]: time="2025-05-08T00:07:56.566385929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 8 00:07:56.567917 containerd[1505]: time="2025-05-08T00:07:56.566467942Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:07:56.567917 containerd[1505]: time="2025-05-08T00:07:56.566489513Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 8 00:07:56.567917 containerd[1505]: time="2025-05-08T00:07:56.566809894Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:07:56.567917 containerd[1505]: time="2025-05-08T00:07:56.566825363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 8 00:07:56.567917 containerd[1505]: time="2025-05-08T00:07:56.566839389Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:07:56.567917 containerd[1505]: time="2025-05-08T00:07:56.566850320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 8 00:07:56.568181 containerd[1505]: time="2025-05-08T00:07:56.566995071Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 8 00:07:56.568181 containerd[1505]: time="2025-05-08T00:07:56.567306235Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 8 00:07:56.568181 containerd[1505]: time="2025-05-08T00:07:56.567536627Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:07:56.568181 containerd[1505]: time="2025-05-08T00:07:56.567558999Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 8 00:07:56.568181 containerd[1505]: time="2025-05-08T00:07:56.567694834Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 8 00:07:56.568181 containerd[1505]: time="2025-05-08T00:07:56.567751390Z" level=info msg="metadata content store policy set" policy=shared
May 8 00:07:56.569450 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 8 00:07:56.579914 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 8 00:07:56.581847 containerd[1505]: time="2025-05-08T00:07:56.581777218Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 8 00:07:56.581938 containerd[1505]: time="2025-05-08T00:07:56.581868419Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 8 00:07:56.581938 containerd[1505]: time="2025-05-08T00:07:56.581886363Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 8 00:07:56.581938 containerd[1505]: time="2025-05-08T00:07:56.581902373Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 8 00:07:56.581938 containerd[1505]: time="2025-05-08T00:07:56.581917592Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 8 00:07:56.582182 containerd[1505]: time="2025-05-08T00:07:56.582155388Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 8 00:07:56.582464 containerd[1505]: time="2025-05-08T00:07:56.582422459Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 8 00:07:56.582719 containerd[1505]: time="2025-05-08T00:07:56.582576187Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 8 00:07:56.582719 containerd[1505]: time="2025-05-08T00:07:56.582700931Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 8 00:07:56.582719 containerd[1505]: time="2025-05-08T00:07:56.582721720Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 8 00:07:56.582813 containerd[1505]: time="2025-05-08T00:07:56.582741828Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 8 00:07:56.582813 containerd[1505]: time="2025-05-08T00:07:56.582760292Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 8 00:07:56.582813 containerd[1505]: time="2025-05-08T00:07:56.582777495Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 8 00:07:56.582813 containerd[1505]: time="2025-05-08T00:07:56.582795528Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 8 00:07:56.582931 containerd[1505]: time="2025-05-08T00:07:56.582814634Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 8 00:07:56.582931 containerd[1505]: time="2025-05-08T00:07:56.582835423Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 8 00:07:56.582931 containerd[1505]: time="2025-05-08T00:07:56.582863907Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 8 00:07:56.582931 containerd[1505]: time="2025-05-08T00:07:56.582883303Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 8 00:07:56.582931 containerd[1505]: time="2025-05-08T00:07:56.582915584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 8 00:07:56.582931 containerd[1505]: time="2025-05-08T00:07:56.582935311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 8 00:07:56.583058 containerd[1505]: time="2025-05-08T00:07:56.582953044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 8 00:07:56.583058 containerd[1505]: time="2025-05-08T00:07:56.582969034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 8 00:07:56.583058 containerd[1505]: time="2025-05-08T00:07:56.582985946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 8 00:07:56.583058 containerd[1505]: time="2025-05-08T00:07:56.583003949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 8 00:07:56.583058 containerd[1505]: time="2025-05-08T00:07:56.583018727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 8 00:07:56.583058 containerd[1505]: time="2025-05-08T00:07:56.583034817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 8 00:07:56.583058 containerd[1505]: time="2025-05-08T00:07:56.583051899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 8 00:07:56.583201 containerd[1505]: time="2025-05-08T00:07:56.583070905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 8 00:07:56.583201 containerd[1505]: time="2025-05-08T00:07:56.583088298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 8 00:07:56.583201 containerd[1505]: time="2025-05-08T00:07:56.583104107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 8 00:07:56.583201 containerd[1505]: time="2025-05-08T00:07:56.583130276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 8 00:07:56.583201 containerd[1505]: time="2025-05-08T00:07:56.583151877Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 8 00:07:56.583201 containerd[1505]: time="2025-05-08T00:07:56.583181202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 8 00:07:56.583201 containerd[1505]: time="2025-05-08T00:07:56.583199777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 8 00:07:56.583410 containerd[1505]: time="2025-05-08T00:07:56.583217740Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 8 00:07:56.583410 containerd[1505]: time="2025-05-08T00:07:56.583285738Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 8 00:07:56.583451 containerd[1505]: time="2025-05-08T00:07:56.583313330Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 8 00:07:56.583451 containerd[1505]: time="2025-05-08T00:07:56.583437613Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 8 00:07:56.583492 containerd[1505]: time="2025-05-08T00:07:56.583458823Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 8 00:07:56.583492 containerd[1505]: time="2025-05-08T00:07:56.583473751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 8 00:07:56.583532 containerd[1505]: time="2025-05-08T00:07:56.583491574Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 8 00:07:56.583532 containerd[1505]: time="2025-05-08T00:07:56.583520638Z" level=info msg="NRI interface is disabled by configuration."
May 8 00:07:56.583576 containerd[1505]: time="2025-05-08T00:07:56.583535777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 8 00:07:56.584064 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 8 00:07:56.585002 containerd[1505]: time="2025-05-08T00:07:56.583920138Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 8 00:07:56.585002 containerd[1505]: time="2025-05-08T00:07:56.583978928Z" level=info msg="Connect containerd service"
May 8 00:07:56.585002 containerd[1505]: time="2025-05-08T00:07:56.584012601Z" level=info msg="using legacy CRI server"
May 8 00:07:56.585002 containerd[1505]: time="2025-05-08T00:07:56.584022139Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 8 00:07:56.585002 containerd[1505]: time="2025-05-08T00:07:56.584161270Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 8 00:07:56.585239 containerd[1505]: time="2025-05-08T00:07:56.585058784Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 8 00:07:56.585239 containerd[1505]: time="2025-05-08T00:07:56.585212532Z" level=info msg="Start subscribing containerd event"
May 8 00:07:56.585284 containerd[1505]: time="2025-05-08T00:07:56.585265531Z" level=info msg="Start recovering state"
May 8 00:07:56.586696 containerd[1505]: time="2025-05-08T00:07:56.585345191Z" level=info msg="Start event monitor"
May 8 00:07:56.586696 containerd[1505]: time="2025-05-08T00:07:56.585368725Z" level=info msg="Start snapshots syncer"
May 8 00:07:56.586696 containerd[1505]: time="2025-05-08T00:07:56.585380697Z" level=info msg="Start cni network conf syncer for default"
May 8 00:07:56.586696 containerd[1505]: time="2025-05-08T00:07:56.585402098Z" level=info msg="Start streaming server"
May 8 00:07:56.586696 containerd[1505]: time="2025-05-08T00:07:56.585910852Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 8 00:07:56.586696 containerd[1505]: time="2025-05-08T00:07:56.586063939Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 8 00:07:56.585567 systemd[1]: Reached target getty.target - Login Prompts.
May 8 00:07:56.589850 containerd[1505]: time="2025-05-08T00:07:56.589145950Z" level=info msg="containerd successfully booted in 0.055375s"
May 8 00:07:56.589201 systemd[1]: Started containerd.service - containerd container runtime.
May 8 00:07:56.759556 tar[1503]: linux-amd64/LICENSE
May 8 00:07:56.759678 tar[1503]: linux-amd64/README.md
May 8 00:07:56.774704 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 8 00:07:57.816871 systemd-networkd[1423]: eth0: Gained IPv6LL
May 8 00:07:57.820852 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 8 00:07:57.823071 systemd[1]: Reached target network-online.target - Network is Online.
May 8 00:07:57.834075 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 8 00:07:57.837413 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:07:57.840042 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 8 00:07:57.863921 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 8 00:07:57.864293 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 8 00:07:57.866269 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 8 00:07:57.870183 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 8 00:07:58.689201 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 8 00:07:58.692668 systemd[1]: Started sshd@0-10.0.0.79:22-10.0.0.1:33884.service - OpenSSH per-connection server daemon (10.0.0.1:33884).
May 8 00:07:58.833564 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 33884 ssh2: RSA SHA256:kwO0JqEIt1ObdnqYCFs6QolAz4wrphlF1QS6lWhQBXI
May 8 00:07:58.897899 sshd-session[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:07:58.911431 systemd-logind[1492]: New session 1 of user core.
May 8 00:07:58.913260 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 8 00:07:58.928967 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 8 00:07:58.950297 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 8 00:07:58.998116 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 8 00:07:59.032374 (systemd)[1590]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 8 00:07:59.035682 systemd-logind[1492]: New session c1 of user core.
May 8 00:07:59.256644 systemd[1590]: Queued start job for default target default.target.
May 8 00:07:59.291376 systemd[1590]: Created slice app.slice - User Application Slice.
May 8 00:07:59.291412 systemd[1590]: Reached target paths.target - Paths.
May 8 00:07:59.291471 systemd[1590]: Reached target timers.target - Timers.
May 8 00:07:59.293553 systemd[1590]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 8 00:07:59.308623 systemd[1590]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 8 00:07:59.308834 systemd[1590]: Reached target sockets.target - Sockets.
May 8 00:07:59.308908 systemd[1590]: Reached target basic.target - Basic System.
May 8 00:07:59.308976 systemd[1590]: Reached target default.target - Main User Target.
May 8 00:07:59.309028 systemd[1590]: Startup finished in 259ms.
May 8 00:07:59.309533 systemd[1]: Started user@500.service - User Manager for UID 500.
May 8 00:07:59.317016 systemd[1]: Started session-1.scope - Session 1 of User core.
May 8 00:07:59.498005 systemd[1]: Started sshd@1-10.0.0.79:22-10.0.0.1:33890.service - OpenSSH per-connection server daemon (10.0.0.1:33890).
May 8 00:07:59.538566 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 33890 ssh2: RSA SHA256:kwO0JqEIt1ObdnqYCFs6QolAz4wrphlF1QS6lWhQBXI
May 8 00:07:59.541211 sshd-session[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:07:59.546503 systemd-logind[1492]: New session 2 of user core.
May 8 00:07:59.604886 systemd[1]: Started session-2.scope - Session 2 of User core.
May 8 00:07:59.646857 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:07:59.648754 systemd[1]: Reached target multi-user.target - Multi-User System.
May 8 00:07:59.650200 systemd[1]: Startup finished in 1.062s (kernel) + 7.516s (initrd) + 6.756s (userspace) = 15.335s.
May 8 00:07:59.663806 sshd[1603]: Connection closed by 10.0.0.1 port 33890
May 8 00:07:59.666132 sshd-session[1601]: pam_unix(sshd:session): session closed for user core
May 8 00:07:59.675017 (kubelet)[1609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:07:59.678873 systemd[1]: sshd@1-10.0.0.79:22-10.0.0.1:33890.service: Deactivated successfully.
May 8 00:07:59.681452 systemd[1]: session-2.scope: Deactivated successfully.
May 8 00:07:59.683590 systemd-logind[1492]: Session 2 logged out. Waiting for processes to exit.
May 8 00:07:59.686326 systemd[1]: Started sshd@2-10.0.0.79:22-10.0.0.1:33902.service - OpenSSH per-connection server daemon (10.0.0.1:33902).
May 8 00:07:59.687760 systemd-logind[1492]: Removed session 2.
May 8 00:07:59.730882 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 33902 ssh2: RSA SHA256:kwO0JqEIt1ObdnqYCFs6QolAz4wrphlF1QS6lWhQBXI
May 8 00:07:59.732651 sshd-session[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:07:59.738426 systemd-logind[1492]: New session 3 of user core.
May 8 00:07:59.746768 systemd[1]: Started session-3.scope - Session 3 of User core.
May 8 00:07:59.798061 sshd[1621]: Connection closed by 10.0.0.1 port 33902
May 8 00:07:59.799322 sshd-session[1616]: pam_unix(sshd:session): session closed for user core
May 8 00:07:59.813446 systemd[1]: sshd@2-10.0.0.79:22-10.0.0.1:33902.service: Deactivated successfully.
May 8 00:07:59.815949 systemd[1]: session-3.scope: Deactivated successfully.
May 8 00:07:59.816727 systemd-logind[1492]: Session 3 logged out. Waiting for processes to exit.
May 8 00:07:59.824376 systemd[1]: Started sshd@3-10.0.0.79:22-10.0.0.1:33908.service - OpenSSH per-connection server daemon (10.0.0.1:33908).
May 8 00:07:59.827002 systemd-logind[1492]: Removed session 3.
May 8 00:07:59.866566 sshd[1626]: Accepted publickey for core from 10.0.0.1 port 33908 ssh2: RSA SHA256:kwO0JqEIt1ObdnqYCFs6QolAz4wrphlF1QS6lWhQBXI
May 8 00:07:59.867470 sshd-session[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:07:59.873223 systemd-logind[1492]: New session 4 of user core.
May 8 00:07:59.882773 systemd[1]: Started session-4.scope - Session 4 of User core.
May 8 00:08:00.019337 sshd[1635]: Connection closed by 10.0.0.1 port 33908
May 8 00:08:00.020038 sshd-session[1626]: pam_unix(sshd:session): session closed for user core
May 8 00:08:00.028991 systemd[1]: sshd@3-10.0.0.79:22-10.0.0.1:33908.service: Deactivated successfully.
May 8 00:08:00.031157 systemd[1]: session-4.scope: Deactivated successfully.
May 8 00:08:00.033158 systemd-logind[1492]: Session 4 logged out. Waiting for processes to exit.
May 8 00:08:00.039913 systemd[1]: Started sshd@4-10.0.0.79:22-10.0.0.1:33912.service - OpenSSH per-connection server daemon (10.0.0.1:33912).
May 8 00:08:00.040983 systemd-logind[1492]: Removed session 4.
May 8 00:08:00.076496 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 33912 ssh2: RSA SHA256:kwO0JqEIt1ObdnqYCFs6QolAz4wrphlF1QS6lWhQBXI
May 8 00:08:00.078966 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:08:00.084321 systemd-logind[1492]: New session 5 of user core.
May 8 00:08:00.097840 systemd[1]: Started session-5.scope - Session 5 of User core.
May 8 00:08:00.173667 sudo[1644]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 8 00:08:00.174123 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:08:00.189985 sudo[1644]: pam_unix(sudo:session): session closed for user root
May 8 00:08:00.191827 sshd[1643]: Connection closed by 10.0.0.1 port 33912
May 8 00:08:00.193352 sshd-session[1640]: pam_unix(sshd:session): session closed for user core
May 8 00:08:00.204433 systemd[1]: sshd@4-10.0.0.79:22-10.0.0.1:33912.service: Deactivated successfully.
May 8 00:08:00.206389 systemd[1]: session-5.scope: Deactivated successfully.
May 8 00:08:00.207287 systemd-logind[1492]: Session 5 logged out. Waiting for processes to exit.
May 8 00:08:00.213860 systemd[1]: Started sshd@5-10.0.0.79:22-10.0.0.1:33920.service - OpenSSH per-connection server daemon (10.0.0.1:33920).
May 8 00:08:00.214937 systemd-logind[1492]: Removed session 5.
May 8 00:08:00.255985 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 33920 ssh2: RSA SHA256:kwO0JqEIt1ObdnqYCFs6QolAz4wrphlF1QS6lWhQBXI
May 8 00:08:00.258143 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:08:00.314955 systemd-logind[1492]: New session 6 of user core.
May 8 00:08:00.320733 systemd[1]: Started session-6.scope - Session 6 of User core.
May 8 00:08:00.377796 sudo[1654]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 8 00:08:00.378276 sudo[1654]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:08:00.383628 sudo[1654]: pam_unix(sudo:session): session closed for user root
May 8 00:08:00.390907 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 8 00:08:00.391265 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:08:00.408047 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 8 00:08:00.451950 augenrules[1676]: No rules
May 8 00:08:00.454489 systemd[1]: audit-rules.service: Deactivated successfully.
May 8 00:08:00.454878 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 8 00:08:00.456201 sudo[1653]: pam_unix(sudo:session): session closed for user root
May 8 00:08:00.457985 sshd[1652]: Connection closed by 10.0.0.1 port 33920
May 8 00:08:00.460129 sshd-session[1649]: pam_unix(sshd:session): session closed for user core
May 8 00:08:00.490234 systemd[1]: sshd@5-10.0.0.79:22-10.0.0.1:33920.service: Deactivated successfully.
May 8 00:08:00.492333 systemd[1]: session-6.scope: Deactivated successfully.
May 8 00:08:00.493139 systemd-logind[1492]: Session 6 logged out. Waiting for processes to exit.
May 8 00:08:00.502546 kubelet[1609]: E0508 00:08:00.502479 1609 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:08:00.509083 systemd[1]: Started sshd@6-10.0.0.79:22-10.0.0.1:33928.service - OpenSSH per-connection server daemon (10.0.0.1:33928).
May 8 00:08:00.509821 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:08:00.510023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:08:00.510392 systemd[1]: kubelet.service: Consumed 2.415s CPU time, 238.6M memory peak.
May 8 00:08:00.512886 systemd-logind[1492]: Removed session 6.
May 8 00:08:00.545785 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 33928 ssh2: RSA SHA256:kwO0JqEIt1ObdnqYCFs6QolAz4wrphlF1QS6lWhQBXI
May 8 00:08:00.547547 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:08:00.552355 systemd-logind[1492]: New session 7 of user core.
May 8 00:08:00.566724 systemd[1]: Started session-7.scope - Session 7 of User core.
May 8 00:08:00.621961 sudo[1690]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 8 00:08:00.622323 sudo[1690]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:08:01.780917 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 8 00:08:01.781085 (dockerd)[1710]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 8 00:08:03.114670 dockerd[1710]: time="2025-05-08T00:08:03.114523122Z" level=info msg="Starting up"
May 8 00:08:04.401200 dockerd[1710]: time="2025-05-08T00:08:04.401122926Z" level=info msg="Loading containers: start."
May 8 00:08:04.619635 kernel: Initializing XFRM netlink socket
May 8 00:08:04.728902 systemd-networkd[1423]: docker0: Link UP
May 8 00:08:04.780036 dockerd[1710]: time="2025-05-08T00:08:04.779964464Z" level=info msg="Loading containers: done."
May 8 00:08:04.817356 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2499150148-merged.mount: Deactivated successfully.
May 8 00:08:04.819890 dockerd[1710]: time="2025-05-08T00:08:04.819839198Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 8 00:08:04.820018 dockerd[1710]: time="2025-05-08T00:08:04.819993397Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
May 8 00:08:04.820188 dockerd[1710]: time="2025-05-08T00:08:04.820157465Z" level=info msg="Daemon has completed initialization"
May 8 00:08:04.868887 dockerd[1710]: time="2025-05-08T00:08:04.868780177Z" level=info msg="API listen on /run/docker.sock"
May 8 00:08:04.869052 systemd[1]: Started docker.service - Docker Application Container Engine.
May 8 00:08:05.888849 containerd[1505]: time="2025-05-08T00:08:05.888781368Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\""
May 8 00:08:06.736832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount372039292.mount: Deactivated successfully.
May 8 00:08:08.224421 containerd[1505]: time="2025-05-08T00:08:08.224354976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:08:08.226944 containerd[1505]: time="2025-05-08T00:08:08.226902164Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987"
May 8 00:08:08.228342 containerd[1505]: time="2025-05-08T00:08:08.228302481Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:08:08.231707 containerd[1505]: time="2025-05-08T00:08:08.231629952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:08:08.232535 containerd[1505]: time="2025-05-08T00:08:08.232492119Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 2.343647513s"
May 8 00:08:08.232629 containerd[1505]: time="2025-05-08T00:08:08.232539699Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\""
May 8 00:08:08.234746 containerd[1505]: time="2025-05-08T00:08:08.234714849Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\""
May 8 00:08:10.074726 containerd[1505]: time="2025-05-08T00:08:10.074653708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:08:10.112286 containerd[1505]: time="2025-05-08T00:08:10.112149259Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776"
May 8 00:08:10.113994 containerd[1505]: time="2025-05-08T00:08:10.113958723Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:08:10.128142 containerd[1505]: time="2025-05-08T00:08:10.128055855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:08:10.129406 containerd[1505]: time="2025-05-08T00:08:10.129347909Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 1.894593596s"
May 8 00:08:10.129475 containerd[1505]: time="2025-05-08T00:08:10.129409765Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\""
May 8 00:08:10.130094 containerd[1505]: time="2025-05-08T00:08:10.129973402Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\""
May 8 00:08:10.745520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 8 00:08:10.754072 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:08:11.002878 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:08:11.010846 (kubelet)[1974]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:08:11.576126 kubelet[1974]: E0508 00:08:11.576050 1974 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:08:11.583648 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:08:11.583879 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:08:11.584366 systemd[1]: kubelet.service: Consumed 698ms CPU time, 96M memory peak.
May 8 00:08:12.408217 containerd[1505]: time="2025-05-08T00:08:12.408129842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:08:12.409053 containerd[1505]: time="2025-05-08T00:08:12.408987371Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386"
May 8 00:08:12.411816 containerd[1505]: time="2025-05-08T00:08:12.411774498Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:08:12.415486 containerd[1505]: time="2025-05-08T00:08:12.415426569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:08:12.417033 containerd[1505]: time="2025-05-08T00:08:12.416970956Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 2.286922513s"
May 8 00:08:12.417033 containerd[1505]: time="2025-05-08T00:08:12.417027912Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\""
May 8 00:08:12.417689 containerd[1505]: time="2025-05-08T00:08:12.417610806Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\""
May 8 00:08:13.541817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2019781097.mount: Deactivated successfully.
May 8 00:08:14.795607 containerd[1505]: time="2025-05-08T00:08:14.795506248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:08:14.796537 containerd[1505]: time="2025-05-08T00:08:14.796494972Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625"
May 8 00:08:14.798011 containerd[1505]: time="2025-05-08T00:08:14.797973145Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:08:14.800505 containerd[1505]: time="2025-05-08T00:08:14.800467404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:08:14.801441 containerd[1505]: time="2025-05-08T00:08:14.801403479Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 2.383760823s"
May 8 00:08:14.801441 containerd[1505]: time="2025-05-08T00:08:14.801436842Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\""
May 8 00:08:14.802089 containerd[1505]: time="2025-05-08T00:08:14.802031988Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 8 00:08:15.613757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount732865496.mount: Deactivated successfully.
May 8 00:08:18.734774 containerd[1505]: time="2025-05-08T00:08:18.734690051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:08:18.811486 containerd[1505]: time="2025-05-08T00:08:18.811405407Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
May 8 00:08:18.870203 containerd[1505]: time="2025-05-08T00:08:18.870109379Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:08:18.931751 containerd[1505]: time="2025-05-08T00:08:18.931656584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:08:18.932958 containerd[1505]: time="2025-05-08T00:08:18.932889907Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 4.130818064s"
May 8 00:08:18.932958 containerd[1505]: time="2025-05-08T00:08:18.932949218Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
May 8 00:08:18.933612 containerd[1505]: time="2025-05-08T00:08:18.933521422Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 8 00:08:19.674312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1862325002.mount: Deactivated successfully.
May 8 00:08:19.683789 containerd[1505]: time="2025-05-08T00:08:19.683696075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:08:19.685444 containerd[1505]: time="2025-05-08T00:08:19.685354015Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 8 00:08:19.686843 containerd[1505]: time="2025-05-08T00:08:19.686791060Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:08:19.690678 containerd[1505]: time="2025-05-08T00:08:19.690640370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:08:19.691293 containerd[1505]: time="2025-05-08T00:08:19.691249242Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 757.693366ms"
May 8 00:08:19.691293 containerd[1505]: time="2025-05-08T00:08:19.691281834Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 8 00:08:19.691940 containerd[1505]: time="2025-05-08T00:08:19.691880627Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 8 00:08:21.510020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1194048530.mount: Deactivated successfully.
May 8 00:08:21.793969 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 8 00:08:21.799771 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:08:22.018909 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:08:22.023763 (kubelet)[2062]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:08:22.188704 kubelet[2062]: E0508 00:08:22.188391 2062 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:08:22.193190 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:08:22.193427 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:08:22.193903 systemd[1]: kubelet.service: Consumed 347ms CPU time, 97.4M memory peak.
May 8 00:08:25.625655 containerd[1505]: time="2025-05-08T00:08:25.625547179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:08:25.748076 containerd[1505]: time="2025-05-08T00:08:25.747987202Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 8 00:08:25.792513 containerd[1505]: time="2025-05-08T00:08:25.792419203Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:08:26.070839 containerd[1505]: time="2025-05-08T00:08:26.070732840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:08:26.072487 containerd[1505]: time="2025-05-08T00:08:26.072415196Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 6.380488834s" May 8 00:08:26.072487 containerd[1505]: time="2025-05-08T00:08:26.072484115Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 8 00:08:28.860320 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:08:28.860499 systemd[1]: kubelet.service: Consumed 347ms CPU time, 97.4M memory peak. May 8 00:08:28.879021 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:08:28.909887 systemd[1]: Reload requested from client PID 2142 ('systemctl') (unit session-7.scope)... 
May 8 00:08:28.909912 systemd[1]: Reloading... May 8 00:08:29.014626 zram_generator::config[2186]: No configuration found. May 8 00:08:29.507214 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:08:29.631632 systemd[1]: Reloading finished in 721 ms. May 8 00:08:29.685682 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:08:29.689175 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:08:29.689473 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:08:29.689528 systemd[1]: kubelet.service: Consumed 158ms CPU time, 83.6M memory peak. May 8 00:08:29.691495 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:08:30.430174 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:08:30.435823 (kubelet)[2236]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:08:30.477374 kubelet[2236]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:08:30.477374 kubelet[2236]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:08:30.477374 kubelet[2236]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 8 00:08:30.478818 kubelet[2236]: I0508 00:08:30.478766 2236 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:08:30.721152 kubelet[2236]: I0508 00:08:30.720998 2236 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 8 00:08:30.721152 kubelet[2236]: I0508 00:08:30.721047 2236 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:08:30.721385 kubelet[2236]: I0508 00:08:30.721363 2236 server.go:929] "Client rotation is on, will bootstrap in background" May 8 00:08:30.801750 kubelet[2236]: I0508 00:08:30.801700 2236 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:08:30.810266 kubelet[2236]: E0508 00:08:30.810190 2236 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.79:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" May 8 00:08:30.850722 kubelet[2236]: E0508 00:08:30.850661 2236 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:08:30.850722 kubelet[2236]: I0508 00:08:30.850713 2236 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:08:30.858457 kubelet[2236]: I0508 00:08:30.858378 2236 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:08:30.862236 kubelet[2236]: I0508 00:08:30.862192 2236 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 8 00:08:30.862409 kubelet[2236]: I0508 00:08:30.862373 2236 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:08:30.862897 kubelet[2236]: I0508 00:08:30.862401 2236 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} May 8 00:08:30.863478 kubelet[2236]: I0508 00:08:30.863084 2236 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:08:30.863478 kubelet[2236]: I0508 00:08:30.863106 2236 container_manager_linux.go:300] "Creating device plugin manager" May 8 00:08:30.863478 kubelet[2236]: I0508 00:08:30.863258 2236 state_mem.go:36] "Initialized new in-memory state store" May 8 00:08:30.890037 kubelet[2236]: I0508 00:08:30.889996 2236 kubelet.go:408] "Attempting to sync node with API server" May 8 00:08:30.890037 kubelet[2236]: I0508 00:08:30.890033 2236 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:08:30.890177 kubelet[2236]: I0508 00:08:30.890087 2236 kubelet.go:314] "Adding apiserver pod source" May 8 00:08:30.890177 kubelet[2236]: I0508 00:08:30.890110 2236 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:08:30.895923 kubelet[2236]: W0508 00:08:30.895882 2236 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused May 8 00:08:30.896073 kubelet[2236]: E0508 00:08:30.896040 2236 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" May 8 00:08:30.896073 kubelet[2236]: W0508 00:08:30.895898 2236 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused May 8 00:08:30.896180 kubelet[2236]: E0508 
00:08:30.896088 2236 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" May 8 00:08:30.896180 kubelet[2236]: I0508 00:08:30.896174 2236 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 00:08:30.908385 kubelet[2236]: I0508 00:08:30.908358 2236 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:08:30.909006 kubelet[2236]: W0508 00:08:30.908976 2236 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 8 00:08:30.910076 kubelet[2236]: I0508 00:08:30.909773 2236 server.go:1269] "Started kubelet" May 8 00:08:30.910489 kubelet[2236]: I0508 00:08:30.910409 2236 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:08:30.910835 kubelet[2236]: I0508 00:08:30.910796 2236 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:08:30.910894 kubelet[2236]: I0508 00:08:30.910859 2236 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:08:30.911532 kubelet[2236]: I0508 00:08:30.911505 2236 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:08:30.911764 kubelet[2236]: I0508 00:08:30.911745 2236 server.go:460] "Adding debug handlers to kubelet server" May 8 00:08:30.912934 kubelet[2236]: I0508 00:08:30.912911 2236 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:08:30.915041 kubelet[2236]: I0508 00:08:30.914864 2236 
volume_manager.go:289] "Starting Kubelet Volume Manager" May 8 00:08:30.915041 kubelet[2236]: I0508 00:08:30.914964 2236 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 8 00:08:30.915041 kubelet[2236]: I0508 00:08:30.915016 2236 reconciler.go:26] "Reconciler: start to sync state" May 8 00:08:30.915322 kubelet[2236]: W0508 00:08:30.915290 2236 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused May 8 00:08:30.915373 kubelet[2236]: E0508 00:08:30.915328 2236 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" May 8 00:08:30.915766 kubelet[2236]: I0508 00:08:30.915707 2236 factory.go:221] Registration of the systemd container factory successfully May 8 00:08:30.915824 kubelet[2236]: I0508 00:08:30.915782 2236 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:08:30.916176 kubelet[2236]: E0508 00:08:30.916154 2236 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:08:30.916385 kubelet[2236]: E0508 00:08:30.916365 2236 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:08:30.916464 kubelet[2236]: E0508 00:08:30.916422 2236 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="200ms" May 8 00:08:30.917130 kubelet[2236]: I0508 00:08:30.917098 2236 factory.go:221] Registration of the containerd container factory successfully May 8 00:08:30.929742 kubelet[2236]: I0508 00:08:30.929692 2236 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:08:30.931191 kubelet[2236]: I0508 00:08:30.931173 2236 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:08:30.931222 kubelet[2236]: I0508 00:08:30.931199 2236 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:08:30.931222 kubelet[2236]: I0508 00:08:30.931221 2236 kubelet.go:2321] "Starting kubelet main sync loop" May 8 00:08:30.931278 kubelet[2236]: E0508 00:08:30.931259 2236 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:08:30.939404 kubelet[2236]: W0508 00:08:30.939326 2236 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused May 8 00:08:30.939404 kubelet[2236]: E0508 00:08:30.939389 2236 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list 
*v1.RuntimeClass: Get \"https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" May 8 00:08:30.943213 kubelet[2236]: E0508 00:08:30.941280 2236 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.79:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.79:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d64a9cb20018b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:08:30.909743499 +0000 UTC m=+0.468271810,LastTimestamp:2025-05-08 00:08:30.909743499 +0000 UTC m=+0.468271810,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 00:08:30.956463 kubelet[2236]: I0508 00:08:30.956428 2236 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:08:30.956463 kubelet[2236]: I0508 00:08:30.956452 2236 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:08:30.956667 kubelet[2236]: I0508 00:08:30.956481 2236 state_mem.go:36] "Initialized new in-memory state store" May 8 00:08:31.016925 kubelet[2236]: E0508 00:08:31.016840 2236 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:08:31.032144 kubelet[2236]: E0508 00:08:31.032055 2236 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 8 00:08:31.116973 kubelet[2236]: E0508 00:08:31.116920 2236 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:08:31.117238 kubelet[2236]: E0508 
00:08:31.117159 2236 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="400ms" May 8 00:08:31.217953 kubelet[2236]: E0508 00:08:31.217846 2236 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:08:31.232667 kubelet[2236]: E0508 00:08:31.232560 2236 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 8 00:08:31.257727 kubelet[2236]: I0508 00:08:31.257659 2236 policy_none.go:49] "None policy: Start" May 8 00:08:31.258456 kubelet[2236]: I0508 00:08:31.258423 2236 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:08:31.258456 kubelet[2236]: I0508 00:08:31.258453 2236 state_mem.go:35] "Initializing new in-memory state store" May 8 00:08:31.264776 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 8 00:08:31.276609 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 8 00:08:31.280273 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 8 00:08:31.288620 kubelet[2236]: I0508 00:08:31.288564 2236 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:08:31.288926 kubelet[2236]: I0508 00:08:31.288908 2236 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:08:31.288981 kubelet[2236]: I0508 00:08:31.288927 2236 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:08:31.289242 kubelet[2236]: I0508 00:08:31.289220 2236 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:08:31.290317 kubelet[2236]: E0508 00:08:31.290289 2236 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 8 00:08:31.390781 kubelet[2236]: I0508 00:08:31.390739 2236 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:08:31.391189 kubelet[2236]: E0508 00:08:31.391107 2236 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" May 8 00:08:31.518962 kubelet[2236]: E0508 00:08:31.518862 2236 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="800ms" May 8 00:08:31.593437 kubelet[2236]: I0508 00:08:31.593156 2236 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:08:31.593527 kubelet[2236]: E0508 00:08:31.593492 2236 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" May 8 00:08:31.643251 systemd[1]: Created slice 
kubepods-burstable-pod717c4a9af07d76113a41884f1a76de84.slice - libcontainer container kubepods-burstable-pod717c4a9af07d76113a41884f1a76de84.slice. May 8 00:08:31.670012 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 8 00:08:31.694689 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. May 8 00:08:31.720753 kubelet[2236]: I0508 00:08:31.720611 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:08:31.720753 kubelet[2236]: I0508 00:08:31.720670 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:08:31.720753 kubelet[2236]: I0508 00:08:31.720723 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:08:31.720753 kubelet[2236]: I0508 00:08:31.720762 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 8 00:08:31.720753 kubelet[2236]: I0508 00:08:31.720794 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/717c4a9af07d76113a41884f1a76de84-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"717c4a9af07d76113a41884f1a76de84\") " pod="kube-system/kube-apiserver-localhost" May 8 00:08:31.721202 kubelet[2236]: I0508 00:08:31.720872 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/717c4a9af07d76113a41884f1a76de84-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"717c4a9af07d76113a41884f1a76de84\") " pod="kube-system/kube-apiserver-localhost" May 8 00:08:31.721202 kubelet[2236]: I0508 00:08:31.720929 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:08:31.721202 kubelet[2236]: I0508 00:08:31.720960 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/717c4a9af07d76113a41884f1a76de84-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"717c4a9af07d76113a41884f1a76de84\") " pod="kube-system/kube-apiserver-localhost" May 8 00:08:31.721202 kubelet[2236]: I0508 00:08:31.720982 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:08:31.930909 kubelet[2236]: W0508 00:08:31.930697 2236 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused May 8 00:08:31.930909 kubelet[2236]: E0508 00:08:31.930781 2236 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" May 8 00:08:31.962619 kubelet[2236]: W0508 00:08:31.962503 2236 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused May 8 00:08:31.962619 kubelet[2236]: E0508 00:08:31.962575 2236 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" May 8 00:08:31.968188 kubelet[2236]: E0508 00:08:31.968148 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:08:31.968820 containerd[1505]: time="2025-05-08T00:08:31.968764271Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:717c4a9af07d76113a41884f1a76de84,Namespace:kube-system,Attempt:0,}" May 8 00:08:31.993141 kubelet[2236]: E0508 00:08:31.993096 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:08:31.993974 containerd[1505]: time="2025-05-08T00:08:31.993666334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 8 00:08:31.995128 kubelet[2236]: I0508 00:08:31.995082 2236 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:08:31.995553 kubelet[2236]: E0508 00:08:31.995517 2236 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" May 8 00:08:31.997737 kubelet[2236]: E0508 00:08:31.997708 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:08:31.998155 containerd[1505]: time="2025-05-08T00:08:31.998121148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 8 00:08:32.213659 kubelet[2236]: W0508 00:08:32.213435 2236 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused May 8 00:08:32.213659 kubelet[2236]: E0508 00:08:32.213526 2236 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" May 8 00:08:32.320068 kubelet[2236]: E0508 00:08:32.319976 2236 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="1.6s" May 8 00:08:32.441792 kubelet[2236]: W0508 00:08:32.441690 2236 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused May 8 00:08:32.441792 kubelet[2236]: E0508 00:08:32.441786 2236 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" May 8 00:08:32.493769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1989872958.mount: Deactivated successfully. 
May 8 00:08:32.611721 containerd[1505]: time="2025-05-08T00:08:32.611643733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:08:32.616728 containerd[1505]: time="2025-05-08T00:08:32.616671321Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 8 00:08:32.617824 containerd[1505]: time="2025-05-08T00:08:32.617793515Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:08:32.619010 containerd[1505]: time="2025-05-08T00:08:32.618949915Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:08:32.620046 containerd[1505]: time="2025-05-08T00:08:32.619978961Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:08:32.624303 containerd[1505]: time="2025-05-08T00:08:32.624261936Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:08:32.625346 containerd[1505]: time="2025-05-08T00:08:32.625301502Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:08:32.627026 containerd[1505]: time="2025-05-08T00:08:32.626977675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:08:32.628948 
containerd[1505]: time="2025-05-08T00:08:32.628897094Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 635.119215ms" May 8 00:08:32.629500 containerd[1505]: time="2025-05-08T00:08:32.629466011Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 660.609424ms" May 8 00:08:32.632340 containerd[1505]: time="2025-05-08T00:08:32.632298103Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 634.089748ms" May 8 00:08:32.796912 kubelet[2236]: I0508 00:08:32.796847 2236 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:08:32.797323 kubelet[2236]: E0508 00:08:32.797156 2236 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" May 8 00:08:32.930373 kubelet[2236]: E0508 00:08:32.930186 2236 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.79:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.79:6443: 
connect: connection refused" logger="UnhandledError" May 8 00:08:32.951915 containerd[1505]: time="2025-05-08T00:08:32.950338170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:08:32.952148 containerd[1505]: time="2025-05-08T00:08:32.951806736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:08:32.952891 containerd[1505]: time="2025-05-08T00:08:32.952809233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:32.953083 containerd[1505]: time="2025-05-08T00:08:32.953033131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:32.982657 containerd[1505]: time="2025-05-08T00:08:32.982518504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:08:32.982657 containerd[1505]: time="2025-05-08T00:08:32.982571155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:08:32.982657 containerd[1505]: time="2025-05-08T00:08:32.982600420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:32.983204 containerd[1505]: time="2025-05-08T00:08:32.982687948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:32.986578 containerd[1505]: time="2025-05-08T00:08:32.985010877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:08:32.986578 containerd[1505]: time="2025-05-08T00:08:32.985126407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:08:32.986578 containerd[1505]: time="2025-05-08T00:08:32.985149582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:32.987905 containerd[1505]: time="2025-05-08T00:08:32.987832168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:32.989860 systemd[1]: Started cri-containerd-176f6ccc842fc8d1c15143c6d453d69828b39f6799aef49f9285773321f433ee.scope - libcontainer container 176f6ccc842fc8d1c15143c6d453d69828b39f6799aef49f9285773321f433ee. May 8 00:08:33.016758 systemd[1]: Started cri-containerd-c091cbc047c0fe74473e2e27dae12f596aeb1148ccae20b0427ec8a76df8a407.scope - libcontainer container c091cbc047c0fe74473e2e27dae12f596aeb1148ccae20b0427ec8a76df8a407. May 8 00:08:33.020543 systemd[1]: Started cri-containerd-b52c1ffe65f0a56d5b39f70ab43941b4ad3a467f6cb185d49f1d33626fc3b5ab.scope - libcontainer container b52c1ffe65f0a56d5b39f70ab43941b4ad3a467f6cb185d49f1d33626fc3b5ab. 
May 8 00:08:33.082439 containerd[1505]: time="2025-05-08T00:08:33.081792070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"176f6ccc842fc8d1c15143c6d453d69828b39f6799aef49f9285773321f433ee\"" May 8 00:08:33.083473 kubelet[2236]: E0508 00:08:33.083005 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:08:33.088711 containerd[1505]: time="2025-05-08T00:08:33.088651814Z" level=info msg="CreateContainer within sandbox \"176f6ccc842fc8d1c15143c6d453d69828b39f6799aef49f9285773321f433ee\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 00:08:33.089398 containerd[1505]: time="2025-05-08T00:08:33.089353243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:717c4a9af07d76113a41884f1a76de84,Namespace:kube-system,Attempt:0,} returns sandbox id \"c091cbc047c0fe74473e2e27dae12f596aeb1148ccae20b0427ec8a76df8a407\"" May 8 00:08:33.090806 kubelet[2236]: E0508 00:08:33.090769 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:08:33.092315 containerd[1505]: time="2025-05-08T00:08:33.092290328Z" level=info msg="CreateContainer within sandbox \"c091cbc047c0fe74473e2e27dae12f596aeb1148ccae20b0427ec8a76df8a407\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 00:08:33.095915 containerd[1505]: time="2025-05-08T00:08:33.095844252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b52c1ffe65f0a56d5b39f70ab43941b4ad3a467f6cb185d49f1d33626fc3b5ab\"" May 8 00:08:33.096714 
kubelet[2236]: E0508 00:08:33.096688 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:08:33.098317 containerd[1505]: time="2025-05-08T00:08:33.098242049Z" level=info msg="CreateContainer within sandbox \"b52c1ffe65f0a56d5b39f70ab43941b4ad3a467f6cb185d49f1d33626fc3b5ab\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 00:08:33.322953 containerd[1505]: time="2025-05-08T00:08:33.322879137Z" level=info msg="CreateContainer within sandbox \"c091cbc047c0fe74473e2e27dae12f596aeb1148ccae20b0427ec8a76df8a407\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4d9506a49885b0fc7d18e6121aea512f9153d0a91f020d43233805b4af80042d\"" May 8 00:08:33.323765 containerd[1505]: time="2025-05-08T00:08:33.323730762Z" level=info msg="StartContainer for \"4d9506a49885b0fc7d18e6121aea512f9153d0a91f020d43233805b4af80042d\"" May 8 00:08:33.331527 containerd[1505]: time="2025-05-08T00:08:33.331356237Z" level=info msg="CreateContainer within sandbox \"176f6ccc842fc8d1c15143c6d453d69828b39f6799aef49f9285773321f433ee\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e2b40b20e2f457b8622a55cbb8f36cd2b2860fa71b932d4695b8f0e2dbca38b2\"" May 8 00:08:33.333393 containerd[1505]: time="2025-05-08T00:08:33.332444344Z" level=info msg="StartContainer for \"e2b40b20e2f457b8622a55cbb8f36cd2b2860fa71b932d4695b8f0e2dbca38b2\"" May 8 00:08:33.333393 containerd[1505]: time="2025-05-08T00:08:33.333126146Z" level=info msg="CreateContainer within sandbox \"b52c1ffe65f0a56d5b39f70ab43941b4ad3a467f6cb185d49f1d33626fc3b5ab\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"07d2bc09536c70584ab8a31a48d8129f427fa700e47c591de1d1ed0d865d2016\"" May 8 00:08:33.334402 containerd[1505]: time="2025-05-08T00:08:33.334186359Z" level=info msg="StartContainer for 
\"07d2bc09536c70584ab8a31a48d8129f427fa700e47c591de1d1ed0d865d2016\"" May 8 00:08:33.431638 systemd[1]: Started cri-containerd-4d9506a49885b0fc7d18e6121aea512f9153d0a91f020d43233805b4af80042d.scope - libcontainer container 4d9506a49885b0fc7d18e6121aea512f9153d0a91f020d43233805b4af80042d. May 8 00:08:33.447968 systemd[1]: Started cri-containerd-07d2bc09536c70584ab8a31a48d8129f427fa700e47c591de1d1ed0d865d2016.scope - libcontainer container 07d2bc09536c70584ab8a31a48d8129f427fa700e47c591de1d1ed0d865d2016. May 8 00:08:33.452880 systemd[1]: Started cri-containerd-e2b40b20e2f457b8622a55cbb8f36cd2b2860fa71b932d4695b8f0e2dbca38b2.scope - libcontainer container e2b40b20e2f457b8622a55cbb8f36cd2b2860fa71b932d4695b8f0e2dbca38b2. May 8 00:08:33.556953 containerd[1505]: time="2025-05-08T00:08:33.556745782Z" level=info msg="StartContainer for \"07d2bc09536c70584ab8a31a48d8129f427fa700e47c591de1d1ed0d865d2016\" returns successfully" May 8 00:08:33.566628 containerd[1505]: time="2025-05-08T00:08:33.565780496Z" level=info msg="StartContainer for \"e2b40b20e2f457b8622a55cbb8f36cd2b2860fa71b932d4695b8f0e2dbca38b2\" returns successfully" May 8 00:08:33.570805 containerd[1505]: time="2025-05-08T00:08:33.570779409Z" level=info msg="StartContainer for \"4d9506a49885b0fc7d18e6121aea512f9153d0a91f020d43233805b4af80042d\" returns successfully" May 8 00:08:33.635044 kubelet[2236]: W0508 00:08:33.633393 2236 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused May 8 00:08:33.635044 kubelet[2236]: E0508 00:08:33.633529 2236 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" 
logger="UnhandledError" May 8 00:08:33.951733 kubelet[2236]: E0508 00:08:33.951478 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:08:33.955299 kubelet[2236]: E0508 00:08:33.953935 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:08:33.955948 kubelet[2236]: E0508 00:08:33.955916 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:08:34.399030 kubelet[2236]: I0508 00:08:34.398974 2236 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:08:34.962790 kubelet[2236]: E0508 00:08:34.962742 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:08:35.204570 kubelet[2236]: E0508 00:08:35.204516 2236 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 8 00:08:35.323632 kubelet[2236]: I0508 00:08:35.322070 2236 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 8 00:08:35.323632 kubelet[2236]: E0508 00:08:35.322124 2236 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 8 00:08:35.895942 kubelet[2236]: I0508 00:08:35.895850 2236 apiserver.go:52] "Watching apiserver" May 8 00:08:35.916181 kubelet[2236]: I0508 00:08:35.916095 2236 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 8 00:08:37.887986 systemd[1]: Reload requested from client PID 2519 ('systemctl') 
(unit session-7.scope)... May 8 00:08:37.888003 systemd[1]: Reloading... May 8 00:08:37.993631 zram_generator::config[2566]: No configuration found. May 8 00:08:38.134571 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:08:38.266936 systemd[1]: Reloading finished in 378 ms. May 8 00:08:38.290389 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:08:38.312022 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:08:38.312373 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:08:38.312440 systemd[1]: kubelet.service: Consumed 906ms CPU time, 121.5M memory peak. May 8 00:08:38.323042 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:08:38.522839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:08:38.528508 (kubelet)[2608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:08:38.582961 kubelet[2608]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:08:38.582961 kubelet[2608]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:08:38.582961 kubelet[2608]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 8 00:08:38.583459 kubelet[2608]: I0508 00:08:38.583013 2608 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:08:38.593041 kubelet[2608]: I0508 00:08:38.591272 2608 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 8 00:08:38.593041 kubelet[2608]: I0508 00:08:38.591376 2608 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:08:38.593041 kubelet[2608]: I0508 00:08:38.591897 2608 server.go:929] "Client rotation is on, will bootstrap in background" May 8 00:08:38.595559 kubelet[2608]: I0508 00:08:38.595361 2608 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 8 00:08:38.597757 kubelet[2608]: I0508 00:08:38.597715 2608 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:08:38.602460 kubelet[2608]: E0508 00:08:38.602421 2608 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:08:38.602460 kubelet[2608]: I0508 00:08:38.602459 2608 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:08:38.610047 kubelet[2608]: I0508 00:08:38.610001 2608 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:08:38.610156 kubelet[2608]: I0508 00:08:38.610141 2608 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 8 00:08:38.610385 kubelet[2608]: I0508 00:08:38.610321 2608 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:08:38.610642 kubelet[2608]: I0508 00:08:38.610373 2608 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} May 8 00:08:38.610780 kubelet[2608]: I0508 00:08:38.610644 2608 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:08:38.610780 kubelet[2608]: I0508 00:08:38.610659 2608 container_manager_linux.go:300] "Creating device plugin manager" May 8 00:08:38.610780 kubelet[2608]: I0508 00:08:38.610707 2608 state_mem.go:36] "Initialized new in-memory state store" May 8 00:08:38.615056 kubelet[2608]: I0508 00:08:38.610877 2608 kubelet.go:408] "Attempting to sync node with API server" May 8 00:08:38.615056 kubelet[2608]: I0508 00:08:38.611155 2608 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:08:38.615056 kubelet[2608]: I0508 00:08:38.611197 2608 kubelet.go:314] "Adding apiserver pod source" May 8 00:08:38.615056 kubelet[2608]: I0508 00:08:38.611217 2608 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:08:38.615056 kubelet[2608]: I0508 00:08:38.612047 2608 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 00:08:38.615056 kubelet[2608]: I0508 00:08:38.612566 2608 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:08:38.615056 kubelet[2608]: I0508 00:08:38.613083 2608 server.go:1269] "Started kubelet" May 8 00:08:38.620828 kubelet[2608]: I0508 00:08:38.620792 2608 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:08:38.624272 kubelet[2608]: I0508 00:08:38.621497 2608 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:08:38.625022 kubelet[2608]: I0508 00:08:38.624986 2608 volume_manager.go:289] "Starting Kubelet Volume Manager" May 8 00:08:38.625603 kubelet[2608]: I0508 00:08:38.625564 2608 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 8 00:08:38.626130 
kubelet[2608]: I0508 00:08:38.625774 2608 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:08:38.627393 kubelet[2608]: I0508 00:08:38.627366 2608 server.go:460] "Adding debug handlers to kubelet server" May 8 00:08:38.628380 kubelet[2608]: I0508 00:08:38.627956 2608 factory.go:221] Registration of the systemd container factory successfully May 8 00:08:38.628380 kubelet[2608]: I0508 00:08:38.628102 2608 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:08:38.628380 kubelet[2608]: I0508 00:08:38.625869 2608 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:08:38.628380 kubelet[2608]: I0508 00:08:38.628345 2608 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:08:38.628380 kubelet[2608]: E0508 00:08:38.625860 2608 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:08:38.629090 kubelet[2608]: I0508 00:08:38.625927 2608 reconciler.go:26] "Reconciler: start to sync state" May 8 00:08:38.632623 kubelet[2608]: I0508 00:08:38.632555 2608 factory.go:221] Registration of the containerd container factory successfully May 8 00:08:38.632837 kubelet[2608]: E0508 00:08:38.632745 2608 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:08:38.636419 kubelet[2608]: I0508 00:08:38.636372 2608 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:08:38.638148 kubelet[2608]: I0508 00:08:38.638109 2608 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 00:08:38.638148 kubelet[2608]: I0508 00:08:38.638144 2608 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:08:38.638246 kubelet[2608]: I0508 00:08:38.638165 2608 kubelet.go:2321] "Starting kubelet main sync loop" May 8 00:08:38.638246 kubelet[2608]: E0508 00:08:38.638210 2608 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:08:38.679182 kubelet[2608]: I0508 00:08:38.679108 2608 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:08:38.679182 kubelet[2608]: I0508 00:08:38.679130 2608 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:08:38.679182 kubelet[2608]: I0508 00:08:38.679152 2608 state_mem.go:36] "Initialized new in-memory state store" May 8 00:08:38.679384 kubelet[2608]: I0508 00:08:38.679297 2608 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 00:08:38.679384 kubelet[2608]: I0508 00:08:38.679308 2608 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 00:08:38.679384 kubelet[2608]: I0508 00:08:38.679326 2608 policy_none.go:49] "None policy: Start" May 8 00:08:38.680149 kubelet[2608]: I0508 00:08:38.680124 2608 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:08:38.680149 kubelet[2608]: I0508 00:08:38.680150 2608 state_mem.go:35] "Initializing new in-memory state store" May 8 00:08:38.680296 kubelet[2608]: I0508 00:08:38.680275 2608 state_mem.go:75] "Updated machine memory state" May 8 00:08:38.685068 kubelet[2608]: I0508 00:08:38.685032 2608 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:08:38.685377 kubelet[2608]: I0508 00:08:38.685301 2608 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:08:38.685377 kubelet[2608]: I0508 00:08:38.685324 2608 container_log_manager.go:189] "Initializing container 
log rotate workers" workers=1 monitorPeriod="10s" May 8 00:08:38.685609 kubelet[2608]: I0508 00:08:38.685570 2608 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:08:38.792425 kubelet[2608]: I0508 00:08:38.792294 2608 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:08:38.829943 kubelet[2608]: I0508 00:08:38.829894 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:08:38.829943 kubelet[2608]: I0508 00:08:38.829940 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 8 00:08:38.829943 kubelet[2608]: I0508 00:08:38.829959 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:08:38.830161 kubelet[2608]: I0508 00:08:38.829975 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/717c4a9af07d76113a41884f1a76de84-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"717c4a9af07d76113a41884f1a76de84\") " pod="kube-system/kube-apiserver-localhost" May 8 00:08:38.830161 kubelet[2608]: I0508 00:08:38.829991 2608 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/717c4a9af07d76113a41884f1a76de84-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"717c4a9af07d76113a41884f1a76de84\") " pod="kube-system/kube-apiserver-localhost" May 8 00:08:38.830161 kubelet[2608]: I0508 00:08:38.830006 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:08:38.830161 kubelet[2608]: I0508 00:08:38.830018 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:08:38.830161 kubelet[2608]: I0508 00:08:38.830031 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:08:38.830298 kubelet[2608]: I0508 00:08:38.830045 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/717c4a9af07d76113a41884f1a76de84-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"717c4a9af07d76113a41884f1a76de84\") " pod="kube-system/kube-apiserver-localhost" May 8 00:08:38.916086 kubelet[2608]: I0508 00:08:38.916011 2608 kubelet_node_status.go:111] 
"Node was previously registered" node="localhost" May 8 00:08:38.916504 kubelet[2608]: I0508 00:08:38.916468 2608 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 8 00:08:39.214019 kubelet[2608]: E0508 00:08:39.213851 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:08:39.217801 kubelet[2608]: E0508 00:08:39.217756 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:08:39.217801 kubelet[2608]: E0508 00:08:39.217784 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:08:39.611692 kubelet[2608]: I0508 00:08:39.611632 2608 apiserver.go:52] "Watching apiserver" May 8 00:08:39.626267 kubelet[2608]: I0508 00:08:39.626210 2608 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 8 00:08:39.658133 kubelet[2608]: E0508 00:08:39.657819 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:08:39.658133 kubelet[2608]: E0508 00:08:39.657871 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:08:39.658133 kubelet[2608]: E0508 00:08:39.658069 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:08:39.742740 kubelet[2608]: I0508 00:08:39.742669 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7426050119999998 podStartE2EDuration="1.742605012s" podCreationTimestamp="2025-05-08 00:08:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:08:39.742359355 +0000 UTC m=+1.207186972" watchObservedRunningTime="2025-05-08 00:08:39.742605012 +0000 UTC m=+1.207432629" May 8 00:08:39.742964 kubelet[2608]: I0508 00:08:39.742807 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.742802967 podStartE2EDuration="1.742802967s" podCreationTimestamp="2025-05-08 00:08:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:08:39.733625898 +0000 UTC m=+1.198453515" watchObservedRunningTime="2025-05-08 00:08:39.742802967 +0000 UTC m=+1.207630584" May 8 00:08:39.752969 kubelet[2608]: I0508 00:08:39.752895 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.752868613 podStartE2EDuration="1.752868613s" podCreationTimestamp="2025-05-08 00:08:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:08:39.752543506 +0000 UTC m=+1.217371123" watchObservedRunningTime="2025-05-08 00:08:39.752868613 +0000 UTC m=+1.217696230" May 8 00:08:40.659346 kubelet[2608]: E0508 00:08:40.659286 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:08:41.318178 update_engine[1498]: I20250508 00:08:41.318019 1498 update_attempter.cc:509] Updating boot flags... 
May 8 00:08:41.361671 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2672)
May 8 00:08:41.436752 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2675)
May 8 00:08:41.490699 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2675)
May 8 00:08:42.604709 kubelet[2608]: I0508 00:08:42.604659 2608 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 8 00:08:42.605168 containerd[1505]: time="2025-05-08T00:08:42.605041374Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 8 00:08:42.605524 kubelet[2608]: I0508 00:08:42.605290 2608 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 8 00:08:42.833720 kubelet[2608]: E0508 00:08:42.833675 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:08:43.275611 systemd[1]: Created slice kubepods-besteffort-podc49199c0_df10_4a30_a425_79140977a147.slice - libcontainer container kubepods-besteffort-podc49199c0_df10_4a30_a425_79140977a147.slice.
May 8 00:08:43.355332 kubelet[2608]: I0508 00:08:43.355254 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c49199c0-df10-4a30-a425-79140977a147-xtables-lock\") pod \"kube-proxy-c7c89\" (UID: \"c49199c0-df10-4a30-a425-79140977a147\") " pod="kube-system/kube-proxy-c7c89"
May 8 00:08:43.355332 kubelet[2608]: I0508 00:08:43.355306 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c49199c0-df10-4a30-a425-79140977a147-lib-modules\") pod \"kube-proxy-c7c89\" (UID: \"c49199c0-df10-4a30-a425-79140977a147\") " pod="kube-system/kube-proxy-c7c89"
May 8 00:08:43.355332 kubelet[2608]: I0508 00:08:43.355327 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvqwt\" (UniqueName: \"kubernetes.io/projected/c49199c0-df10-4a30-a425-79140977a147-kube-api-access-lvqwt\") pod \"kube-proxy-c7c89\" (UID: \"c49199c0-df10-4a30-a425-79140977a147\") " pod="kube-system/kube-proxy-c7c89"
May 8 00:08:43.355332 kubelet[2608]: I0508 00:08:43.355350 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c49199c0-df10-4a30-a425-79140977a147-kube-proxy\") pod \"kube-proxy-c7c89\" (UID: \"c49199c0-df10-4a30-a425-79140977a147\") " pod="kube-system/kube-proxy-c7c89"
May 8 00:08:43.585478 kubelet[2608]: E0508 00:08:43.585320 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:08:43.586053 containerd[1505]: time="2025-05-08T00:08:43.586012001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c7c89,Uid:c49199c0-df10-4a30-a425-79140977a147,Namespace:kube-system,Attempt:0,}"
May 8 00:08:43.758637 kubelet[2608]: I0508 00:08:43.758018 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hbzp\" (UniqueName: \"kubernetes.io/projected/42ef17f8-7c63-491c-9643-4443ae727149-kube-api-access-6hbzp\") pod \"tigera-operator-6f6897fdc5-5n9bs\" (UID: \"42ef17f8-7c63-491c-9643-4443ae727149\") " pod="tigera-operator/tigera-operator-6f6897fdc5-5n9bs"
May 8 00:08:43.760084 systemd[1]: Created slice kubepods-besteffort-pod42ef17f8_7c63_491c_9643_4443ae727149.slice - libcontainer container kubepods-besteffort-pod42ef17f8_7c63_491c_9643_4443ae727149.slice.
May 8 00:08:43.762403 kubelet[2608]: I0508 00:08:43.762381 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/42ef17f8-7c63-491c-9643-4443ae727149-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-5n9bs\" (UID: \"42ef17f8-7c63-491c-9643-4443ae727149\") " pod="tigera-operator/tigera-operator-6f6897fdc5-5n9bs"
May 8 00:08:43.769486 containerd[1505]: time="2025-05-08T00:08:43.769183187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:08:43.769486 containerd[1505]: time="2025-05-08T00:08:43.769266957Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:08:43.769486 containerd[1505]: time="2025-05-08T00:08:43.769277957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:08:43.769486 containerd[1505]: time="2025-05-08T00:08:43.769357938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:08:43.801802 systemd[1]: Started cri-containerd-4be348c7a59cf1dcf8617b68fc56553ccc8fab0c7c7fd92a5db9bd2db4b6d1be.scope - libcontainer container 4be348c7a59cf1dcf8617b68fc56553ccc8fab0c7c7fd92a5db9bd2db4b6d1be.
May 8 00:08:43.825660 containerd[1505]: time="2025-05-08T00:08:43.825571392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c7c89,Uid:c49199c0-df10-4a30-a425-79140977a147,Namespace:kube-system,Attempt:0,} returns sandbox id \"4be348c7a59cf1dcf8617b68fc56553ccc8fab0c7c7fd92a5db9bd2db4b6d1be\""
May 8 00:08:43.826391 kubelet[2608]: E0508 00:08:43.826362 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:08:43.829783 containerd[1505]: time="2025-05-08T00:08:43.829740404Z" level=info msg="CreateContainer within sandbox \"4be348c7a59cf1dcf8617b68fc56553ccc8fab0c7c7fd92a5db9bd2db4b6d1be\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 8 00:08:43.859211 containerd[1505]: time="2025-05-08T00:08:43.859032294Z" level=info msg="CreateContainer within sandbox \"4be348c7a59cf1dcf8617b68fc56553ccc8fab0c7c7fd92a5db9bd2db4b6d1be\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"aed6bbdf843be16fa0e267fb788fa082931faa6c4b30cadc58bb1bd6d0744f6a\""
May 8 00:08:43.861004 containerd[1505]: time="2025-05-08T00:08:43.859826117Z" level=info msg="StartContainer for \"aed6bbdf843be16fa0e267fb788fa082931faa6c4b30cadc58bb1bd6d0744f6a\""
May 8 00:08:43.895737 systemd[1]: Started cri-containerd-aed6bbdf843be16fa0e267fb788fa082931faa6c4b30cadc58bb1bd6d0744f6a.scope - libcontainer container aed6bbdf843be16fa0e267fb788fa082931faa6c4b30cadc58bb1bd6d0744f6a.
May 8 00:08:43.899969 sudo[1690]: pam_unix(sudo:session): session closed for user root
May 8 00:08:43.902215 sshd[1689]: Connection closed by 10.0.0.1 port 33928
May 8 00:08:43.903193 sshd-session[1684]: pam_unix(sshd:session): session closed for user core
May 8 00:08:43.907755 systemd[1]: sshd@6-10.0.0.79:22-10.0.0.1:33928.service: Deactivated successfully.
May 8 00:08:43.910509 systemd[1]: session-7.scope: Deactivated successfully.
May 8 00:08:43.910907 systemd[1]: session-7.scope: Consumed 5.934s CPU time, 215M memory peak.
May 8 00:08:43.912789 systemd-logind[1492]: Session 7 logged out. Waiting for processes to exit.
May 8 00:08:43.914629 systemd-logind[1492]: Removed session 7.
May 8 00:08:43.935334 containerd[1505]: time="2025-05-08T00:08:43.935287547Z" level=info msg="StartContainer for \"aed6bbdf843be16fa0e267fb788fa082931faa6c4b30cadc58bb1bd6d0744f6a\" returns successfully"
May 8 00:08:44.067277 containerd[1505]: time="2025-05-08T00:08:44.067226538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-5n9bs,Uid:42ef17f8-7c63-491c-9643-4443ae727149,Namespace:tigera-operator,Attempt:0,}"
May 8 00:08:44.102321 containerd[1505]: time="2025-05-08T00:08:44.102195267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:08:44.102321 containerd[1505]: time="2025-05-08T00:08:44.102272553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:08:44.102321 containerd[1505]: time="2025-05-08T00:08:44.102288824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:08:44.102749 containerd[1505]: time="2025-05-08T00:08:44.102413451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:08:44.126930 systemd[1]: Started cri-containerd-b91098e2ff36a840829159ce7b2938b0a120e33684454860250155dfbd64a59e.scope - libcontainer container b91098e2ff36a840829159ce7b2938b0a120e33684454860250155dfbd64a59e.
May 8 00:08:44.166375 containerd[1505]: time="2025-05-08T00:08:44.166328223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-5n9bs,Uid:42ef17f8-7c63-491c-9643-4443ae727149,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b91098e2ff36a840829159ce7b2938b0a120e33684454860250155dfbd64a59e\""
May 8 00:08:44.167797 containerd[1505]: time="2025-05-08T00:08:44.167770822Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
May 8 00:08:44.248406 kubelet[2608]: E0508 00:08:44.248348 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:08:44.509034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1713798926.mount: Deactivated successfully.
May 8 00:08:44.674233 kubelet[2608]: E0508 00:08:44.674179 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:08:44.674376 kubelet[2608]: E0508 00:08:44.674242 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:08:44.684818 kubelet[2608]: I0508 00:08:44.684710 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c7c89" podStartSLOduration=1.684689853 podStartE2EDuration="1.684689853s" podCreationTimestamp="2025-05-08 00:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:08:44.684442365 +0000 UTC m=+6.149270012" watchObservedRunningTime="2025-05-08 00:08:44.684689853 +0000 UTC m=+6.149517470"
May 8 00:08:45.641125 kubelet[2608]: E0508 00:08:45.641076 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:08:45.675944 kubelet[2608]: E0508 00:08:45.675904 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:08:45.782268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3773969583.mount: Deactivated successfully.
May 8 00:08:46.100792 containerd[1505]: time="2025-05-08T00:08:46.100723042Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:08:46.101562 containerd[1505]: time="2025-05-08T00:08:46.101480764Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662"
May 8 00:08:46.102655 containerd[1505]: time="2025-05-08T00:08:46.102621170Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:08:46.105083 containerd[1505]: time="2025-05-08T00:08:46.105041123Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:08:46.105836 containerd[1505]: time="2025-05-08T00:08:46.105786552Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 1.93798389s"
May 8 00:08:46.105836 containerd[1505]: time="2025-05-08T00:08:46.105831768Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\""
May 8 00:08:46.108451 containerd[1505]: time="2025-05-08T00:08:46.108414478Z" level=info msg="CreateContainer within sandbox \"b91098e2ff36a840829159ce7b2938b0a120e33684454860250155dfbd64a59e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 8 00:08:46.128724 containerd[1505]: time="2025-05-08T00:08:46.128653861Z" level=info msg="CreateContainer within sandbox \"b91098e2ff36a840829159ce7b2938b0a120e33684454860250155dfbd64a59e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cb851a4d4ce45e8fb56d0880365a8ae5e453121aa5abf4d506539987ab3dab9f\""
May 8 00:08:46.129367 containerd[1505]: time="2025-05-08T00:08:46.129334677Z" level=info msg="StartContainer for \"cb851a4d4ce45e8fb56d0880365a8ae5e453121aa5abf4d506539987ab3dab9f\""
May 8 00:08:46.165966 systemd[1]: Started cri-containerd-cb851a4d4ce45e8fb56d0880365a8ae5e453121aa5abf4d506539987ab3dab9f.scope - libcontainer container cb851a4d4ce45e8fb56d0880365a8ae5e453121aa5abf4d506539987ab3dab9f.
May 8 00:08:46.200433 containerd[1505]: time="2025-05-08T00:08:46.200367267Z" level=info msg="StartContainer for \"cb851a4d4ce45e8fb56d0880365a8ae5e453121aa5abf4d506539987ab3dab9f\" returns successfully"
May 8 00:08:46.816250 kubelet[2608]: I0508 00:08:46.816164 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-5n9bs" podStartSLOduration=1.876675254 podStartE2EDuration="3.816141826s" podCreationTimestamp="2025-05-08 00:08:43 +0000 UTC" firstStartedPulling="2025-05-08 00:08:44.167327905 +0000 UTC m=+5.632155522" lastFinishedPulling="2025-05-08 00:08:46.106794477 +0000 UTC m=+7.571622094" observedRunningTime="2025-05-08 00:08:46.816063028 +0000 UTC m=+8.280890645" watchObservedRunningTime="2025-05-08 00:08:46.816141826 +0000 UTC m=+8.280969444"
May 8 00:08:49.199471 kubelet[2608]: I0508 00:08:49.199415 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a7dd032-c25a-4bc5-84f4-a64f06230d93-tigera-ca-bundle\") pod \"calico-typha-959fbcc55-989cf\" (UID: \"4a7dd032-c25a-4bc5-84f4-a64f06230d93\") " pod="calico-system/calico-typha-959fbcc55-989cf"
May 8 00:08:49.199471 kubelet[2608]: I0508 00:08:49.199465 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrkp5\" (UniqueName: \"kubernetes.io/projected/4a7dd032-c25a-4bc5-84f4-a64f06230d93-kube-api-access-xrkp5\") pod \"calico-typha-959fbcc55-989cf\" (UID: \"4a7dd032-c25a-4bc5-84f4-a64f06230d93\") " pod="calico-system/calico-typha-959fbcc55-989cf"
May 8 00:08:49.199471 kubelet[2608]: I0508 00:08:49.199483 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4a7dd032-c25a-4bc5-84f4-a64f06230d93-typha-certs\") pod \"calico-typha-959fbcc55-989cf\" (UID: \"4a7dd032-c25a-4bc5-84f4-a64f06230d93\") " pod="calico-system/calico-typha-959fbcc55-989cf"
May 8 00:08:49.202233 systemd[1]: Created slice kubepods-besteffort-pod4a7dd032_c25a_4bc5_84f4_a64f06230d93.slice - libcontainer container kubepods-besteffort-pod4a7dd032_c25a_4bc5_84f4_a64f06230d93.slice.
May 8 00:08:49.266197 systemd[1]: Created slice kubepods-besteffort-poda1ee7334_3de6_4c8d_bc05_2394ca5cccab.slice - libcontainer container kubepods-besteffort-poda1ee7334_3de6_4c8d_bc05_2394ca5cccab.slice.
May 8 00:08:49.300326 kubelet[2608]: I0508 00:08:49.300266 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a1ee7334-3de6-4c8d-bc05-2394ca5cccab-var-run-calico\") pod \"calico-node-mx9z2\" (UID: \"a1ee7334-3de6-4c8d-bc05-2394ca5cccab\") " pod="calico-system/calico-node-mx9z2"
May 8 00:08:49.300326 kubelet[2608]: I0508 00:08:49.300319 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a1ee7334-3de6-4c8d-bc05-2394ca5cccab-cni-log-dir\") pod \"calico-node-mx9z2\" (UID: \"a1ee7334-3de6-4c8d-bc05-2394ca5cccab\") " pod="calico-system/calico-node-mx9z2"
May 8 00:08:49.300564 kubelet[2608]: I0508 00:08:49.300433 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1ee7334-3de6-4c8d-bc05-2394ca5cccab-xtables-lock\") pod \"calico-node-mx9z2\" (UID: \"a1ee7334-3de6-4c8d-bc05-2394ca5cccab\") " pod="calico-system/calico-node-mx9z2"
May 8 00:08:49.300564 kubelet[2608]: I0508 00:08:49.300453 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a1ee7334-3de6-4c8d-bc05-2394ca5cccab-policysync\") pod \"calico-node-mx9z2\" (UID: \"a1ee7334-3de6-4c8d-bc05-2394ca5cccab\") " pod="calico-system/calico-node-mx9z2"
May 8 00:08:49.300564 kubelet[2608]: I0508 00:08:49.300467 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a1ee7334-3de6-4c8d-bc05-2394ca5cccab-var-lib-calico\") pod \"calico-node-mx9z2\" (UID: \"a1ee7334-3de6-4c8d-bc05-2394ca5cccab\") " pod="calico-system/calico-node-mx9z2"
May 8 00:08:49.300564 kubelet[2608]: I0508 00:08:49.300483 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1ee7334-3de6-4c8d-bc05-2394ca5cccab-lib-modules\") pod \"calico-node-mx9z2\" (UID: \"a1ee7334-3de6-4c8d-bc05-2394ca5cccab\") " pod="calico-system/calico-node-mx9z2"
May 8 00:08:49.300564 kubelet[2608]: I0508 00:08:49.300502 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a1ee7334-3de6-4c8d-bc05-2394ca5cccab-cni-net-dir\") pod \"calico-node-mx9z2\" (UID: \"a1ee7334-3de6-4c8d-bc05-2394ca5cccab\") " pod="calico-system/calico-node-mx9z2"
May 8 00:08:49.300764 kubelet[2608]: I0508 00:08:49.300555 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5brn2\" (UniqueName: \"kubernetes.io/projected/a1ee7334-3de6-4c8d-bc05-2394ca5cccab-kube-api-access-5brn2\") pod \"calico-node-mx9z2\" (UID: \"a1ee7334-3de6-4c8d-bc05-2394ca5cccab\") " pod="calico-system/calico-node-mx9z2"
May 8 00:08:49.300764 kubelet[2608]: I0508 00:08:49.300577 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1ee7334-3de6-4c8d-bc05-2394ca5cccab-tigera-ca-bundle\") pod \"calico-node-mx9z2\" (UID: \"a1ee7334-3de6-4c8d-bc05-2394ca5cccab\") " pod="calico-system/calico-node-mx9z2"
May 8 00:08:49.300764 kubelet[2608]: I0508 00:08:49.300608 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a1ee7334-3de6-4c8d-bc05-2394ca5cccab-cni-bin-dir\") pod \"calico-node-mx9z2\" (UID: \"a1ee7334-3de6-4c8d-bc05-2394ca5cccab\") " pod="calico-system/calico-node-mx9z2"
May 8 00:08:49.300764 kubelet[2608]: I0508 00:08:49.300626 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a1ee7334-3de6-4c8d-bc05-2394ca5cccab-flexvol-driver-host\") pod \"calico-node-mx9z2\" (UID: \"a1ee7334-3de6-4c8d-bc05-2394ca5cccab\") " pod="calico-system/calico-node-mx9z2"
May 8 00:08:49.300764 kubelet[2608]: I0508 00:08:49.300681 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a1ee7334-3de6-4c8d-bc05-2394ca5cccab-node-certs\") pod \"calico-node-mx9z2\" (UID: \"a1ee7334-3de6-4c8d-bc05-2394ca5cccab\") " pod="calico-system/calico-node-mx9z2"
May 8 00:08:49.360484 kubelet[2608]: E0508 00:08:49.360425 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d66hf" podUID="d6b5bca2-fe34-4d13-a1a5-1648d982e2b2"
May 8 00:08:49.401960 kubelet[2608]: I0508 00:08:49.401904 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6bkj\" (UniqueName: \"kubernetes.io/projected/d6b5bca2-fe34-4d13-a1a5-1648d982e2b2-kube-api-access-h6bkj\") pod \"csi-node-driver-d66hf\" (UID: \"d6b5bca2-fe34-4d13-a1a5-1648d982e2b2\") " pod="calico-system/csi-node-driver-d66hf"
May 8 00:08:49.402163 kubelet[2608]: I0508 00:08:49.401994 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d6b5bca2-fe34-4d13-a1a5-1648d982e2b2-varrun\") pod \"csi-node-driver-d66hf\" (UID: \"d6b5bca2-fe34-4d13-a1a5-1648d982e2b2\") " pod="calico-system/csi-node-driver-d66hf"
May 8 00:08:49.402163 kubelet[2608]: I0508 00:08:49.402016 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6b5bca2-fe34-4d13-a1a5-1648d982e2b2-kubelet-dir\") pod \"csi-node-driver-d66hf\" (UID: \"d6b5bca2-fe34-4d13-a1a5-1648d982e2b2\") " pod="calico-system/csi-node-driver-d66hf"
May 8 00:08:49.402163 kubelet[2608]: I0508 00:08:49.402088 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d6b5bca2-fe34-4d13-a1a5-1648d982e2b2-socket-dir\") pod \"csi-node-driver-d66hf\" (UID: \"d6b5bca2-fe34-4d13-a1a5-1648d982e2b2\") " pod="calico-system/csi-node-driver-d66hf"
May 8 00:08:49.402163 kubelet[2608]: I0508 00:08:49.402141 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d6b5bca2-fe34-4d13-a1a5-1648d982e2b2-registration-dir\") pod \"csi-node-driver-d66hf\" (UID: \"d6b5bca2-fe34-4d13-a1a5-1648d982e2b2\") " pod="calico-system/csi-node-driver-d66hf"
May 8 00:08:49.410350 kubelet[2608]: E0508 00:08:49.410312 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:08:49.410350 kubelet[2608]: W0508 00:08:49.410341 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:08:49.410528 kubelet[2608]: E0508 00:08:49.410372 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:08:49.418701 kubelet[2608]: E0508 00:08:49.418663 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:08:49.418701 kubelet[2608]: W0508 00:08:49.418688 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:08:49.418916 kubelet[2608]: E0508 00:08:49.418713 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:08:49.503130 kubelet[2608]: E0508 00:08:49.503093 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:08:49.503130 kubelet[2608]: W0508 00:08:49.503118 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:08:49.503290 kubelet[2608]: E0508 00:08:49.503141 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:08:49.503545 kubelet[2608]: E0508 00:08:49.503497 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:08:49.503610 kubelet[2608]: W0508 00:08:49.503540 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:08:49.503610 kubelet[2608]: E0508 00:08:49.503575 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:08:49.503919 kubelet[2608]: E0508 00:08:49.503890 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:08:49.503919 kubelet[2608]: W0508 00:08:49.503903 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:08:49.503919 kubelet[2608]: E0508 00:08:49.503917 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:08:49.504171 kubelet[2608]: E0508 00:08:49.504155 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:08:49.504171 kubelet[2608]: W0508 00:08:49.504168 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:08:49.504235 kubelet[2608]: E0508 00:08:49.504187 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:08:49.504423 kubelet[2608]: E0508 00:08:49.504404 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:08:49.504423 kubelet[2608]: W0508 00:08:49.504420 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:08:49.504481 kubelet[2608]: E0508 00:08:49.504436 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:08:49.504802 kubelet[2608]: E0508 00:08:49.504785 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:08:49.504848 kubelet[2608]: W0508 00:08:49.504801 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:08:49.504848 kubelet[2608]: E0508 00:08:49.504821 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:08:49.505087 kubelet[2608]: E0508 00:08:49.505063 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:08:49.505087 kubelet[2608]: W0508 00:08:49.505081 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:08:49.505135 kubelet[2608]: E0508 00:08:49.505098 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:08:49.505443 kubelet[2608]: E0508 00:08:49.505405 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:08:49.505473 kubelet[2608]: W0508 00:08:49.505436 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:08:49.505502 kubelet[2608]: E0508 00:08:49.505471 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:08:49.505729 kubelet[2608]: E0508 00:08:49.505703 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:08:49.505729 kubelet[2608]: W0508 00:08:49.505716 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:08:49.505729 kubelet[2608]: E0508 00:08:49.505731 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:08:49.505949 kubelet[2608]: E0508 00:08:49.505923 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:08:49.505949 kubelet[2608]: W0508 00:08:49.505935 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:08:49.505949 kubelet[2608]: E0508 00:08:49.505949 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:08:49.506225 kubelet[2608]: E0508 00:08:49.506199 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:08:49.506225 kubelet[2608]: W0508 00:08:49.506211 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:08:49.506225 kubelet[2608]: E0508 00:08:49.506224 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:08:49.506459 kubelet[2608]: E0508 00:08:49.506433 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:08:49.506459 kubelet[2608]: W0508 00:08:49.506446 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:08:49.506459 kubelet[2608]: E0508 00:08:49.506459 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:08:49.506755 kubelet[2608]: E0508 00:08:49.506732 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:08:49.506789 kubelet[2608]: W0508 00:08:49.506753 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:08:49.506789 kubelet[2608]: E0508 00:08:49.506779 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:08:49.507008 kubelet[2608]: E0508 00:08:49.506993 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:08:49.507008 kubelet[2608]: W0508 00:08:49.507004 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:08:49.507076 kubelet[2608]: E0508 00:08:49.507019 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:08:49.507076 kubelet[2608]: E0508 00:08:49.507021 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:08:49.507391 kubelet[2608]: E0508 00:08:49.507204 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:08:49.507391 kubelet[2608]: W0508 00:08:49.507219 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:08:49.507391 kubelet[2608]: E0508 00:08:49.507231 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" May 8 00:08:49.507523 kubelet[2608]: E0508 00:08:49.507497 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:08:49.507523 kubelet[2608]: W0508 00:08:49.507518 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:08:49.507599 kubelet[2608]: E0508 00:08:49.507536 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:08:49.507715 containerd[1505]: time="2025-05-08T00:08:49.507672085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-959fbcc55-989cf,Uid:4a7dd032-c25a-4bc5-84f4-a64f06230d93,Namespace:calico-system,Attempt:0,}" May 8 00:08:49.508069 kubelet[2608]: E0508 00:08:49.507812 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:08:49.508069 kubelet[2608]: W0508 00:08:49.507822 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:08:49.508069 kubelet[2608]: E0508 00:08:49.507941 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:08:49.508153 kubelet[2608]: E0508 00:08:49.508133 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:08:49.508153 kubelet[2608]: W0508 00:08:49.508142 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:08:49.508217 kubelet[2608]: E0508 00:08:49.508193 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:08:49.508516 kubelet[2608]: E0508 00:08:49.508491 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:08:49.508516 kubelet[2608]: W0508 00:08:49.508510 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:08:49.508598 kubelet[2608]: E0508 00:08:49.508535 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:08:49.508884 kubelet[2608]: E0508 00:08:49.508863 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:08:49.508913 kubelet[2608]: W0508 00:08:49.508884 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:08:49.508939 kubelet[2608]: E0508 00:08:49.508916 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:08:49.509240 kubelet[2608]: E0508 00:08:49.509225 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:08:49.509240 kubelet[2608]: W0508 00:08:49.509238 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:08:49.509298 kubelet[2608]: E0508 00:08:49.509260 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:08:49.509570 kubelet[2608]: E0508 00:08:49.509555 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:08:49.509570 kubelet[2608]: W0508 00:08:49.509569 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:08:49.509698 kubelet[2608]: E0508 00:08:49.509667 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:08:49.509871 kubelet[2608]: E0508 00:08:49.509858 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:08:49.509905 kubelet[2608]: W0508 00:08:49.509870 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:08:49.509905 kubelet[2608]: E0508 00:08:49.509882 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:08:49.510110 kubelet[2608]: E0508 00:08:49.510095 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:08:49.510110 kubelet[2608]: W0508 00:08:49.510109 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:08:49.510173 kubelet[2608]: E0508 00:08:49.510122 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:08:49.510382 kubelet[2608]: E0508 00:08:49.510368 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:08:49.510382 kubelet[2608]: W0508 00:08:49.510381 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:08:49.510442 kubelet[2608]: E0508 00:08:49.510392 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:08:49.538616 kubelet[2608]: E0508 00:08:49.538553 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:08:49.538616 kubelet[2608]: W0508 00:08:49.538617 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:08:49.538780 kubelet[2608]: E0508 00:08:49.538649 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:08:49.569686 kubelet[2608]: E0508 00:08:49.569638 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:08:49.570459 containerd[1505]: time="2025-05-08T00:08:49.570335230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mx9z2,Uid:a1ee7334-3de6-4c8d-bc05-2394ca5cccab,Namespace:calico-system,Attempt:0,}" May 8 00:08:49.614893 containerd[1505]: time="2025-05-08T00:08:49.613638734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:08:49.614893 containerd[1505]: time="2025-05-08T00:08:49.613945001Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:08:49.614893 containerd[1505]: time="2025-05-08T00:08:49.614121524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:49.614893 containerd[1505]: time="2025-05-08T00:08:49.614344466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:49.621462 containerd[1505]: time="2025-05-08T00:08:49.621306326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:08:49.621793 containerd[1505]: time="2025-05-08T00:08:49.621457001Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:08:49.621793 containerd[1505]: time="2025-05-08T00:08:49.621494060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:49.621793 containerd[1505]: time="2025-05-08T00:08:49.621665063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:49.640094 systemd[1]: Started cri-containerd-ea9767db2fc0411c0e165af725e097c211dd7ee006c4c29d968ac2c4df747a24.scope - libcontainer container ea9767db2fc0411c0e165af725e097c211dd7ee006c4c29d968ac2c4df747a24. May 8 00:08:49.644467 systemd[1]: Started cri-containerd-3262cccf256e1eb13c01accd433d71b15ab912e4324f2c134b5615e8d3cd03d6.scope - libcontainer container 3262cccf256e1eb13c01accd433d71b15ab912e4324f2c134b5615e8d3cd03d6. 
May 8 00:08:49.675488 containerd[1505]: time="2025-05-08T00:08:49.675442536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mx9z2,Uid:a1ee7334-3de6-4c8d-bc05-2394ca5cccab,Namespace:calico-system,Attempt:0,} returns sandbox id \"3262cccf256e1eb13c01accd433d71b15ab912e4324f2c134b5615e8d3cd03d6\"" May 8 00:08:49.676795 kubelet[2608]: E0508 00:08:49.676760 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:08:49.680089 containerd[1505]: time="2025-05-08T00:08:49.679954502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 8 00:08:49.692691 containerd[1505]: time="2025-05-08T00:08:49.692634977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-959fbcc55-989cf,Uid:4a7dd032-c25a-4bc5-84f4-a64f06230d93,Namespace:calico-system,Attempt:0,} returns sandbox id \"ea9767db2fc0411c0e165af725e097c211dd7ee006c4c29d968ac2c4df747a24\"" May 8 00:08:49.693492 kubelet[2608]: E0508 00:08:49.693318 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:08:50.639707 kubelet[2608]: E0508 00:08:50.639613 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d66hf" podUID="d6b5bca2-fe34-4d13-a1a5-1648d982e2b2" May 8 00:08:51.041884 containerd[1505]: time="2025-05-08T00:08:51.041791639Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:08:51.042705 containerd[1505]: time="2025-05-08T00:08:51.042618840Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 8 00:08:51.043946 containerd[1505]: time="2025-05-08T00:08:51.043875180Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:08:51.046189 containerd[1505]: time="2025-05-08T00:08:51.046137416Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:08:51.046771 containerd[1505]: time="2025-05-08T00:08:51.046726236Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.366626049s" May 8 00:08:51.046771 containerd[1505]: time="2025-05-08T00:08:51.046762756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 8 00:08:51.047790 containerd[1505]: time="2025-05-08T00:08:51.047753925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 8 00:08:51.049265 containerd[1505]: time="2025-05-08T00:08:51.049232193Z" level=info msg="CreateContainer within sandbox \"3262cccf256e1eb13c01accd433d71b15ab912e4324f2c134b5615e8d3cd03d6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 8 00:08:51.069397 containerd[1505]: time="2025-05-08T00:08:51.069340909Z" level=info msg="CreateContainer within sandbox \"3262cccf256e1eb13c01accd433d71b15ab912e4324f2c134b5615e8d3cd03d6\" for 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6ac3bb433352cf1fd46eb2975a18686b23dc0114ccac2135d1e357fa60bf7d5c\"" May 8 00:08:51.070183 containerd[1505]: time="2025-05-08T00:08:51.070098477Z" level=info msg="StartContainer for \"6ac3bb433352cf1fd46eb2975a18686b23dc0114ccac2135d1e357fa60bf7d5c\"" May 8 00:08:51.110918 systemd[1]: Started cri-containerd-6ac3bb433352cf1fd46eb2975a18686b23dc0114ccac2135d1e357fa60bf7d5c.scope - libcontainer container 6ac3bb433352cf1fd46eb2975a18686b23dc0114ccac2135d1e357fa60bf7d5c. May 8 00:08:51.168773 systemd[1]: cri-containerd-6ac3bb433352cf1fd46eb2975a18686b23dc0114ccac2135d1e357fa60bf7d5c.scope: Deactivated successfully. May 8 00:08:51.169145 systemd[1]: cri-containerd-6ac3bb433352cf1fd46eb2975a18686b23dc0114ccac2135d1e357fa60bf7d5c.scope: Consumed 44ms CPU time, 8.4M memory peak, 6.3M written to disk. May 8 00:08:51.462541 containerd[1505]: time="2025-05-08T00:08:51.462322259Z" level=info msg="StartContainer for \"6ac3bb433352cf1fd46eb2975a18686b23dc0114ccac2135d1e357fa60bf7d5c\" returns successfully" May 8 00:08:51.489690 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ac3bb433352cf1fd46eb2975a18686b23dc0114ccac2135d1e357fa60bf7d5c-rootfs.mount: Deactivated successfully. 
May 8 00:08:51.496271 containerd[1505]: time="2025-05-08T00:08:51.496199934Z" level=info msg="shim disconnected" id=6ac3bb433352cf1fd46eb2975a18686b23dc0114ccac2135d1e357fa60bf7d5c namespace=k8s.io May 8 00:08:51.496271 containerd[1505]: time="2025-05-08T00:08:51.496262551Z" level=warning msg="cleaning up after shim disconnected" id=6ac3bb433352cf1fd46eb2975a18686b23dc0114ccac2135d1e357fa60bf7d5c namespace=k8s.io May 8 00:08:51.496271 containerd[1505]: time="2025-05-08T00:08:51.496271077Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:08:51.692676 kubelet[2608]: E0508 00:08:51.692636 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:08:52.639089 kubelet[2608]: E0508 00:08:52.639003 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d66hf" podUID="d6b5bca2-fe34-4d13-a1a5-1648d982e2b2" May 8 00:08:52.838717 kubelet[2608]: E0508 00:08:52.838631 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:08:53.755619 kubelet[2608]: E0508 00:08:53.755524 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:08:53.985540 containerd[1505]: time="2025-05-08T00:08:53.985477222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:08:53.986868 containerd[1505]: time="2025-05-08T00:08:53.986816315Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 8 00:08:53.988685 containerd[1505]: time="2025-05-08T00:08:53.988611539Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:08:53.991871 containerd[1505]: time="2025-05-08T00:08:53.991800680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:08:53.992825 containerd[1505]: time="2025-05-08T00:08:53.992764106Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.944968813s" May 8 00:08:53.992825 containerd[1505]: time="2025-05-08T00:08:53.992804411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 8 00:08:53.994068 containerd[1505]: time="2025-05-08T00:08:53.994021215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 8 00:08:54.005033 containerd[1505]: time="2025-05-08T00:08:54.004980758Z" level=info msg="CreateContainer within sandbox \"ea9767db2fc0411c0e165af725e097c211dd7ee006c4c29d968ac2c4df747a24\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 8 00:08:54.025040 containerd[1505]: time="2025-05-08T00:08:54.024893459Z" level=info msg="CreateContainer within sandbox \"ea9767db2fc0411c0e165af725e097c211dd7ee006c4c29d968ac2c4df747a24\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id 
\"c7e12f93242e57c7b5cb70dc22a668c642047c169def2d52bd3c54a6bbfbcdc5\"" May 8 00:08:54.025718 containerd[1505]: time="2025-05-08T00:08:54.025597846Z" level=info msg="StartContainer for \"c7e12f93242e57c7b5cb70dc22a668c642047c169def2d52bd3c54a6bbfbcdc5\"" May 8 00:08:54.060907 systemd[1]: Started cri-containerd-c7e12f93242e57c7b5cb70dc22a668c642047c169def2d52bd3c54a6bbfbcdc5.scope - libcontainer container c7e12f93242e57c7b5cb70dc22a668c642047c169def2d52bd3c54a6bbfbcdc5. May 8 00:08:54.115179 containerd[1505]: time="2025-05-08T00:08:54.115126719Z" level=info msg="StartContainer for \"c7e12f93242e57c7b5cb70dc22a668c642047c169def2d52bd3c54a6bbfbcdc5\" returns successfully" May 8 00:08:54.639456 kubelet[2608]: E0508 00:08:54.639369 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d66hf" podUID="d6b5bca2-fe34-4d13-a1a5-1648d982e2b2" May 8 00:08:54.758210 kubelet[2608]: E0508 00:08:54.758170 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:08:54.769982 kubelet[2608]: I0508 00:08:54.769723 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-959fbcc55-989cf" podStartSLOduration=1.469754142 podStartE2EDuration="5.769701515s" podCreationTimestamp="2025-05-08 00:08:49 +0000 UTC" firstStartedPulling="2025-05-08 00:08:49.693861952 +0000 UTC m=+11.158689569" lastFinishedPulling="2025-05-08 00:08:53.993809325 +0000 UTC m=+15.458636942" observedRunningTime="2025-05-08 00:08:54.769691486 +0000 UTC m=+16.234519103" watchObservedRunningTime="2025-05-08 00:08:54.769701515 +0000 UTC m=+16.234529132" May 8 00:08:55.760243 kubelet[2608]: I0508 00:08:55.760194 2608 prober_manager.go:312] 
"Failed to trigger a manual run" probe="Readiness" May 8 00:08:55.760760 kubelet[2608]: E0508 00:08:55.760726 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:08:56.639483 kubelet[2608]: E0508 00:08:56.639416 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d66hf" podUID="d6b5bca2-fe34-4d13-a1a5-1648d982e2b2" May 8 00:08:58.640794 kubelet[2608]: E0508 00:08:58.640711 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d66hf" podUID="d6b5bca2-fe34-4d13-a1a5-1648d982e2b2" May 8 00:09:00.695680 kubelet[2608]: E0508 00:09:00.695535 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d66hf" podUID="d6b5bca2-fe34-4d13-a1a5-1648d982e2b2" May 8 00:09:00.721167 containerd[1505]: time="2025-05-08T00:09:00.721077762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:09:00.723837 containerd[1505]: time="2025-05-08T00:09:00.723669408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 8 00:09:00.725089 containerd[1505]: time="2025-05-08T00:09:00.725041990Z" level=info msg="ImageCreate event 
name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:09:00.727918 containerd[1505]: time="2025-05-08T00:09:00.727824025Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:09:00.728631 containerd[1505]: time="2025-05-08T00:09:00.728570890Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 6.734506433s" May 8 00:09:00.728631 containerd[1505]: time="2025-05-08T00:09:00.728620443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 8 00:09:00.731746 containerd[1505]: time="2025-05-08T00:09:00.731683205Z" level=info msg="CreateContainer within sandbox \"3262cccf256e1eb13c01accd433d71b15ab912e4324f2c134b5615e8d3cd03d6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 8 00:09:00.752500 containerd[1505]: time="2025-05-08T00:09:00.752427285Z" level=info msg="CreateContainer within sandbox \"3262cccf256e1eb13c01accd433d71b15ab912e4324f2c134b5615e8d3cd03d6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f2f03059eab9edf7b9e796ca1438f059992566daab3e8922da36e1c31944850f\"" May 8 00:09:00.753275 containerd[1505]: time="2025-05-08T00:09:00.753211451Z" level=info msg="StartContainer for \"f2f03059eab9edf7b9e796ca1438f059992566daab3e8922da36e1c31944850f\"" May 8 00:09:00.810915 systemd[1]: Started 
cri-containerd-f2f03059eab9edf7b9e796ca1438f059992566daab3e8922da36e1c31944850f.scope - libcontainer container f2f03059eab9edf7b9e796ca1438f059992566daab3e8922da36e1c31944850f. May 8 00:09:00.853048 containerd[1505]: time="2025-05-08T00:09:00.852979353Z" level=info msg="StartContainer for \"f2f03059eab9edf7b9e796ca1438f059992566daab3e8922da36e1c31944850f\" returns successfully" May 8 00:09:01.787393 kubelet[2608]: E0508 00:09:01.784938 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:09:02.061880 containerd[1505]: time="2025-05-08T00:09:02.061541641Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:09:02.065533 systemd[1]: cri-containerd-f2f03059eab9edf7b9e796ca1438f059992566daab3e8922da36e1c31944850f.scope: Deactivated successfully. May 8 00:09:02.065996 systemd[1]: cri-containerd-f2f03059eab9edf7b9e796ca1438f059992566daab3e8922da36e1c31944850f.scope: Consumed 720ms CPU time, 160.7M memory peak, 8K read from disk, 154M written to disk. May 8 00:09:02.088999 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2f03059eab9edf7b9e796ca1438f059992566daab3e8922da36e1c31944850f-rootfs.mount: Deactivated successfully. May 8 00:09:02.124284 kubelet[2608]: I0508 00:09:02.124223 2608 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 8 00:09:02.273461 systemd[1]: Created slice kubepods-besteffort-podb95f31cd_1dee_4344_bda0_406b9d8df019.slice - libcontainer container kubepods-besteffort-podb95f31cd_1dee_4344_bda0_406b9d8df019.slice. 
May 8 00:09:02.305883 kubelet[2608]: I0508 00:09:02.305827 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b95f31cd-1dee-4344-bda0-406b9d8df019-calico-apiserver-certs\") pod \"calico-apiserver-bc8f4fc5f-xrvcn\" (UID: \"b95f31cd-1dee-4344-bda0-406b9d8df019\") " pod="calico-apiserver/calico-apiserver-bc8f4fc5f-xrvcn"
May 8 00:09:02.305883 kubelet[2608]: I0508 00:09:02.305882 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvb6n\" (UniqueName: \"kubernetes.io/projected/b95f31cd-1dee-4344-bda0-406b9d8df019-kube-api-access-kvb6n\") pod \"calico-apiserver-bc8f4fc5f-xrvcn\" (UID: \"b95f31cd-1dee-4344-bda0-406b9d8df019\") " pod="calico-apiserver/calico-apiserver-bc8f4fc5f-xrvcn"
May 8 00:09:02.406607 kubelet[2608]: I0508 00:09:02.406436 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm9bl\" (UniqueName: \"kubernetes.io/projected/5bca3bbf-f7d4-44b0-9686-15081255aefa-kube-api-access-sm9bl\") pod \"coredns-6f6b679f8f-w95np\" (UID: \"5bca3bbf-f7d4-44b0-9686-15081255aefa\") " pod="kube-system/coredns-6f6b679f8f-w95np"
May 8 00:09:02.406607 kubelet[2608]: I0508 00:09:02.406488 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52x2g\" (UniqueName: \"kubernetes.io/projected/5f9ac193-cc58-42e7-b80a-b5e62d33d96a-kube-api-access-52x2g\") pod \"calico-kube-controllers-57974f499f-vv54k\" (UID: \"5f9ac193-cc58-42e7-b80a-b5e62d33d96a\") " pod="calico-system/calico-kube-controllers-57974f499f-vv54k"
May 8 00:09:02.406607 kubelet[2608]: I0508 00:09:02.406514 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08fdba5b-86cd-42fd-89b0-e9a10ac8f063-config-volume\") pod \"coredns-6f6b679f8f-t4grq\" (UID: \"08fdba5b-86cd-42fd-89b0-e9a10ac8f063\") " pod="kube-system/coredns-6f6b679f8f-t4grq"
May 8 00:09:02.406607 kubelet[2608]: I0508 00:09:02.406534 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt4rr\" (UniqueName: \"kubernetes.io/projected/08fdba5b-86cd-42fd-89b0-e9a10ac8f063-kube-api-access-kt4rr\") pod \"coredns-6f6b679f8f-t4grq\" (UID: \"08fdba5b-86cd-42fd-89b0-e9a10ac8f063\") " pod="kube-system/coredns-6f6b679f8f-t4grq"
May 8 00:09:02.406607 kubelet[2608]: I0508 00:09:02.406553 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5bca3bbf-f7d4-44b0-9686-15081255aefa-config-volume\") pod \"coredns-6f6b679f8f-w95np\" (UID: \"5bca3bbf-f7d4-44b0-9686-15081255aefa\") " pod="kube-system/coredns-6f6b679f8f-w95np"
May 8 00:09:02.406860 kubelet[2608]: I0508 00:09:02.406575 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f9ac193-cc58-42e7-b80a-b5e62d33d96a-tigera-ca-bundle\") pod \"calico-kube-controllers-57974f499f-vv54k\" (UID: \"5f9ac193-cc58-42e7-b80a-b5e62d33d96a\") " pod="calico-system/calico-kube-controllers-57974f499f-vv54k"
May 8 00:09:02.406860 kubelet[2608]: I0508 00:09:02.406672 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d2fb4029-1146-49b1-8115-09528e7b165f-calico-apiserver-certs\") pod \"calico-apiserver-bc8f4fc5f-nrdtq\" (UID: \"d2fb4029-1146-49b1-8115-09528e7b165f\") " pod="calico-apiserver/calico-apiserver-bc8f4fc5f-nrdtq"
May 8 00:09:02.407717 kubelet[2608]: I0508 00:09:02.407697 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c74f6\" (UniqueName: \"kubernetes.io/projected/d2fb4029-1146-49b1-8115-09528e7b165f-kube-api-access-c74f6\") pod \"calico-apiserver-bc8f4fc5f-nrdtq\" (UID: \"d2fb4029-1146-49b1-8115-09528e7b165f\") " pod="calico-apiserver/calico-apiserver-bc8f4fc5f-nrdtq"
May 8 00:09:02.628334 containerd[1505]: time="2025-05-08T00:09:02.628073069Z" level=info msg="shim disconnected" id=f2f03059eab9edf7b9e796ca1438f059992566daab3e8922da36e1c31944850f namespace=k8s.io
May 8 00:09:02.628334 containerd[1505]: time="2025-05-08T00:09:02.628139935Z" level=warning msg="cleaning up after shim disconnected" id=f2f03059eab9edf7b9e796ca1438f059992566daab3e8922da36e1c31944850f namespace=k8s.io
May 8 00:09:02.628334 containerd[1505]: time="2025-05-08T00:09:02.628148902Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:09:02.632027 systemd[1]: Created slice kubepods-besteffort-pod5f9ac193_cc58_42e7_b80a_b5e62d33d96a.slice - libcontainer container kubepods-besteffort-pod5f9ac193_cc58_42e7_b80a_b5e62d33d96a.slice.
May 8 00:09:02.656230 containerd[1505]: time="2025-05-08T00:09:02.654705629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57974f499f-vv54k,Uid:5f9ac193-cc58-42e7-b80a-b5e62d33d96a,Namespace:calico-system,Attempt:0,}"
May 8 00:09:02.658471 systemd[1]: Created slice kubepods-burstable-pod5bca3bbf_f7d4_44b0_9686_15081255aefa.slice - libcontainer container kubepods-burstable-pod5bca3bbf_f7d4_44b0_9686_15081255aefa.slice.
May 8 00:09:02.663863 kubelet[2608]: E0508 00:09:02.663779 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:09:02.665139 containerd[1505]: time="2025-05-08T00:09:02.665102534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-w95np,Uid:5bca3bbf-f7d4-44b0-9686-15081255aefa,Namespace:kube-system,Attempt:0,}"
May 8 00:09:02.668374 systemd[1]: Created slice kubepods-burstable-pod08fdba5b_86cd_42fd_89b0_e9a10ac8f063.slice - libcontainer container kubepods-burstable-pod08fdba5b_86cd_42fd_89b0_e9a10ac8f063.slice.
May 8 00:09:02.673375 kubelet[2608]: E0508 00:09:02.673334 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:09:02.674094 containerd[1505]: time="2025-05-08T00:09:02.674043830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t4grq,Uid:08fdba5b-86cd-42fd-89b0-e9a10ac8f063,Namespace:kube-system,Attempt:0,}"
May 8 00:09:02.681688 systemd[1]: Created slice kubepods-besteffort-podd2fb4029_1146_49b1_8115_09528e7b165f.slice - libcontainer container kubepods-besteffort-podd2fb4029_1146_49b1_8115_09528e7b165f.slice.
May 8 00:09:02.685169 containerd[1505]: time="2025-05-08T00:09:02.685115363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc8f4fc5f-nrdtq,Uid:d2fb4029-1146-49b1-8115-09528e7b165f,Namespace:calico-apiserver,Attempt:0,}"
May 8 00:09:02.687780 systemd[1]: Created slice kubepods-besteffort-podd6b5bca2_fe34_4d13_a1a5_1648d982e2b2.slice - libcontainer container kubepods-besteffort-podd6b5bca2_fe34_4d13_a1a5_1648d982e2b2.slice.
May 8 00:09:02.690544 containerd[1505]: time="2025-05-08T00:09:02.690307329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d66hf,Uid:d6b5bca2-fe34-4d13-a1a5-1648d982e2b2,Namespace:calico-system,Attempt:0,}"
May 8 00:09:02.794868 kubelet[2608]: E0508 00:09:02.794818 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:09:02.800481 containerd[1505]: time="2025-05-08T00:09:02.800443171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\""
May 8 00:09:02.816105 containerd[1505]: time="2025-05-08T00:09:02.816054012Z" level=error msg="Failed to destroy network for sandbox \"498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:02.816853 containerd[1505]: time="2025-05-08T00:09:02.816748718Z" level=error msg="encountered an error cleaning up failed sandbox \"498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:02.816853 containerd[1505]: time="2025-05-08T00:09:02.816810716Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t4grq,Uid:08fdba5b-86cd-42fd-89b0-e9a10ac8f063,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:02.817206 kubelet[2608]: E0508 00:09:02.817161 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:02.817284 kubelet[2608]: E0508 00:09:02.817261 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-t4grq"
May 8 00:09:02.817322 kubelet[2608]: E0508 00:09:02.817290 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-t4grq"
May 8 00:09:02.817660 kubelet[2608]: E0508 00:09:02.817346 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-t4grq_kube-system(08fdba5b-86cd-42fd-89b0-e9a10ac8f063)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-t4grq_kube-system(08fdba5b-86cd-42fd-89b0-e9a10ac8f063)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-t4grq" podUID="08fdba5b-86cd-42fd-89b0-e9a10ac8f063"
May 8 00:09:02.819324 containerd[1505]: time="2025-05-08T00:09:02.819289838Z" level=error msg="Failed to destroy network for sandbox \"b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:02.819703 containerd[1505]: time="2025-05-08T00:09:02.819676245Z" level=error msg="encountered an error cleaning up failed sandbox \"b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:02.827205 containerd[1505]: time="2025-05-08T00:09:02.826943593Z" level=error msg="Failed to destroy network for sandbox \"40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:02.827708 containerd[1505]: time="2025-05-08T00:09:02.827627500Z" level=error msg="encountered an error cleaning up failed sandbox \"40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:02.827708 containerd[1505]: time="2025-05-08T00:09:02.827666663Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57974f499f-vv54k,Uid:5f9ac193-cc58-42e7-b80a-b5e62d33d96a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:02.828045 kubelet[2608]: E0508 00:09:02.828013 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:02.828298 containerd[1505]: time="2025-05-08T00:09:02.828153418Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-w95np,Uid:5bca3bbf-f7d4-44b0-9686-15081255aefa,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:02.828562 kubelet[2608]: E0508 00:09:02.828229 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57974f499f-vv54k"
May 8 00:09:02.828562 kubelet[2608]: E0508 00:09:02.828261 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57974f499f-vv54k"
May 8 00:09:02.829952 kubelet[2608]: E0508 00:09:02.828789 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:02.830071 kubelet[2608]: E0508 00:09:02.830047 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-w95np"
May 8 00:09:02.830129 kubelet[2608]: E0508 00:09:02.830072 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-w95np"
May 8 00:09:02.830175 kubelet[2608]: E0508 00:09:02.830137 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57974f499f-vv54k_calico-system(5f9ac193-cc58-42e7-b80a-b5e62d33d96a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57974f499f-vv54k_calico-system(5f9ac193-cc58-42e7-b80a-b5e62d33d96a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57974f499f-vv54k" podUID="5f9ac193-cc58-42e7-b80a-b5e62d33d96a"
May 8 00:09:02.830267 kubelet[2608]: E0508 00:09:02.830234 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-w95np_kube-system(5bca3bbf-f7d4-44b0-9686-15081255aefa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-w95np_kube-system(5bca3bbf-f7d4-44b0-9686-15081255aefa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-w95np" podUID="5bca3bbf-f7d4-44b0-9686-15081255aefa"
May 8 00:09:02.843502 containerd[1505]: time="2025-05-08T00:09:02.843454086Z" level=error msg="Failed to destroy network for sandbox \"6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:02.844028 containerd[1505]: time="2025-05-08T00:09:02.843998360Z" level=error msg="encountered an error cleaning up failed sandbox \"6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:02.844072 containerd[1505]: time="2025-05-08T00:09:02.844052752Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d66hf,Uid:d6b5bca2-fe34-4d13-a1a5-1648d982e2b2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:02.844331 kubelet[2608]: E0508 00:09:02.844279 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:02.844396 kubelet[2608]: E0508 00:09:02.844355 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d66hf"
May 8 00:09:02.844396 kubelet[2608]: E0508 00:09:02.844377 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d66hf"
May 8 00:09:02.844456 kubelet[2608]: E0508 00:09:02.844430 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d66hf_calico-system(d6b5bca2-fe34-4d13-a1a5-1648d982e2b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d66hf_calico-system(d6b5bca2-fe34-4d13-a1a5-1648d982e2b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d66hf" podUID="d6b5bca2-fe34-4d13-a1a5-1648d982e2b2"
May 8 00:09:02.865569 containerd[1505]: time="2025-05-08T00:09:02.865507534Z" level=error msg="Failed to destroy network for sandbox \"bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:02.865964 containerd[1505]: time="2025-05-08T00:09:02.865931681Z" level=error msg="encountered an error cleaning up failed sandbox \"bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:02.866022 containerd[1505]: time="2025-05-08T00:09:02.865994851Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc8f4fc5f-nrdtq,Uid:d2fb4029-1146-49b1-8115-09528e7b165f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:02.866262 kubelet[2608]: E0508 00:09:02.866210 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:02.866326 kubelet[2608]: E0508 00:09:02.866282 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-nrdtq"
May 8 00:09:02.866326 kubelet[2608]: E0508 00:09:02.866301 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-nrdtq"
May 8 00:09:02.866382 kubelet[2608]: E0508 00:09:02.866347 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bc8f4fc5f-nrdtq_calico-apiserver(d2fb4029-1146-49b1-8115-09528e7b165f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bc8f4fc5f-nrdtq_calico-apiserver(d2fb4029-1146-49b1-8115-09528e7b165f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-nrdtq" podUID="d2fb4029-1146-49b1-8115-09528e7b165f"
May 8 00:09:02.879890 containerd[1505]: time="2025-05-08T00:09:02.879846683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc8f4fc5f-xrvcn,Uid:b95f31cd-1dee-4344-bda0-406b9d8df019,Namespace:calico-apiserver,Attempt:0,}"
May 8 00:09:02.946182 containerd[1505]: time="2025-05-08T00:09:02.946041081Z" level=error msg="Failed to destroy network for sandbox \"9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:02.946687 containerd[1505]: time="2025-05-08T00:09:02.946602877Z" level=error msg="encountered an error cleaning up failed sandbox \"9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:02.946741 containerd[1505]: time="2025-05-08T00:09:02.946678419Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc8f4fc5f-xrvcn,Uid:b95f31cd-1dee-4344-bda0-406b9d8df019,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:02.947055 kubelet[2608]: E0508 00:09:02.946986 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:02.947108 kubelet[2608]: E0508 00:09:02.947074 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-xrvcn"
May 8 00:09:02.947144 kubelet[2608]: E0508 00:09:02.947102 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-xrvcn"
May 8 00:09:02.947201 kubelet[2608]: E0508 00:09:02.947161 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bc8f4fc5f-xrvcn_calico-apiserver(b95f31cd-1dee-4344-bda0-406b9d8df019)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bc8f4fc5f-xrvcn_calico-apiserver(b95f31cd-1dee-4344-bda0-406b9d8df019)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-xrvcn" podUID="b95f31cd-1dee-4344-bda0-406b9d8df019"
May 8 00:09:03.797601 kubelet[2608]: I0508 00:09:03.797534 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195"
May 8 00:09:03.798478 containerd[1505]: time="2025-05-08T00:09:03.798431448Z" level=info msg="StopPodSandbox for \"40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195\""
May 8 00:09:03.799013 containerd[1505]: time="2025-05-08T00:09:03.798719699Z" level=info msg="Ensure that sandbox 40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195 in task-service has been cleanup successfully"
May 8 00:09:03.799013 containerd[1505]: time="2025-05-08T00:09:03.798945624Z" level=info msg="TearDown network for sandbox \"40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195\" successfully"
May 8 00:09:03.799013 containerd[1505]: time="2025-05-08T00:09:03.798962516Z" level=info msg="StopPodSandbox for \"40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195\" returns successfully"
May 8 00:09:03.799150 kubelet[2608]: I0508 00:09:03.798541 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce"
May 8 00:09:03.800066 containerd[1505]: time="2025-05-08T00:09:03.799733686Z" level=info msg="StopPodSandbox for \"9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce\""
May 8 00:09:03.800066 containerd[1505]: time="2025-05-08T00:09:03.799915869Z" level=info msg="Ensure that sandbox 9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce in task-service has been cleanup successfully"
May 8 00:09:03.800816 containerd[1505]: time="2025-05-08T00:09:03.800710474Z" level=info msg="TearDown network for sandbox \"9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce\" successfully"
May 8 00:09:03.800816 containerd[1505]: time="2025-05-08T00:09:03.800732715Z" level=info msg="StopPodSandbox for \"9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce\" returns successfully"
May 8 00:09:03.801986 kubelet[2608]: I0508 00:09:03.801611 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149"
May 8 00:09:03.802462 containerd[1505]: time="2025-05-08T00:09:03.802125064Z" level=info msg="StopPodSandbox for \"bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149\""
May 8 00:09:03.802462 containerd[1505]: time="2025-05-08T00:09:03.802301184Z" level=info msg="Ensure that sandbox bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149 in task-service has been cleanup successfully"
May 8 00:09:03.802827 systemd[1]: run-netns-cni\x2d5332af2f\x2d4129\x2d8e25\x2d34c8\x2d0bae5fbc1d8b.mount: Deactivated successfully.
May 8 00:09:03.803289 systemd[1]: run-netns-cni\x2db543bfe1\x2d3f98\x2d13a7\x2d58af\x2d65eecda537a8.mount: Deactivated successfully.
May 8 00:09:03.804285 containerd[1505]: time="2025-05-08T00:09:03.803539152Z" level=info msg="TearDown network for sandbox \"bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149\" successfully"
May 8 00:09:03.804285 containerd[1505]: time="2025-05-08T00:09:03.803562456Z" level=info msg="StopPodSandbox for \"bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149\" returns successfully"
May 8 00:09:03.804625 containerd[1505]: time="2025-05-08T00:09:03.804573698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc8f4fc5f-xrvcn,Uid:b95f31cd-1dee-4344-bda0-406b9d8df019,Namespace:calico-apiserver,Attempt:1,}"
May 8 00:09:03.805396 containerd[1505]: time="2025-05-08T00:09:03.805366749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc8f4fc5f-nrdtq,Uid:d2fb4029-1146-49b1-8115-09528e7b165f,Namespace:calico-apiserver,Attempt:1,}"
May 8 00:09:03.805719 containerd[1505]: time="2025-05-08T00:09:03.805698343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57974f499f-vv54k,Uid:5f9ac193-cc58-42e7-b80a-b5e62d33d96a,Namespace:calico-system,Attempt:1,}"
May 8 00:09:03.806567 kubelet[2608]: I0508 00:09:03.806514 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046"
May 8 00:09:03.808419 containerd[1505]: time="2025-05-08T00:09:03.808388511Z" level=info msg="StopPodSandbox for \"b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046\""
May 8 00:09:03.808874 containerd[1505]: time="2025-05-08T00:09:03.808567628Z" level=info msg="Ensure that sandbox b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046 in task-service has been cleanup successfully"
May 8 00:09:03.809274 containerd[1505]: time="2025-05-08T00:09:03.809184238Z" level=info msg="TearDown network for sandbox \"b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046\" successfully"
May 8 00:09:03.809274 containerd[1505]: time="2025-05-08T00:09:03.809198264Z" level=info msg="StopPodSandbox for \"b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046\" returns successfully" May 8 00:09:03.809459 kubelet[2608]: I0508 00:09:03.809268 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db" May 8 00:09:03.809459 kubelet[2608]: E0508 00:09:03.809422 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:09:03.809658 systemd[1]: run-netns-cni\x2d90e391b7\x2d54ff\x2d465a\x2dac82\x2dff9b75a03603.mount: Deactivated successfully. May 8 00:09:03.810810 containerd[1505]: time="2025-05-08T00:09:03.810504580Z" level=info msg="StopPodSandbox for \"6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db\"" May 8 00:09:03.811863 containerd[1505]: time="2025-05-08T00:09:03.810980897Z" level=info msg="Ensure that sandbox 6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db in task-service has been cleanup successfully" May 8 00:09:03.811863 containerd[1505]: time="2025-05-08T00:09:03.811308743Z" level=info msg="TearDown network for sandbox \"6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db\" successfully" May 8 00:09:03.811863 containerd[1505]: time="2025-05-08T00:09:03.811344119Z" level=info msg="StopPodSandbox for \"6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db\" returns successfully" May 8 00:09:03.812454 containerd[1505]: time="2025-05-08T00:09:03.812336415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-w95np,Uid:5bca3bbf-f7d4-44b0-9686-15081255aefa,Namespace:kube-system,Attempt:1,}" May 8 00:09:03.813448 containerd[1505]: time="2025-05-08T00:09:03.812679110Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-d66hf,Uid:d6b5bca2-fe34-4d13-a1a5-1648d982e2b2,Namespace:calico-system,Attempt:1,}" May 8 00:09:03.813791 kubelet[2608]: I0508 00:09:03.813761 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a" May 8 00:09:03.814022 systemd[1]: run-netns-cni\x2da9282ccf\x2d6961\x2d8f07\x2dd594\x2d37a9ad33e4fc.mount: Deactivated successfully. May 8 00:09:03.814167 systemd[1]: run-netns-cni\x2d512fb09f\x2d27bc\x2dd616\x2d2c71\x2df865f8220032.mount: Deactivated successfully. May 8 00:09:03.814404 containerd[1505]: time="2025-05-08T00:09:03.814374438Z" level=info msg="StopPodSandbox for \"498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a\"" May 8 00:09:03.814700 containerd[1505]: time="2025-05-08T00:09:03.814665275Z" level=info msg="Ensure that sandbox 498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a in task-service has been cleanup successfully" May 8 00:09:03.815070 containerd[1505]: time="2025-05-08T00:09:03.815043496Z" level=info msg="TearDown network for sandbox \"498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a\" successfully" May 8 00:09:03.815070 containerd[1505]: time="2025-05-08T00:09:03.815067170Z" level=info msg="StopPodSandbox for \"498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a\" returns successfully" May 8 00:09:03.815336 kubelet[2608]: E0508 00:09:03.815312 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:09:03.816008 containerd[1505]: time="2025-05-08T00:09:03.815957164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t4grq,Uid:08fdba5b-86cd-42fd-89b0-e9a10ac8f063,Namespace:kube-system,Attempt:1,}" May 8 00:09:03.818915 systemd[1]: 
run-netns-cni\x2d377f65a0\x2daaff\x2d0779\x2d9a3a\x2d0bd3685406b1.mount: Deactivated successfully. May 8 00:09:03.954394 containerd[1505]: time="2025-05-08T00:09:03.954255571Z" level=error msg="Failed to destroy network for sandbox \"1ba710607968908ebb58526d0d4f5246298a90d1ced3694b2a7eb4c7fdfd43ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:03.955628 containerd[1505]: time="2025-05-08T00:09:03.955134413Z" level=error msg="encountered an error cleaning up failed sandbox \"1ba710607968908ebb58526d0d4f5246298a90d1ced3694b2a7eb4c7fdfd43ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:03.955628 containerd[1505]: time="2025-05-08T00:09:03.955233370Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc8f4fc5f-xrvcn,Uid:b95f31cd-1dee-4344-bda0-406b9d8df019,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"1ba710607968908ebb58526d0d4f5246298a90d1ced3694b2a7eb4c7fdfd43ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:03.955756 kubelet[2608]: E0508 00:09:03.955549 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ba710607968908ebb58526d0d4f5246298a90d1ced3694b2a7eb4c7fdfd43ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:03.955756 kubelet[2608]: E0508 00:09:03.955666 2608 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ba710607968908ebb58526d0d4f5246298a90d1ced3694b2a7eb4c7fdfd43ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-xrvcn" May 8 00:09:03.955756 kubelet[2608]: E0508 00:09:03.955693 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ba710607968908ebb58526d0d4f5246298a90d1ced3694b2a7eb4c7fdfd43ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-xrvcn" May 8 00:09:03.955908 kubelet[2608]: E0508 00:09:03.955757 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bc8f4fc5f-xrvcn_calico-apiserver(b95f31cd-1dee-4344-bda0-406b9d8df019)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bc8f4fc5f-xrvcn_calico-apiserver(b95f31cd-1dee-4344-bda0-406b9d8df019)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ba710607968908ebb58526d0d4f5246298a90d1ced3694b2a7eb4c7fdfd43ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-xrvcn" podUID="b95f31cd-1dee-4344-bda0-406b9d8df019" May 8 00:09:03.971307 containerd[1505]: time="2025-05-08T00:09:03.971258234Z" level=error msg="Failed to destroy network for sandbox \"c458928b1f50c71b1ea5c82db22924cd269793941f36623a768c0b55ce0ab1f8\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:03.972003 containerd[1505]: time="2025-05-08T00:09:03.971968891Z" level=error msg="encountered an error cleaning up failed sandbox \"c458928b1f50c71b1ea5c82db22924cd269793941f36623a768c0b55ce0ab1f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:03.972126 containerd[1505]: time="2025-05-08T00:09:03.972106269Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57974f499f-vv54k,Uid:5f9ac193-cc58-42e7-b80a-b5e62d33d96a,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"c458928b1f50c71b1ea5c82db22924cd269793941f36623a768c0b55ce0ab1f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:03.972720 kubelet[2608]: E0508 00:09:03.972672 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c458928b1f50c71b1ea5c82db22924cd269793941f36623a768c0b55ce0ab1f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:03.972921 kubelet[2608]: E0508 00:09:03.972892 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c458928b1f50c71b1ea5c82db22924cd269793941f36623a768c0b55ce0ab1f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57974f499f-vv54k" May 8 00:09:03.973032 kubelet[2608]: E0508 00:09:03.973009 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c458928b1f50c71b1ea5c82db22924cd269793941f36623a768c0b55ce0ab1f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57974f499f-vv54k" May 8 00:09:03.974245 kubelet[2608]: E0508 00:09:03.973305 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57974f499f-vv54k_calico-system(5f9ac193-cc58-42e7-b80a-b5e62d33d96a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57974f499f-vv54k_calico-system(5f9ac193-cc58-42e7-b80a-b5e62d33d96a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c458928b1f50c71b1ea5c82db22924cd269793941f36623a768c0b55ce0ab1f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57974f499f-vv54k" podUID="5f9ac193-cc58-42e7-b80a-b5e62d33d96a" May 8 00:09:03.979095 containerd[1505]: time="2025-05-08T00:09:03.979040999Z" level=error msg="Failed to destroy network for sandbox \"ed08d024f4f23bc0df2cd95f0f30fd56b3448e321104e173bfd266a44ce27ecd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:03.979653 containerd[1505]: time="2025-05-08T00:09:03.979618846Z" level=error msg="encountered an error 
cleaning up failed sandbox \"ed08d024f4f23bc0df2cd95f0f30fd56b3448e321104e173bfd266a44ce27ecd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:03.979844 containerd[1505]: time="2025-05-08T00:09:03.979785309Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc8f4fc5f-nrdtq,Uid:d2fb4029-1146-49b1-8115-09528e7b165f,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ed08d024f4f23bc0df2cd95f0f30fd56b3448e321104e173bfd266a44ce27ecd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:03.980649 kubelet[2608]: E0508 00:09:03.980167 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed08d024f4f23bc0df2cd95f0f30fd56b3448e321104e173bfd266a44ce27ecd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:03.980649 kubelet[2608]: E0508 00:09:03.980257 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed08d024f4f23bc0df2cd95f0f30fd56b3448e321104e173bfd266a44ce27ecd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-nrdtq" May 8 00:09:03.980649 kubelet[2608]: E0508 00:09:03.980299 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"ed08d024f4f23bc0df2cd95f0f30fd56b3448e321104e173bfd266a44ce27ecd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-nrdtq" May 8 00:09:03.980941 kubelet[2608]: E0508 00:09:03.980357 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bc8f4fc5f-nrdtq_calico-apiserver(d2fb4029-1146-49b1-8115-09528e7b165f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bc8f4fc5f-nrdtq_calico-apiserver(d2fb4029-1146-49b1-8115-09528e7b165f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed08d024f4f23bc0df2cd95f0f30fd56b3448e321104e173bfd266a44ce27ecd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-nrdtq" podUID="d2fb4029-1146-49b1-8115-09528e7b165f" May 8 00:09:03.992785 containerd[1505]: time="2025-05-08T00:09:03.992703021Z" level=error msg="Failed to destroy network for sandbox \"9352f7ab0f10cb03bbf97ff42de8e754493ea08129846587bbe14a230d35b516\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:03.993323 containerd[1505]: time="2025-05-08T00:09:03.993239179Z" level=error msg="encountered an error cleaning up failed sandbox \"9352f7ab0f10cb03bbf97ff42de8e754493ea08129846587bbe14a230d35b516\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 
8 00:09:03.993323 containerd[1505]: time="2025-05-08T00:09:03.993313057Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t4grq,Uid:08fdba5b-86cd-42fd-89b0-e9a10ac8f063,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"9352f7ab0f10cb03bbf97ff42de8e754493ea08129846587bbe14a230d35b516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:03.993658 kubelet[2608]: E0508 00:09:03.993569 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9352f7ab0f10cb03bbf97ff42de8e754493ea08129846587bbe14a230d35b516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:03.993868 kubelet[2608]: E0508 00:09:03.993671 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9352f7ab0f10cb03bbf97ff42de8e754493ea08129846587bbe14a230d35b516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-t4grq" May 8 00:09:03.993868 kubelet[2608]: E0508 00:09:03.993702 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9352f7ab0f10cb03bbf97ff42de8e754493ea08129846587bbe14a230d35b516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-t4grq" May 8 00:09:03.993868 kubelet[2608]: 
E0508 00:09:03.993755 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-t4grq_kube-system(08fdba5b-86cd-42fd-89b0-e9a10ac8f063)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-t4grq_kube-system(08fdba5b-86cd-42fd-89b0-e9a10ac8f063)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9352f7ab0f10cb03bbf97ff42de8e754493ea08129846587bbe14a230d35b516\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-t4grq" podUID="08fdba5b-86cd-42fd-89b0-e9a10ac8f063" May 8 00:09:04.001708 containerd[1505]: time="2025-05-08T00:09:04.001632883Z" level=error msg="Failed to destroy network for sandbox \"db91feb850b4e1f1d819229cd60b7db534413a9d78dc2dac7f000587024c68cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:04.002128 containerd[1505]: time="2025-05-08T00:09:04.002088950Z" level=error msg="encountered an error cleaning up failed sandbox \"db91feb850b4e1f1d819229cd60b7db534413a9d78dc2dac7f000587024c68cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:04.002212 containerd[1505]: time="2025-05-08T00:09:04.002154704Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d66hf,Uid:d6b5bca2-fe34-4d13-a1a5-1648d982e2b2,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"db91feb850b4e1f1d819229cd60b7db534413a9d78dc2dac7f000587024c68cc\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:04.002481 kubelet[2608]: E0508 00:09:04.002438 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db91feb850b4e1f1d819229cd60b7db534413a9d78dc2dac7f000587024c68cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:04.002555 kubelet[2608]: E0508 00:09:04.002509 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db91feb850b4e1f1d819229cd60b7db534413a9d78dc2dac7f000587024c68cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d66hf" May 8 00:09:04.002555 kubelet[2608]: E0508 00:09:04.002528 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db91feb850b4e1f1d819229cd60b7db534413a9d78dc2dac7f000587024c68cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d66hf" May 8 00:09:04.002676 kubelet[2608]: E0508 00:09:04.002573 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d66hf_calico-system(d6b5bca2-fe34-4d13-a1a5-1648d982e2b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d66hf_calico-system(d6b5bca2-fe34-4d13-a1a5-1648d982e2b2)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"db91feb850b4e1f1d819229cd60b7db534413a9d78dc2dac7f000587024c68cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d66hf" podUID="d6b5bca2-fe34-4d13-a1a5-1648d982e2b2" May 8 00:09:04.004488 containerd[1505]: time="2025-05-08T00:09:04.004439700Z" level=error msg="Failed to destroy network for sandbox \"2fb76e3590e1896ad63800ee92649bab322bb9f1d0aaed8344912a62e5f4accf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:04.021783 containerd[1505]: time="2025-05-08T00:09:04.021721132Z" level=error msg="encountered an error cleaning up failed sandbox \"2fb76e3590e1896ad63800ee92649bab322bb9f1d0aaed8344912a62e5f4accf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:04.021928 containerd[1505]: time="2025-05-08T00:09:04.021808777Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-w95np,Uid:5bca3bbf-f7d4-44b0-9686-15081255aefa,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"2fb76e3590e1896ad63800ee92649bab322bb9f1d0aaed8344912a62e5f4accf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:04.022136 kubelet[2608]: E0508 00:09:04.022083 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fb76e3590e1896ad63800ee92649bab322bb9f1d0aaed8344912a62e5f4accf\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:04.022189 kubelet[2608]: E0508 00:09:04.022164 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fb76e3590e1896ad63800ee92649bab322bb9f1d0aaed8344912a62e5f4accf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-w95np" May 8 00:09:04.022221 kubelet[2608]: E0508 00:09:04.022192 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fb76e3590e1896ad63800ee92649bab322bb9f1d0aaed8344912a62e5f4accf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-w95np" May 8 00:09:04.022294 kubelet[2608]: E0508 00:09:04.022260 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-w95np_kube-system(5bca3bbf-f7d4-44b0-9686-15081255aefa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-w95np_kube-system(5bca3bbf-f7d4-44b0-9686-15081255aefa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2fb76e3590e1896ad63800ee92649bab322bb9f1d0aaed8344912a62e5f4accf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-w95np" podUID="5bca3bbf-f7d4-44b0-9686-15081255aefa" May 8 00:09:04.689446 kubelet[2608]: I0508 
00:09:04.689393 2608 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:09:04.689861 kubelet[2608]: E0508 00:09:04.689828 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:09:04.817344 kubelet[2608]: I0508 00:09:04.817311 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fb76e3590e1896ad63800ee92649bab322bb9f1d0aaed8344912a62e5f4accf" May 8 00:09:04.818086 containerd[1505]: time="2025-05-08T00:09:04.818044842Z" level=info msg="StopPodSandbox for \"2fb76e3590e1896ad63800ee92649bab322bb9f1d0aaed8344912a62e5f4accf\"" May 8 00:09:04.818575 containerd[1505]: time="2025-05-08T00:09:04.818308298Z" level=info msg="Ensure that sandbox 2fb76e3590e1896ad63800ee92649bab322bb9f1d0aaed8344912a62e5f4accf in task-service has been cleanup successfully" May 8 00:09:04.818575 containerd[1505]: time="2025-05-08T00:09:04.818527009Z" level=info msg="TearDown network for sandbox \"2fb76e3590e1896ad63800ee92649bab322bb9f1d0aaed8344912a62e5f4accf\" successfully" May 8 00:09:04.818575 containerd[1505]: time="2025-05-08T00:09:04.818538951Z" level=info msg="StopPodSandbox for \"2fb76e3590e1896ad63800ee92649bab322bb9f1d0aaed8344912a62e5f4accf\" returns successfully" May 8 00:09:04.819280 kubelet[2608]: I0508 00:09:04.818966 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c458928b1f50c71b1ea5c82db22924cd269793941f36623a768c0b55ce0ab1f8" May 8 00:09:04.819343 containerd[1505]: time="2025-05-08T00:09:04.819055172Z" level=info msg="StopPodSandbox for \"b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046\"" May 8 00:09:04.819343 containerd[1505]: time="2025-05-08T00:09:04.819133439Z" level=info msg="TearDown network for sandbox \"b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046\" successfully" May 8 00:09:04.819343 
containerd[1505]: time="2025-05-08T00:09:04.819143398Z" level=info msg="StopPodSandbox for \"b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046\" returns successfully" May 8 00:09:04.819343 containerd[1505]: time="2025-05-08T00:09:04.819321172Z" level=info msg="StopPodSandbox for \"c458928b1f50c71b1ea5c82db22924cd269793941f36623a768c0b55ce0ab1f8\"" May 8 00:09:04.819614 containerd[1505]: time="2025-05-08T00:09:04.819569449Z" level=info msg="Ensure that sandbox c458928b1f50c71b1ea5c82db22924cd269793941f36623a768c0b55ce0ab1f8 in task-service has been cleanup successfully" May 8 00:09:04.819873 kubelet[2608]: E0508 00:09:04.819854 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:09:04.820023 containerd[1505]: time="2025-05-08T00:09:04.820001111Z" level=info msg="TearDown network for sandbox \"c458928b1f50c71b1ea5c82db22924cd269793941f36623a768c0b55ce0ab1f8\" successfully" May 8 00:09:04.820023 containerd[1505]: time="2025-05-08T00:09:04.820020848Z" level=info msg="StopPodSandbox for \"c458928b1f50c71b1ea5c82db22924cd269793941f36623a768c0b55ce0ab1f8\" returns successfully" May 8 00:09:04.820669 containerd[1505]: time="2025-05-08T00:09:04.820189375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-w95np,Uid:5bca3bbf-f7d4-44b0-9686-15081255aefa,Namespace:kube-system,Attempt:2,}" May 8 00:09:04.820801 containerd[1505]: time="2025-05-08T00:09:04.820776829Z" level=info msg="StopPodSandbox for \"40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195\"" May 8 00:09:04.820864 systemd[1]: run-netns-cni\x2ddcacf81e\x2d6c85\x2d65f2\x2dd0f8\x2db2705d82e329.mount: Deactivated successfully. 
May 8 00:09:04.821220 containerd[1505]: time="2025-05-08T00:09:04.820873371Z" level=info msg="TearDown network for sandbox \"40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195\" successfully" May 8 00:09:04.821220 containerd[1505]: time="2025-05-08T00:09:04.820889651Z" level=info msg="StopPodSandbox for \"40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195\" returns successfully" May 8 00:09:04.821269 kubelet[2608]: I0508 00:09:04.821153 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db91feb850b4e1f1d819229cd60b7db534413a9d78dc2dac7f000587024c68cc" May 8 00:09:04.821962 containerd[1505]: time="2025-05-08T00:09:04.821522772Z" level=info msg="StopPodSandbox for \"db91feb850b4e1f1d819229cd60b7db534413a9d78dc2dac7f000587024c68cc\"" May 8 00:09:04.821962 containerd[1505]: time="2025-05-08T00:09:04.821705996Z" level=info msg="Ensure that sandbox db91feb850b4e1f1d819229cd60b7db534413a9d78dc2dac7f000587024c68cc in task-service has been cleanup successfully" May 8 00:09:04.822047 containerd[1505]: time="2025-05-08T00:09:04.821964482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57974f499f-vv54k,Uid:5f9ac193-cc58-42e7-b80a-b5e62d33d96a,Namespace:calico-system,Attempt:2,}" May 8 00:09:04.822517 containerd[1505]: time="2025-05-08T00:09:04.822494649Z" level=info msg="TearDown network for sandbox \"db91feb850b4e1f1d819229cd60b7db534413a9d78dc2dac7f000587024c68cc\" successfully" May 8 00:09:04.822517 containerd[1505]: time="2025-05-08T00:09:04.822513364Z" level=info msg="StopPodSandbox for \"db91feb850b4e1f1d819229cd60b7db534413a9d78dc2dac7f000587024c68cc\" returns successfully" May 8 00:09:04.822758 containerd[1505]: time="2025-05-08T00:09:04.822739078Z" level=info msg="StopPodSandbox for \"6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db\"" May 8 00:09:04.823117 containerd[1505]: time="2025-05-08T00:09:04.823068238Z" level=info msg="TearDown network for 
sandbox \"6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db\" successfully" May 8 00:09:04.823117 containerd[1505]: time="2025-05-08T00:09:04.823083627Z" level=info msg="StopPodSandbox for \"6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db\" returns successfully" May 8 00:09:04.823964 kubelet[2608]: I0508 00:09:04.823694 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9352f7ab0f10cb03bbf97ff42de8e754493ea08129846587bbe14a230d35b516" May 8 00:09:04.824306 containerd[1505]: time="2025-05-08T00:09:04.824123291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d66hf,Uid:d6b5bca2-fe34-4d13-a1a5-1648d982e2b2,Namespace:calico-system,Attempt:2,}" May 8 00:09:04.824159 systemd[1]: run-netns-cni\x2da83eddc8\x2dbddf\x2d52ae\x2d2441\x2d461992463b78.mount: Deactivated successfully. May 8 00:09:04.824324 systemd[1]: run-netns-cni\x2de664a796\x2dcb34\x2d17f6\x2d3ce3\x2d935a00366057.mount: Deactivated successfully. 
May 8 00:09:04.824997 containerd[1505]: time="2025-05-08T00:09:04.824691339Z" level=info msg="StopPodSandbox for \"9352f7ab0f10cb03bbf97ff42de8e754493ea08129846587bbe14a230d35b516\""
May 8 00:09:04.824997 containerd[1505]: time="2025-05-08T00:09:04.824856450Z" level=info msg="Ensure that sandbox 9352f7ab0f10cb03bbf97ff42de8e754493ea08129846587bbe14a230d35b516 in task-service has been cleanup successfully"
May 8 00:09:04.845893 kubelet[2608]: I0508 00:09:04.845849 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ba710607968908ebb58526d0d4f5246298a90d1ced3694b2a7eb4c7fdfd43ba"
May 8 00:09:04.847382 containerd[1505]: time="2025-05-08T00:09:04.846448316Z" level=info msg="StopPodSandbox for \"1ba710607968908ebb58526d0d4f5246298a90d1ced3694b2a7eb4c7fdfd43ba\""
May 8 00:09:04.847382 containerd[1505]: time="2025-05-08T00:09:04.847066298Z" level=info msg="Ensure that sandbox 1ba710607968908ebb58526d0d4f5246298a90d1ced3694b2a7eb4c7fdfd43ba in task-service has been cleanup successfully"
May 8 00:09:04.848766 systemd[1]: run-netns-cni\x2d56552663\x2d1d6d\x2dce56\x2d1864\x2d97a66f0cece7.mount: Deactivated successfully.
May 8 00:09:04.851535 containerd[1505]: time="2025-05-08T00:09:04.851428259Z" level=info msg="TearDown network for sandbox \"1ba710607968908ebb58526d0d4f5246298a90d1ced3694b2a7eb4c7fdfd43ba\" successfully"
May 8 00:09:04.851535 containerd[1505]: time="2025-05-08T00:09:04.851464057Z" level=info msg="StopPodSandbox for \"1ba710607968908ebb58526d0d4f5246298a90d1ced3694b2a7eb4c7fdfd43ba\" returns successfully"
May 8 00:09:04.852895 containerd[1505]: time="2025-05-08T00:09:04.852091686Z" level=info msg="StopPodSandbox for \"9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce\""
May 8 00:09:04.852895 containerd[1505]: time="2025-05-08T00:09:04.852250675Z" level=info msg="TearDown network for sandbox \"9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce\" successfully"
May 8 00:09:04.852895 containerd[1505]: time="2025-05-08T00:09:04.852263730Z" level=info msg="StopPodSandbox for \"9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce\" returns successfully"
May 8 00:09:04.852618 systemd[1]: run-netns-cni\x2dc6682203\x2d1604\x2d10d8\x2db78f\x2dc598212e9e99.mount: Deactivated successfully.
May 8 00:09:04.854107 containerd[1505]: time="2025-05-08T00:09:04.853535531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc8f4fc5f-xrvcn,Uid:b95f31cd-1dee-4344-bda0-406b9d8df019,Namespace:calico-apiserver,Attempt:2,}"
May 8 00:09:04.854959 kubelet[2608]: I0508 00:09:04.854930 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed08d024f4f23bc0df2cd95f0f30fd56b3448e321104e173bfd266a44ce27ecd"
May 8 00:09:04.855325 kubelet[2608]: E0508 00:09:04.855294 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:09:04.855437 containerd[1505]: time="2025-05-08T00:09:04.855394576Z" level=info msg="StopPodSandbox for \"ed08d024f4f23bc0df2cd95f0f30fd56b3448e321104e173bfd266a44ce27ecd\""
May 8 00:09:04.855736 containerd[1505]: time="2025-05-08T00:09:04.855677077Z" level=info msg="Ensure that sandbox ed08d024f4f23bc0df2cd95f0f30fd56b3448e321104e173bfd266a44ce27ecd in task-service has been cleanup successfully"
May 8 00:09:04.856059 containerd[1505]: time="2025-05-08T00:09:04.856019061Z" level=info msg="TearDown network for sandbox \"ed08d024f4f23bc0df2cd95f0f30fd56b3448e321104e173bfd266a44ce27ecd\" successfully"
May 8 00:09:04.856059 containerd[1505]: time="2025-05-08T00:09:04.856054437Z" level=info msg="StopPodSandbox for \"ed08d024f4f23bc0df2cd95f0f30fd56b3448e321104e173bfd266a44ce27ecd\" returns successfully"
May 8 00:09:04.856417 containerd[1505]: time="2025-05-08T00:09:04.856370982Z" level=info msg="StopPodSandbox for \"bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149\""
May 8 00:09:04.856571 containerd[1505]: time="2025-05-08T00:09:04.856490898Z" level=info msg="TearDown network for sandbox \"bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149\" successfully"
May 8 00:09:04.856571 containerd[1505]: time="2025-05-08T00:09:04.856507078Z" level=info msg="StopPodSandbox for \"bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149\" returns successfully"
May 8 00:09:04.857474 containerd[1505]: time="2025-05-08T00:09:04.857439491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc8f4fc5f-nrdtq,Uid:d2fb4029-1146-49b1-8115-09528e7b165f,Namespace:calico-apiserver,Attempt:2,}"
May 8 00:09:04.859301 containerd[1505]: time="2025-05-08T00:09:04.859139939Z" level=info msg="TearDown network for sandbox \"9352f7ab0f10cb03bbf97ff42de8e754493ea08129846587bbe14a230d35b516\" successfully"
May 8 00:09:04.859301 containerd[1505]: time="2025-05-08T00:09:04.859174023Z" level=info msg="StopPodSandbox for \"9352f7ab0f10cb03bbf97ff42de8e754493ea08129846587bbe14a230d35b516\" returns successfully"
May 8 00:09:04.861835 containerd[1505]: time="2025-05-08T00:09:04.861803958Z" level=info msg="StopPodSandbox for \"498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a\""
May 8 00:09:04.874023 containerd[1505]: time="2025-05-08T00:09:04.862627636Z" level=info msg="TearDown network for sandbox \"498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a\" successfully"
May 8 00:09:04.874101 containerd[1505]: time="2025-05-08T00:09:04.874026367Z" level=info msg="StopPodSandbox for \"498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a\" returns successfully"
May 8 00:09:04.874405 kubelet[2608]: E0508 00:09:04.874372 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:09:04.874781 containerd[1505]: time="2025-05-08T00:09:04.874742565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t4grq,Uid:08fdba5b-86cd-42fd-89b0-e9a10ac8f063,Namespace:kube-system,Attempt:2,}"
May 8 00:09:05.089720 systemd[1]: run-netns-cni\x2d9537a5bb\x2dbe09\x2d34e0\x2d6692\x2d6ba9e687c14b.mount: Deactivated successfully.
May 8 00:09:05.424660 containerd[1505]: time="2025-05-08T00:09:05.424475050Z" level=error msg="Failed to destroy network for sandbox \"1bbdd8a2f6596e6a701daf7f298fdbe98e948568198f92d4322c89321c4cd110\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:05.425845 containerd[1505]: time="2025-05-08T00:09:05.425595387Z" level=error msg="encountered an error cleaning up failed sandbox \"1bbdd8a2f6596e6a701daf7f298fdbe98e948568198f92d4322c89321c4cd110\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:05.425845 containerd[1505]: time="2025-05-08T00:09:05.425659517Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-w95np,Uid:5bca3bbf-f7d4-44b0-9686-15081255aefa,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"1bbdd8a2f6596e6a701daf7f298fdbe98e948568198f92d4322c89321c4cd110\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:05.431693 kubelet[2608]: E0508 00:09:05.431638 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bbdd8a2f6596e6a701daf7f298fdbe98e948568198f92d4322c89321c4cd110\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:05.432238 kubelet[2608]: E0508 00:09:05.431918 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bbdd8a2f6596e6a701daf7f298fdbe98e948568198f92d4322c89321c4cd110\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-w95np"
May 8 00:09:05.439049 kubelet[2608]: E0508 00:09:05.438995 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bbdd8a2f6596e6a701daf7f298fdbe98e948568198f92d4322c89321c4cd110\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-w95np"
May 8 00:09:05.440816 kubelet[2608]: E0508 00:09:05.440774 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-w95np_kube-system(5bca3bbf-f7d4-44b0-9686-15081255aefa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-w95np_kube-system(5bca3bbf-f7d4-44b0-9686-15081255aefa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1bbdd8a2f6596e6a701daf7f298fdbe98e948568198f92d4322c89321c4cd110\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-w95np" podUID="5bca3bbf-f7d4-44b0-9686-15081255aefa"
May 8 00:09:05.455794 containerd[1505]: time="2025-05-08T00:09:05.455722312Z" level=error msg="Failed to destroy network for sandbox \"1d441a98e9a13d3b85839cd4fea0823d0cafc6e4b9a8fd62f7f0d4f65b1bfb05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:05.457013 containerd[1505]: time="2025-05-08T00:09:05.456972432Z" level=error msg="encountered an error cleaning up failed sandbox \"1d441a98e9a13d3b85839cd4fea0823d0cafc6e4b9a8fd62f7f0d4f65b1bfb05\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:05.457550 containerd[1505]: time="2025-05-08T00:09:05.457525813Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57974f499f-vv54k,Uid:5f9ac193-cc58-42e7-b80a-b5e62d33d96a,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"1d441a98e9a13d3b85839cd4fea0823d0cafc6e4b9a8fd62f7f0d4f65b1bfb05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:05.457980 kubelet[2608]: E0508 00:09:05.457944 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d441a98e9a13d3b85839cd4fea0823d0cafc6e4b9a8fd62f7f0d4f65b1bfb05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:05.458244 kubelet[2608]: E0508 00:09:05.458125 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d441a98e9a13d3b85839cd4fea0823d0cafc6e4b9a8fd62f7f0d4f65b1bfb05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57974f499f-vv54k"
May 8 00:09:05.458244 kubelet[2608]: E0508 00:09:05.458188 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d441a98e9a13d3b85839cd4fea0823d0cafc6e4b9a8fd62f7f0d4f65b1bfb05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57974f499f-vv54k"
May 8 00:09:05.459379 kubelet[2608]: E0508 00:09:05.458849 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57974f499f-vv54k_calico-system(5f9ac193-cc58-42e7-b80a-b5e62d33d96a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57974f499f-vv54k_calico-system(5f9ac193-cc58-42e7-b80a-b5e62d33d96a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d441a98e9a13d3b85839cd4fea0823d0cafc6e4b9a8fd62f7f0d4f65b1bfb05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57974f499f-vv54k" podUID="5f9ac193-cc58-42e7-b80a-b5e62d33d96a"
May 8 00:09:05.461369 containerd[1505]: time="2025-05-08T00:09:05.461343229Z" level=error msg="Failed to destroy network for sandbox \"fdd6da4d20ec4df27f055121036bf7c97190962217a5cdf5866291e449fb9d3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:05.462881 containerd[1505]: time="2025-05-08T00:09:05.462837989Z" level=error msg="encountered an error cleaning up failed sandbox \"fdd6da4d20ec4df27f055121036bf7c97190962217a5cdf5866291e449fb9d3a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:05.463538 containerd[1505]: time="2025-05-08T00:09:05.462979033Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc8f4fc5f-xrvcn,Uid:b95f31cd-1dee-4344-bda0-406b9d8df019,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"fdd6da4d20ec4df27f055121036bf7c97190962217a5cdf5866291e449fb9d3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:05.463923 kubelet[2608]: E0508 00:09:05.463775 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdd6da4d20ec4df27f055121036bf7c97190962217a5cdf5866291e449fb9d3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:05.463923 kubelet[2608]: E0508 00:09:05.463882 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdd6da4d20ec4df27f055121036bf7c97190962217a5cdf5866291e449fb9d3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-xrvcn"
May 8 00:09:05.463923 kubelet[2608]: E0508 00:09:05.463900 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdd6da4d20ec4df27f055121036bf7c97190962217a5cdf5866291e449fb9d3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-xrvcn"
May 8 00:09:05.465430 kubelet[2608]: E0508 00:09:05.464207 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bc8f4fc5f-xrvcn_calico-apiserver(b95f31cd-1dee-4344-bda0-406b9d8df019)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bc8f4fc5f-xrvcn_calico-apiserver(b95f31cd-1dee-4344-bda0-406b9d8df019)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fdd6da4d20ec4df27f055121036bf7c97190962217a5cdf5866291e449fb9d3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-xrvcn" podUID="b95f31cd-1dee-4344-bda0-406b9d8df019"
May 8 00:09:05.470430 containerd[1505]: time="2025-05-08T00:09:05.470338218Z" level=error msg="Failed to destroy network for sandbox \"4580374a8fc5a405a550fbe4016d5b18429feb252bf247314d044436b4fea856\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:05.470692 containerd[1505]: time="2025-05-08T00:09:05.470642781Z" level=error msg="Failed to destroy network for sandbox \"7e387138ee2bc31c857e7ac7e57552e0f1614d16a7edcd55f4407e83fbd4e7a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:05.471085 containerd[1505]: time="2025-05-08T00:09:05.471048273Z" level=error msg="encountered an error cleaning up failed sandbox \"4580374a8fc5a405a550fbe4016d5b18429feb252bf247314d044436b4fea856\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:05.471085 containerd[1505]: time="2025-05-08T00:09:05.471104248Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc8f4fc5f-nrdtq,Uid:d2fb4029-1146-49b1-8115-09528e7b165f,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"4580374a8fc5a405a550fbe4016d5b18429feb252bf247314d044436b4fea856\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:05.471370 containerd[1505]: time="2025-05-08T00:09:05.471059314Z" level=error msg="encountered an error cleaning up failed sandbox \"7e387138ee2bc31c857e7ac7e57552e0f1614d16a7edcd55f4407e83fbd4e7a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:05.471370 containerd[1505]: time="2025-05-08T00:09:05.471183818Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t4grq,Uid:08fdba5b-86cd-42fd-89b0-e9a10ac8f063,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"7e387138ee2bc31c857e7ac7e57552e0f1614d16a7edcd55f4407e83fbd4e7a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:05.471422 kubelet[2608]: E0508 00:09:05.471386 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e387138ee2bc31c857e7ac7e57552e0f1614d16a7edcd55f4407e83fbd4e7a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:05.471469 kubelet[2608]: E0508 00:09:05.471426 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e387138ee2bc31c857e7ac7e57552e0f1614d16a7edcd55f4407e83fbd4e7a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-t4grq"
May 8 00:09:05.471469 kubelet[2608]: E0508 00:09:05.471444 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e387138ee2bc31c857e7ac7e57552e0f1614d16a7edcd55f4407e83fbd4e7a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-t4grq"
May 8 00:09:05.471557 kubelet[2608]: E0508 00:09:05.471477 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4580374a8fc5a405a550fbe4016d5b18429feb252bf247314d044436b4fea856\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:05.471557 kubelet[2608]: E0508 00:09:05.471540 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-t4grq_kube-system(08fdba5b-86cd-42fd-89b0-e9a10ac8f063)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-t4grq_kube-system(08fdba5b-86cd-42fd-89b0-e9a10ac8f063)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e387138ee2bc31c857e7ac7e57552e0f1614d16a7edcd55f4407e83fbd4e7a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-t4grq" podUID="08fdba5b-86cd-42fd-89b0-e9a10ac8f063"
May 8 00:09:05.471651 kubelet[2608]: E0508 00:09:05.471620 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4580374a8fc5a405a550fbe4016d5b18429feb252bf247314d044436b4fea856\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-nrdtq"
May 8 00:09:05.471651 kubelet[2608]: E0508 00:09:05.471641 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4580374a8fc5a405a550fbe4016d5b18429feb252bf247314d044436b4fea856\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-nrdtq"
May 8 00:09:05.471715 kubelet[2608]: E0508 00:09:05.471690 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bc8f4fc5f-nrdtq_calico-apiserver(d2fb4029-1146-49b1-8115-09528e7b165f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bc8f4fc5f-nrdtq_calico-apiserver(d2fb4029-1146-49b1-8115-09528e7b165f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4580374a8fc5a405a550fbe4016d5b18429feb252bf247314d044436b4fea856\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-nrdtq" podUID="d2fb4029-1146-49b1-8115-09528e7b165f"
May 8 00:09:05.473154 containerd[1505]: time="2025-05-08T00:09:05.473124817Z" level=error msg="Failed to destroy network for sandbox \"689f2b83f25ea02990567bd69e744cf6e584c080f266c411aaba192895a5ce6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:05.473479 containerd[1505]: time="2025-05-08T00:09:05.473454887Z" level=error msg="encountered an error cleaning up failed sandbox \"689f2b83f25ea02990567bd69e744cf6e584c080f266c411aaba192895a5ce6c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:05.473526 containerd[1505]: time="2025-05-08T00:09:05.473493010Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d66hf,Uid:d6b5bca2-fe34-4d13-a1a5-1648d982e2b2,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"689f2b83f25ea02990567bd69e744cf6e584c080f266c411aaba192895a5ce6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:05.473677 kubelet[2608]: E0508 00:09:05.473653 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"689f2b83f25ea02990567bd69e744cf6e584c080f266c411aaba192895a5ce6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:09:05.473721 kubelet[2608]: E0508 00:09:05.473687 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"689f2b83f25ea02990567bd69e744cf6e584c080f266c411aaba192895a5ce6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d66hf"
May 8 00:09:05.473721 kubelet[2608]: E0508 00:09:05.473703 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"689f2b83f25ea02990567bd69e744cf6e584c080f266c411aaba192895a5ce6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d66hf"
May 8 00:09:05.473776 kubelet[2608]: E0508 00:09:05.473736 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d66hf_calico-system(d6b5bca2-fe34-4d13-a1a5-1648d982e2b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d66hf_calico-system(d6b5bca2-fe34-4d13-a1a5-1648d982e2b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"689f2b83f25ea02990567bd69e744cf6e584c080f266c411aaba192895a5ce6c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d66hf" podUID="d6b5bca2-fe34-4d13-a1a5-1648d982e2b2"
May 8 00:09:06.092134 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fdd6da4d20ec4df27f055121036bf7c97190962217a5cdf5866291e449fb9d3a-shm.mount: Deactivated successfully.
May 8 00:09:06.092289 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1bbdd8a2f6596e6a701daf7f298fdbe98e948568198f92d4322c89321c4cd110-shm.mount: Deactivated successfully.
May 8 00:09:06.654011 kubelet[2608]: I0508 00:09:06.653968 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4580374a8fc5a405a550fbe4016d5b18429feb252bf247314d044436b4fea856"
May 8 00:09:06.654746 kubelet[2608]: I0508 00:09:06.654680 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bbdd8a2f6596e6a701daf7f298fdbe98e948568198f92d4322c89321c4cd110"
May 8 00:09:06.655788 kubelet[2608]: I0508 00:09:06.655748 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d441a98e9a13d3b85839cd4fea0823d0cafc6e4b9a8fd62f7f0d4f65b1bfb05"
May 8 00:09:06.656488 containerd[1505]: time="2025-05-08T00:09:06.656407477Z" level=info msg="StopPodSandbox for \"1bbdd8a2f6596e6a701daf7f298fdbe98e948568198f92d4322c89321c4cd110\""
May 8 00:09:06.659618 containerd[1505]: time="2025-05-08T00:09:06.657214854Z" level=info msg="Ensure that sandbox 1bbdd8a2f6596e6a701daf7f298fdbe98e948568198f92d4322c89321c4cd110 in task-service has been cleanup successfully"
May 8 00:09:06.660180 systemd[1]: run-netns-cni\x2de6b6f7bc\x2db339\x2dd1b7\x2de3dc\x2d98826af038ac.mount: Deactivated successfully.
May 8 00:09:06.660304 containerd[1505]: time="2025-05-08T00:09:06.660282601Z" level=info msg="TearDown network for sandbox \"1bbdd8a2f6596e6a701daf7f298fdbe98e948568198f92d4322c89321c4cd110\" successfully"
May 8 00:09:06.660334 containerd[1505]: time="2025-05-08T00:09:06.660308569Z" level=info msg="StopPodSandbox for \"1bbdd8a2f6596e6a701daf7f298fdbe98e948568198f92d4322c89321c4cd110\" returns successfully"
May 8 00:09:06.660971 containerd[1505]: time="2025-05-08T00:09:06.660549472Z" level=info msg="StopPodSandbox for \"1d441a98e9a13d3b85839cd4fea0823d0cafc6e4b9a8fd62f7f0d4f65b1bfb05\""
May 8 00:09:06.660971 containerd[1505]: time="2025-05-08T00:09:06.660818418Z" level=info msg="Ensure that sandbox 1d441a98e9a13d3b85839cd4fea0823d0cafc6e4b9a8fd62f7f0d4f65b1bfb05 in task-service has been cleanup successfully"
May 8 00:09:06.660971 containerd[1505]: time="2025-05-08T00:09:06.660941519Z" level=info msg="StopPodSandbox for \"2fb76e3590e1896ad63800ee92649bab322bb9f1d0aaed8344912a62e5f4accf\""
May 8 00:09:06.661513 containerd[1505]: time="2025-05-08T00:09:06.661013224Z" level=info msg="TearDown network for sandbox \"2fb76e3590e1896ad63800ee92649bab322bb9f1d0aaed8344912a62e5f4accf\" successfully"
May 8 00:09:06.661513 containerd[1505]: time="2025-05-08T00:09:06.661024205Z" level=info msg="StopPodSandbox for \"2fb76e3590e1896ad63800ee92649bab322bb9f1d0aaed8344912a62e5f4accf\" returns successfully"
May 8 00:09:06.661513 containerd[1505]: time="2025-05-08T00:09:06.661071614Z" level=info msg="StopPodSandbox for \"4580374a8fc5a405a550fbe4016d5b18429feb252bf247314d044436b4fea856\""
May 8 00:09:06.661513 containerd[1505]: time="2025-05-08T00:09:06.661175889Z" level=info msg="TearDown network for sandbox \"1d441a98e9a13d3b85839cd4fea0823d0cafc6e4b9a8fd62f7f0d4f65b1bfb05\" successfully"
May 8 00:09:06.661513 containerd[1505]: time="2025-05-08T00:09:06.661194294Z" level=info msg="StopPodSandbox for \"1d441a98e9a13d3b85839cd4fea0823d0cafc6e4b9a8fd62f7f0d4f65b1bfb05\" returns successfully"
May 8 00:09:06.661513 containerd[1505]: time="2025-05-08T00:09:06.661226826Z" level=info msg="Ensure that sandbox 4580374a8fc5a405a550fbe4016d5b18429feb252bf247314d044436b4fea856 in task-service has been cleanup successfully"
May 8 00:09:06.662067 containerd[1505]: time="2025-05-08T00:09:06.661691348Z" level=info msg="TearDown network for sandbox \"4580374a8fc5a405a550fbe4016d5b18429feb252bf247314d044436b4fea856\" successfully"
May 8 00:09:06.662067 containerd[1505]: time="2025-05-08T00:09:06.661717497Z" level=info msg="StopPodSandbox for \"4580374a8fc5a405a550fbe4016d5b18429feb252bf247314d044436b4fea856\" returns successfully"
May 8 00:09:06.662067 containerd[1505]: time="2025-05-08T00:09:06.661748797Z" level=info msg="StopPodSandbox for \"c458928b1f50c71b1ea5c82db22924cd269793941f36623a768c0b55ce0ab1f8\""
May 8 00:09:06.662067 containerd[1505]: time="2025-05-08T00:09:06.662023072Z" level=info msg="StopPodSandbox for \"ed08d024f4f23bc0df2cd95f0f30fd56b3448e321104e173bfd266a44ce27ecd\""
May 8 00:09:06.662212 containerd[1505]: time="2025-05-08T00:09:06.662096480Z" level=info msg="TearDown network for sandbox \"ed08d024f4f23bc0df2cd95f0f30fd56b3448e321104e173bfd266a44ce27ecd\" successfully"
May 8 00:09:06.662212 containerd[1505]: time="2025-05-08T00:09:06.662105046Z" level=info msg="StopPodSandbox for \"ed08d024f4f23bc0df2cd95f0f30fd56b3448e321104e173bfd266a44ce27ecd\" returns successfully"
May 8 00:09:06.663339 kubelet[2608]: I0508 00:09:06.662326 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="689f2b83f25ea02990567bd69e744cf6e584c080f266c411aaba192895a5ce6c"
May 8 00:09:06.663435 containerd[1505]: time="2025-05-08T00:09:06.662536407Z" level=info msg="StopPodSandbox for \"bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149\""
May 8 00:09:06.663435 containerd[1505]: time="2025-05-08T00:09:06.662813267Z" level=info msg="StopPodSandbox for \"689f2b83f25ea02990567bd69e744cf6e584c080f266c411aaba192895a5ce6c\""
May 8 00:09:06.663435 containerd[1505]: time="2025-05-08T00:09:06.662958850Z" level=info msg="Ensure that sandbox 689f2b83f25ea02990567bd69e744cf6e584c080f266c411aaba192895a5ce6c in task-service has been cleanup successfully"
May 8 00:09:06.663435 containerd[1505]: time="2025-05-08T00:09:06.663279624Z" level=info msg="TearDown network for sandbox \"689f2b83f25ea02990567bd69e744cf6e584c080f266c411aaba192895a5ce6c\" successfully"
May 8 00:09:06.663435 containerd[1505]: time="2025-05-08T00:09:06.663295113Z" level=info msg="StopPodSandbox for \"689f2b83f25ea02990567bd69e744cf6e584c080f266c411aaba192895a5ce6c\" returns successfully"
May 8 00:09:06.666319 systemd[1]: run-netns-cni\x2d7c541660\x2d7aff\x2d35e8\x2dca79\x2d8500202dcc4d.mount: Deactivated successfully.
May 8 00:09:06.666650 systemd[1]: run-netns-cni\x2d76b53594\x2d62fe\x2d8322\x2ddefe\x2d5acfa8d7406c.mount: Deactivated successfully.
May 8 00:09:06.666832 systemd[1]: run-netns-cni\x2d147d69a0\x2db79e\x2d0632\x2d76da\x2d7fd3680b391e.mount: Deactivated successfully.
May 8 00:09:06.667837 containerd[1505]: time="2025-05-08T00:09:06.667688290Z" level=info msg="TearDown network for sandbox \"c458928b1f50c71b1ea5c82db22924cd269793941f36623a768c0b55ce0ab1f8\" successfully" May 8 00:09:06.667837 containerd[1505]: time="2025-05-08T00:09:06.667753783Z" level=info msg="StopPodSandbox for \"c458928b1f50c71b1ea5c82db22924cd269793941f36623a768c0b55ce0ab1f8\" returns successfully" May 8 00:09:06.668412 containerd[1505]: time="2025-05-08T00:09:06.668351908Z" level=info msg="StopPodSandbox for \"db91feb850b4e1f1d819229cd60b7db534413a9d78dc2dac7f000587024c68cc\"" May 8 00:09:06.669104 containerd[1505]: time="2025-05-08T00:09:06.668519543Z" level=info msg="TearDown network for sandbox \"db91feb850b4e1f1d819229cd60b7db534413a9d78dc2dac7f000587024c68cc\" successfully" May 8 00:09:06.669104 containerd[1505]: time="2025-05-08T00:09:06.668557203Z" level=info msg="StopPodSandbox for \"db91feb850b4e1f1d819229cd60b7db534413a9d78dc2dac7f000587024c68cc\" returns successfully" May 8 00:09:06.669104 containerd[1505]: time="2025-05-08T00:09:06.668759243Z" level=info msg="StopPodSandbox for \"40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195\"" May 8 00:09:06.669104 containerd[1505]: time="2025-05-08T00:09:06.668839374Z" level=info msg="TearDown network for sandbox \"40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195\" successfully" May 8 00:09:06.669104 containerd[1505]: time="2025-05-08T00:09:06.668848291Z" level=info msg="StopPodSandbox for \"40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195\" returns successfully" May 8 00:09:06.671617 containerd[1505]: time="2025-05-08T00:09:06.669828323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57974f499f-vv54k,Uid:5f9ac193-cc58-42e7-b80a-b5e62d33d96a,Namespace:calico-system,Attempt:3,}" May 8 00:09:06.671617 containerd[1505]: time="2025-05-08T00:09:06.670093020Z" level=info msg="StopPodSandbox for 
\"6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db\"" May 8 00:09:06.671617 containerd[1505]: time="2025-05-08T00:09:06.670349242Z" level=info msg="TearDown network for sandbox \"6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db\" successfully" May 8 00:09:06.671617 containerd[1505]: time="2025-05-08T00:09:06.670363258Z" level=info msg="StopPodSandbox for \"6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db\" returns successfully" May 8 00:09:06.673007 containerd[1505]: time="2025-05-08T00:09:06.672979376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d66hf,Uid:d6b5bca2-fe34-4d13-a1a5-1648d982e2b2,Namespace:calico-system,Attempt:3,}" May 8 00:09:06.673931 kubelet[2608]: I0508 00:09:06.673622 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e387138ee2bc31c857e7ac7e57552e0f1614d16a7edcd55f4407e83fbd4e7a1" May 8 00:09:06.674509 containerd[1505]: time="2025-05-08T00:09:06.674480217Z" level=info msg="StopPodSandbox for \"7e387138ee2bc31c857e7ac7e57552e0f1614d16a7edcd55f4407e83fbd4e7a1\"" May 8 00:09:06.674841 containerd[1505]: time="2025-05-08T00:09:06.674815627Z" level=info msg="Ensure that sandbox 7e387138ee2bc31c857e7ac7e57552e0f1614d16a7edcd55f4407e83fbd4e7a1 in task-service has been cleanup successfully" May 8 00:09:06.675107 containerd[1505]: time="2025-05-08T00:09:06.675089953Z" level=info msg="TearDown network for sandbox \"7e387138ee2bc31c857e7ac7e57552e0f1614d16a7edcd55f4407e83fbd4e7a1\" successfully" May 8 00:09:06.675191 containerd[1505]: time="2025-05-08T00:09:06.675176706Z" level=info msg="StopPodSandbox for \"7e387138ee2bc31c857e7ac7e57552e0f1614d16a7edcd55f4407e83fbd4e7a1\" returns successfully" May 8 00:09:06.676679 containerd[1505]: time="2025-05-08T00:09:06.676334222Z" level=info msg="StopPodSandbox for \"b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046\"" May 8 00:09:06.676679 containerd[1505]: 
time="2025-05-08T00:09:06.676619237Z" level=info msg="TearDown network for sandbox \"b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046\" successfully" May 8 00:09:06.677066 containerd[1505]: time="2025-05-08T00:09:06.676637702Z" level=info msg="StopPodSandbox for \"b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046\" returns successfully" May 8 00:09:06.677756 kubelet[2608]: E0508 00:09:06.677729 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:09:06.681912 containerd[1505]: time="2025-05-08T00:09:06.678148191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-w95np,Uid:5bca3bbf-f7d4-44b0-9686-15081255aefa,Namespace:kube-system,Attempt:3,}" May 8 00:09:06.681912 containerd[1505]: time="2025-05-08T00:09:06.678416345Z" level=info msg="StopPodSandbox for \"9352f7ab0f10cb03bbf97ff42de8e754493ea08129846587bbe14a230d35b516\"" May 8 00:09:06.681912 containerd[1505]: time="2025-05-08T00:09:06.678520942Z" level=info msg="TearDown network for sandbox \"9352f7ab0f10cb03bbf97ff42de8e754493ea08129846587bbe14a230d35b516\" successfully" May 8 00:09:06.681912 containerd[1505]: time="2025-05-08T00:09:06.678533536Z" level=info msg="StopPodSandbox for \"9352f7ab0f10cb03bbf97ff42de8e754493ea08129846587bbe14a230d35b516\" returns successfully" May 8 00:09:06.681949 systemd[1]: run-netns-cni\x2da268ed26\x2d5291\x2df89c\x2d8fd0\x2d479ec5d278b2.mount: Deactivated successfully. 
May 8 00:09:06.698741 containerd[1505]: time="2025-05-08T00:09:06.698689141Z" level=info msg="StopPodSandbox for \"498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a\"" May 8 00:09:06.699028 containerd[1505]: time="2025-05-08T00:09:06.699005085Z" level=info msg="TearDown network for sandbox \"498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a\" successfully" May 8 00:09:06.699131 containerd[1505]: time="2025-05-08T00:09:06.699110704Z" level=info msg="StopPodSandbox for \"498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a\" returns successfully" May 8 00:09:06.701819 kubelet[2608]: E0508 00:09:06.701547 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:09:06.702286 containerd[1505]: time="2025-05-08T00:09:06.702256226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t4grq,Uid:08fdba5b-86cd-42fd-89b0-e9a10ac8f063,Namespace:kube-system,Attempt:3,}" May 8 00:09:06.707100 containerd[1505]: time="2025-05-08T00:09:06.707053804Z" level=info msg="TearDown network for sandbox \"bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149\" successfully" May 8 00:09:06.707100 containerd[1505]: time="2025-05-08T00:09:06.707088008Z" level=info msg="StopPodSandbox for \"bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149\" returns successfully" May 8 00:09:06.714067 containerd[1505]: time="2025-05-08T00:09:06.713449024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc8f4fc5f-nrdtq,Uid:d2fb4029-1146-49b1-8115-09528e7b165f,Namespace:calico-apiserver,Attempt:3,}" May 8 00:09:06.742616 kubelet[2608]: I0508 00:09:06.738843 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdd6da4d20ec4df27f055121036bf7c97190962217a5cdf5866291e449fb9d3a" May 8 00:09:06.742934 containerd[1505]: 
time="2025-05-08T00:09:06.742898950Z" level=info msg="StopPodSandbox for \"fdd6da4d20ec4df27f055121036bf7c97190962217a5cdf5866291e449fb9d3a\"" May 8 00:09:06.743805 containerd[1505]: time="2025-05-08T00:09:06.743784163Z" level=info msg="Ensure that sandbox fdd6da4d20ec4df27f055121036bf7c97190962217a5cdf5866291e449fb9d3a in task-service has been cleanup successfully" May 8 00:09:06.748354 containerd[1505]: time="2025-05-08T00:09:06.748004035Z" level=info msg="TearDown network for sandbox \"fdd6da4d20ec4df27f055121036bf7c97190962217a5cdf5866291e449fb9d3a\" successfully" May 8 00:09:06.748354 containerd[1505]: time="2025-05-08T00:09:06.748043810Z" level=info msg="StopPodSandbox for \"fdd6da4d20ec4df27f055121036bf7c97190962217a5cdf5866291e449fb9d3a\" returns successfully" May 8 00:09:06.753810 containerd[1505]: time="2025-05-08T00:09:06.753775273Z" level=info msg="StopPodSandbox for \"1ba710607968908ebb58526d0d4f5246298a90d1ced3694b2a7eb4c7fdfd43ba\"" May 8 00:09:06.755936 containerd[1505]: time="2025-05-08T00:09:06.755909124Z" level=info msg="TearDown network for sandbox \"1ba710607968908ebb58526d0d4f5246298a90d1ced3694b2a7eb4c7fdfd43ba\" successfully" May 8 00:09:06.757923 containerd[1505]: time="2025-05-08T00:09:06.757895598Z" level=info msg="StopPodSandbox for \"1ba710607968908ebb58526d0d4f5246298a90d1ced3694b2a7eb4c7fdfd43ba\" returns successfully" May 8 00:09:06.758816 containerd[1505]: time="2025-05-08T00:09:06.758791451Z" level=info msg="StopPodSandbox for \"9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce\"" May 8 00:09:06.759237 containerd[1505]: time="2025-05-08T00:09:06.759213365Z" level=info msg="TearDown network for sandbox \"9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce\" successfully" May 8 00:09:06.761144 containerd[1505]: time="2025-05-08T00:09:06.760107336Z" level=info msg="StopPodSandbox for \"9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce\" returns successfully" May 8 00:09:06.761978 
containerd[1505]: time="2025-05-08T00:09:06.761949067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc8f4fc5f-xrvcn,Uid:b95f31cd-1dee-4344-bda0-406b9d8df019,Namespace:calico-apiserver,Attempt:3,}" May 8 00:09:06.891072 containerd[1505]: time="2025-05-08T00:09:06.890994606Z" level=error msg="Failed to destroy network for sandbox \"8c63da5736498491a842025315c3885e9be8504c5605d90888bd11017e30aff3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:06.891888 containerd[1505]: time="2025-05-08T00:09:06.891750206Z" level=error msg="encountered an error cleaning up failed sandbox \"8c63da5736498491a842025315c3885e9be8504c5605d90888bd11017e30aff3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:06.892067 containerd[1505]: time="2025-05-08T00:09:06.891993704Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57974f499f-vv54k,Uid:5f9ac193-cc58-42e7-b80a-b5e62d33d96a,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"8c63da5736498491a842025315c3885e9be8504c5605d90888bd11017e30aff3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:06.893654 kubelet[2608]: E0508 00:09:06.892339 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c63da5736498491a842025315c3885e9be8504c5605d90888bd11017e30aff3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:06.893654 kubelet[2608]: E0508 00:09:06.892412 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c63da5736498491a842025315c3885e9be8504c5605d90888bd11017e30aff3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57974f499f-vv54k" May 8 00:09:06.893654 kubelet[2608]: E0508 00:09:06.892442 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c63da5736498491a842025315c3885e9be8504c5605d90888bd11017e30aff3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57974f499f-vv54k" May 8 00:09:06.893899 kubelet[2608]: E0508 00:09:06.892492 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57974f499f-vv54k_calico-system(5f9ac193-cc58-42e7-b80a-b5e62d33d96a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57974f499f-vv54k_calico-system(5f9ac193-cc58-42e7-b80a-b5e62d33d96a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c63da5736498491a842025315c3885e9be8504c5605d90888bd11017e30aff3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57974f499f-vv54k" podUID="5f9ac193-cc58-42e7-b80a-b5e62d33d96a" May 8 00:09:06.917323 containerd[1505]: 
time="2025-05-08T00:09:06.916735940Z" level=error msg="Failed to destroy network for sandbox \"2de835992095b9fbf6441a761e32c68364d0ec0a012693336f911edf7c5bb032\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:06.921323 containerd[1505]: time="2025-05-08T00:09:06.921191305Z" level=error msg="encountered an error cleaning up failed sandbox \"2de835992095b9fbf6441a761e32c68364d0ec0a012693336f911edf7c5bb032\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:06.921323 containerd[1505]: time="2025-05-08T00:09:06.921275232Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t4grq,Uid:08fdba5b-86cd-42fd-89b0-e9a10ac8f063,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"2de835992095b9fbf6441a761e32c68364d0ec0a012693336f911edf7c5bb032\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:06.921738 kubelet[2608]: E0508 00:09:06.921691 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2de835992095b9fbf6441a761e32c68364d0ec0a012693336f911edf7c5bb032\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:06.923783 kubelet[2608]: E0508 00:09:06.922749 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2de835992095b9fbf6441a761e32c68364d0ec0a012693336f911edf7c5bb032\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-t4grq" May 8 00:09:06.923783 kubelet[2608]: E0508 00:09:06.922781 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2de835992095b9fbf6441a761e32c68364d0ec0a012693336f911edf7c5bb032\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-t4grq" May 8 00:09:06.923783 kubelet[2608]: E0508 00:09:06.922828 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-t4grq_kube-system(08fdba5b-86cd-42fd-89b0-e9a10ac8f063)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-t4grq_kube-system(08fdba5b-86cd-42fd-89b0-e9a10ac8f063)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2de835992095b9fbf6441a761e32c68364d0ec0a012693336f911edf7c5bb032\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-t4grq" podUID="08fdba5b-86cd-42fd-89b0-e9a10ac8f063" May 8 00:09:06.925306 containerd[1505]: time="2025-05-08T00:09:06.925224155Z" level=error msg="Failed to destroy network for sandbox \"d772c7a0c77343cfd6b7b8001457c3fc80a71719d98f08ac8e8f09fa337c9452\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:06.925866 
containerd[1505]: time="2025-05-08T00:09:06.925820626Z" level=error msg="encountered an error cleaning up failed sandbox \"d772c7a0c77343cfd6b7b8001457c3fc80a71719d98f08ac8e8f09fa337c9452\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:06.926032 containerd[1505]: time="2025-05-08T00:09:06.925893563Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d66hf,Uid:d6b5bca2-fe34-4d13-a1a5-1648d982e2b2,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d772c7a0c77343cfd6b7b8001457c3fc80a71719d98f08ac8e8f09fa337c9452\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:06.926190 kubelet[2608]: E0508 00:09:06.926120 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d772c7a0c77343cfd6b7b8001457c3fc80a71719d98f08ac8e8f09fa337c9452\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:06.926247 kubelet[2608]: E0508 00:09:06.926188 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d772c7a0c77343cfd6b7b8001457c3fc80a71719d98f08ac8e8f09fa337c9452\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d66hf" May 8 00:09:06.926247 kubelet[2608]: E0508 00:09:06.926207 2608 kuberuntime_manager.go:1168] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d772c7a0c77343cfd6b7b8001457c3fc80a71719d98f08ac8e8f09fa337c9452\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d66hf" May 8 00:09:06.926323 kubelet[2608]: E0508 00:09:06.926244 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d66hf_calico-system(d6b5bca2-fe34-4d13-a1a5-1648d982e2b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d66hf_calico-system(d6b5bca2-fe34-4d13-a1a5-1648d982e2b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d772c7a0c77343cfd6b7b8001457c3fc80a71719d98f08ac8e8f09fa337c9452\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d66hf" podUID="d6b5bca2-fe34-4d13-a1a5-1648d982e2b2" May 8 00:09:06.928598 containerd[1505]: time="2025-05-08T00:09:06.928553233Z" level=error msg="Failed to destroy network for sandbox \"13acd4157037be1ff3f8af3f021c36bc8a063fe31fafa01f4e0fc636376bc660\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:06.929051 containerd[1505]: time="2025-05-08T00:09:06.929004640Z" level=error msg="encountered an error cleaning up failed sandbox \"13acd4157037be1ff3f8af3f021c36bc8a063fe31fafa01f4e0fc636376bc660\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" May 8 00:09:06.929051 containerd[1505]: time="2025-05-08T00:09:06.929049295Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-w95np,Uid:5bca3bbf-f7d4-44b0-9686-15081255aefa,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"13acd4157037be1ff3f8af3f021c36bc8a063fe31fafa01f4e0fc636376bc660\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:06.929240 kubelet[2608]: E0508 00:09:06.929203 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13acd4157037be1ff3f8af3f021c36bc8a063fe31fafa01f4e0fc636376bc660\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:06.929296 kubelet[2608]: E0508 00:09:06.929273 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13acd4157037be1ff3f8af3f021c36bc8a063fe31fafa01f4e0fc636376bc660\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-w95np" May 8 00:09:06.929321 kubelet[2608]: E0508 00:09:06.929297 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13acd4157037be1ff3f8af3f021c36bc8a063fe31fafa01f4e0fc636376bc660\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-w95np" May 8 
00:09:06.929392 kubelet[2608]: E0508 00:09:06.929361 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-w95np_kube-system(5bca3bbf-f7d4-44b0-9686-15081255aefa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-w95np_kube-system(5bca3bbf-f7d4-44b0-9686-15081255aefa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13acd4157037be1ff3f8af3f021c36bc8a063fe31fafa01f4e0fc636376bc660\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-w95np" podUID="5bca3bbf-f7d4-44b0-9686-15081255aefa" May 8 00:09:06.947123 containerd[1505]: time="2025-05-08T00:09:06.946989957Z" level=error msg="Failed to destroy network for sandbox \"e0740c57053fe83f05b37b28257f0d6e54fe96268696d30bb86bfa3886ae20cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:06.947548 containerd[1505]: time="2025-05-08T00:09:06.947515013Z" level=error msg="encountered an error cleaning up failed sandbox \"e0740c57053fe83f05b37b28257f0d6e54fe96268696d30bb86bfa3886ae20cb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:06.947638 containerd[1505]: time="2025-05-08T00:09:06.947610383Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc8f4fc5f-xrvcn,Uid:b95f31cd-1dee-4344-bda0-406b9d8df019,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox 
\"e0740c57053fe83f05b37b28257f0d6e54fe96268696d30bb86bfa3886ae20cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:06.948282 kubelet[2608]: E0508 00:09:06.948226 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0740c57053fe83f05b37b28257f0d6e54fe96268696d30bb86bfa3886ae20cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:06.948675 kubelet[2608]: E0508 00:09:06.948655 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0740c57053fe83f05b37b28257f0d6e54fe96268696d30bb86bfa3886ae20cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-xrvcn" May 8 00:09:06.948873 kubelet[2608]: E0508 00:09:06.948836 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0740c57053fe83f05b37b28257f0d6e54fe96268696d30bb86bfa3886ae20cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-xrvcn" May 8 00:09:06.949059 kubelet[2608]: E0508 00:09:06.948925 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bc8f4fc5f-xrvcn_calico-apiserver(b95f31cd-1dee-4344-bda0-406b9d8df019)\" with CreatePodSandboxError: \"Failed to create sandbox for 
pod \\\"calico-apiserver-bc8f4fc5f-xrvcn_calico-apiserver(b95f31cd-1dee-4344-bda0-406b9d8df019)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0740c57053fe83f05b37b28257f0d6e54fe96268696d30bb86bfa3886ae20cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-xrvcn" podUID="b95f31cd-1dee-4344-bda0-406b9d8df019" May 8 00:09:06.951295 containerd[1505]: time="2025-05-08T00:09:06.951253300Z" level=error msg="Failed to destroy network for sandbox \"e1ab5c5d66d3412b753305c048a3260a102a746c0fe5327e275e158c92b9e81a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:06.951815 containerd[1505]: time="2025-05-08T00:09:06.951705921Z" level=error msg="encountered an error cleaning up failed sandbox \"e1ab5c5d66d3412b753305c048a3260a102a746c0fe5327e275e158c92b9e81a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:06.951815 containerd[1505]: time="2025-05-08T00:09:06.951768118Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc8f4fc5f-nrdtq,Uid:d2fb4029-1146-49b1-8115-09528e7b165f,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"e1ab5c5d66d3412b753305c048a3260a102a746c0fe5327e275e158c92b9e81a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:06.952084 kubelet[2608]: E0508 00:09:06.952034 2608 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1ab5c5d66d3412b753305c048a3260a102a746c0fe5327e275e158c92b9e81a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:06.952140 kubelet[2608]: E0508 00:09:06.952108 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1ab5c5d66d3412b753305c048a3260a102a746c0fe5327e275e158c92b9e81a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-nrdtq" May 8 00:09:06.952191 kubelet[2608]: E0508 00:09:06.952163 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1ab5c5d66d3412b753305c048a3260a102a746c0fe5327e275e158c92b9e81a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-nrdtq" May 8 00:09:06.952233 kubelet[2608]: E0508 00:09:06.952208 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bc8f4fc5f-nrdtq_calico-apiserver(d2fb4029-1146-49b1-8115-09528e7b165f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bc8f4fc5f-nrdtq_calico-apiserver(d2fb4029-1146-49b1-8115-09528e7b165f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1ab5c5d66d3412b753305c048a3260a102a746c0fe5327e275e158c92b9e81a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-nrdtq" podUID="d2fb4029-1146-49b1-8115-09528e7b165f" May 8 00:09:07.095116 systemd[1]: run-netns-cni\x2d22d944ab\x2d1564\x2df6b7\x2d83e3\x2d048e474ff25f.mount: Deactivated successfully. May 8 00:09:07.743404 kubelet[2608]: I0508 00:09:07.743354 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0740c57053fe83f05b37b28257f0d6e54fe96268696d30bb86bfa3886ae20cb" May 8 00:09:07.744713 containerd[1505]: time="2025-05-08T00:09:07.744215680Z" level=info msg="StopPodSandbox for \"e0740c57053fe83f05b37b28257f0d6e54fe96268696d30bb86bfa3886ae20cb\"" May 8 00:09:07.744713 containerd[1505]: time="2025-05-08T00:09:07.744446614Z" level=info msg="Ensure that sandbox e0740c57053fe83f05b37b28257f0d6e54fe96268696d30bb86bfa3886ae20cb in task-service has been cleanup successfully" May 8 00:09:07.745617 containerd[1505]: time="2025-05-08T00:09:07.745198927Z" level=info msg="TearDown network for sandbox \"e0740c57053fe83f05b37b28257f0d6e54fe96268696d30bb86bfa3886ae20cb\" successfully" May 8 00:09:07.745617 containerd[1505]: time="2025-05-08T00:09:07.745216411Z" level=info msg="StopPodSandbox for \"e0740c57053fe83f05b37b28257f0d6e54fe96268696d30bb86bfa3886ae20cb\" returns successfully" May 8 00:09:07.747976 systemd[1]: run-netns-cni\x2d91e730b5\x2dcd01\x2de655\x2d857c\x2da6d223f1d186.mount: Deactivated successfully. 
May 8 00:09:07.748841 containerd[1505]: time="2025-05-08T00:09:07.748792902Z" level=info msg="StopPodSandbox for \"fdd6da4d20ec4df27f055121036bf7c97190962217a5cdf5866291e449fb9d3a\"" May 8 00:09:07.749430 containerd[1505]: time="2025-05-08T00:09:07.749107273Z" level=info msg="TearDown network for sandbox \"fdd6da4d20ec4df27f055121036bf7c97190962217a5cdf5866291e449fb9d3a\" successfully" May 8 00:09:07.749554 kubelet[2608]: I0508 00:09:07.749521 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1ab5c5d66d3412b753305c048a3260a102a746c0fe5327e275e158c92b9e81a" May 8 00:09:07.749773 containerd[1505]: time="2025-05-08T00:09:07.749746965Z" level=info msg="StopPodSandbox for \"fdd6da4d20ec4df27f055121036bf7c97190962217a5cdf5866291e449fb9d3a\" returns successfully" May 8 00:09:07.750356 containerd[1505]: time="2025-05-08T00:09:07.750329359Z" level=info msg="StopPodSandbox for \"e1ab5c5d66d3412b753305c048a3260a102a746c0fe5327e275e158c92b9e81a\"" May 8 00:09:07.750675 containerd[1505]: time="2025-05-08T00:09:07.750333527Z" level=info msg="StopPodSandbox for \"1ba710607968908ebb58526d0d4f5246298a90d1ced3694b2a7eb4c7fdfd43ba\"" May 8 00:09:07.750675 containerd[1505]: time="2025-05-08T00:09:07.750666213Z" level=info msg="TearDown network for sandbox \"1ba710607968908ebb58526d0d4f5246298a90d1ced3694b2a7eb4c7fdfd43ba\" successfully" May 8 00:09:07.750675 containerd[1505]: time="2025-05-08T00:09:07.750677885Z" level=info msg="StopPodSandbox for \"1ba710607968908ebb58526d0d4f5246298a90d1ced3694b2a7eb4c7fdfd43ba\" returns successfully" May 8 00:09:07.750829 containerd[1505]: time="2025-05-08T00:09:07.750729962Z" level=info msg="Ensure that sandbox e1ab5c5d66d3412b753305c048a3260a102a746c0fe5327e275e158c92b9e81a in task-service has been cleanup successfully" May 8 00:09:07.751242 containerd[1505]: time="2025-05-08T00:09:07.750953352Z" level=info msg="TearDown network for sandbox 
\"e1ab5c5d66d3412b753305c048a3260a102a746c0fe5327e275e158c92b9e81a\" successfully" May 8 00:09:07.751242 containerd[1505]: time="2025-05-08T00:09:07.750971196Z" level=info msg="StopPodSandbox for \"e1ab5c5d66d3412b753305c048a3260a102a746c0fe5327e275e158c92b9e81a\" returns successfully" May 8 00:09:07.752262 containerd[1505]: time="2025-05-08T00:09:07.752236353Z" level=info msg="StopPodSandbox for \"4580374a8fc5a405a550fbe4016d5b18429feb252bf247314d044436b4fea856\"" May 8 00:09:07.752348 containerd[1505]: time="2025-05-08T00:09:07.752319360Z" level=info msg="TearDown network for sandbox \"4580374a8fc5a405a550fbe4016d5b18429feb252bf247314d044436b4fea856\" successfully" May 8 00:09:07.752348 containerd[1505]: time="2025-05-08T00:09:07.752329609Z" level=info msg="StopPodSandbox for \"4580374a8fc5a405a550fbe4016d5b18429feb252bf247314d044436b4fea856\" returns successfully" May 8 00:09:07.752532 containerd[1505]: time="2025-05-08T00:09:07.752371037Z" level=info msg="StopPodSandbox for \"9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce\"" May 8 00:09:07.752532 containerd[1505]: time="2025-05-08T00:09:07.752431029Z" level=info msg="TearDown network for sandbox \"9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce\" successfully" May 8 00:09:07.752532 containerd[1505]: time="2025-05-08T00:09:07.752439064Z" level=info msg="StopPodSandbox for \"9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce\" returns successfully" May 8 00:09:07.753225 containerd[1505]: time="2025-05-08T00:09:07.753111178Z" level=info msg="StopPodSandbox for \"ed08d024f4f23bc0df2cd95f0f30fd56b3448e321104e173bfd266a44ce27ecd\"" May 8 00:09:07.753225 containerd[1505]: time="2025-05-08T00:09:07.753216636Z" level=info msg="TearDown network for sandbox \"ed08d024f4f23bc0df2cd95f0f30fd56b3448e321104e173bfd266a44ce27ecd\" successfully" May 8 00:09:07.753225 containerd[1505]: time="2025-05-08T00:09:07.753226755Z" level=info msg="StopPodSandbox for 
\"ed08d024f4f23bc0df2cd95f0f30fd56b3448e321104e173bfd266a44ce27ecd\" returns successfully" May 8 00:09:07.753365 containerd[1505]: time="2025-05-08T00:09:07.753337744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc8f4fc5f-xrvcn,Uid:b95f31cd-1dee-4344-bda0-406b9d8df019,Namespace:calico-apiserver,Attempt:4,}" May 8 00:09:07.753820 containerd[1505]: time="2025-05-08T00:09:07.753692761Z" level=info msg="StopPodSandbox for \"bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149\"" May 8 00:09:07.754038 containerd[1505]: time="2025-05-08T00:09:07.754001150Z" level=info msg="TearDown network for sandbox \"bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149\" successfully" May 8 00:09:07.754220 containerd[1505]: time="2025-05-08T00:09:07.754087412Z" level=info msg="StopPodSandbox for \"bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149\" returns successfully" May 8 00:09:07.754354 systemd[1]: run-netns-cni\x2dd292c146\x2da028\x2dd55c\x2d8bc9\x2d824cb01f48cf.mount: Deactivated successfully. 
May 8 00:09:07.754538 kubelet[2608]: I0508 00:09:07.754472 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13acd4157037be1ff3f8af3f021c36bc8a063fe31fafa01f4e0fc636376bc660" May 8 00:09:07.756262 containerd[1505]: time="2025-05-08T00:09:07.755470762Z" level=info msg="StopPodSandbox for \"13acd4157037be1ff3f8af3f021c36bc8a063fe31fafa01f4e0fc636376bc660\"" May 8 00:09:07.756757 containerd[1505]: time="2025-05-08T00:09:07.756160989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc8f4fc5f-nrdtq,Uid:d2fb4029-1146-49b1-8115-09528e7b165f,Namespace:calico-apiserver,Attempt:4,}" May 8 00:09:07.756757 containerd[1505]: time="2025-05-08T00:09:07.756734688Z" level=info msg="Ensure that sandbox 13acd4157037be1ff3f8af3f021c36bc8a063fe31fafa01f4e0fc636376bc660 in task-service has been cleanup successfully" May 8 00:09:07.759451 containerd[1505]: time="2025-05-08T00:09:07.756969749Z" level=info msg="TearDown network for sandbox \"13acd4157037be1ff3f8af3f021c36bc8a063fe31fafa01f4e0fc636376bc660\" successfully" May 8 00:09:07.759451 containerd[1505]: time="2025-05-08T00:09:07.756990939Z" level=info msg="StopPodSandbox for \"13acd4157037be1ff3f8af3f021c36bc8a063fe31fafa01f4e0fc636376bc660\" returns successfully" May 8 00:09:07.759451 containerd[1505]: time="2025-05-08T00:09:07.757440064Z" level=info msg="StopPodSandbox for \"1bbdd8a2f6596e6a701daf7f298fdbe98e948568198f92d4322c89321c4cd110\"" May 8 00:09:07.759451 containerd[1505]: time="2025-05-08T00:09:07.757542426Z" level=info msg="TearDown network for sandbox \"1bbdd8a2f6596e6a701daf7f298fdbe98e948568198f92d4322c89321c4cd110\" successfully" May 8 00:09:07.759451 containerd[1505]: time="2025-05-08T00:09:07.757553396Z" level=info msg="StopPodSandbox for \"1bbdd8a2f6596e6a701daf7f298fdbe98e948568198f92d4322c89321c4cd110\" returns successfully" May 8 00:09:07.759451 containerd[1505]: time="2025-05-08T00:09:07.758239196Z" level=info msg="StopPodSandbox for 
\"2fb76e3590e1896ad63800ee92649bab322bb9f1d0aaed8344912a62e5f4accf\"" May 8 00:09:07.759451 containerd[1505]: time="2025-05-08T00:09:07.758333763Z" level=info msg="TearDown network for sandbox \"2fb76e3590e1896ad63800ee92649bab322bb9f1d0aaed8344912a62e5f4accf\" successfully" May 8 00:09:07.759451 containerd[1505]: time="2025-05-08T00:09:07.758344303Z" level=info msg="StopPodSandbox for \"2fb76e3590e1896ad63800ee92649bab322bb9f1d0aaed8344912a62e5f4accf\" returns successfully" May 8 00:09:07.759451 containerd[1505]: time="2025-05-08T00:09:07.758577261Z" level=info msg="StopPodSandbox for \"b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046\"" May 8 00:09:07.759451 containerd[1505]: time="2025-05-08T00:09:07.758688810Z" level=info msg="TearDown network for sandbox \"b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046\" successfully" May 8 00:09:07.759451 containerd[1505]: time="2025-05-08T00:09:07.758702225Z" level=info msg="StopPodSandbox for \"b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046\" returns successfully" May 8 00:09:07.759758 kubelet[2608]: E0508 00:09:07.758863 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:09:07.759758 kubelet[2608]: I0508 00:09:07.759006 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c63da5736498491a842025315c3885e9be8504c5605d90888bd11017e30aff3" May 8 00:09:07.758879 systemd[1]: run-netns-cni\x2d9e697222\x2d3872\x2df9e5\x2d0b00\x2d932b651456d5.mount: Deactivated successfully. 
May 8 00:09:07.760205 containerd[1505]: time="2025-05-08T00:09:07.759570066Z" level=info msg="StopPodSandbox for \"8c63da5736498491a842025315c3885e9be8504c5605d90888bd11017e30aff3\"" May 8 00:09:07.760340 containerd[1505]: time="2025-05-08T00:09:07.760238162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-w95np,Uid:5bca3bbf-f7d4-44b0-9686-15081255aefa,Namespace:kube-system,Attempt:4,}" May 8 00:09:07.761943 containerd[1505]: time="2025-05-08T00:09:07.761716320Z" level=info msg="Ensure that sandbox 8c63da5736498491a842025315c3885e9be8504c5605d90888bd11017e30aff3 in task-service has been cleanup successfully" May 8 00:09:07.762513 containerd[1505]: time="2025-05-08T00:09:07.762487629Z" level=info msg="TearDown network for sandbox \"8c63da5736498491a842025315c3885e9be8504c5605d90888bd11017e30aff3\" successfully" May 8 00:09:07.762729 containerd[1505]: time="2025-05-08T00:09:07.762709316Z" level=info msg="StopPodSandbox for \"8c63da5736498491a842025315c3885e9be8504c5605d90888bd11017e30aff3\" returns successfully" May 8 00:09:07.763649 kubelet[2608]: I0508 00:09:07.763049 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d772c7a0c77343cfd6b7b8001457c3fc80a71719d98f08ac8e8f09fa337c9452" May 8 00:09:07.763737 containerd[1505]: time="2025-05-08T00:09:07.763627431Z" level=info msg="StopPodSandbox for \"d772c7a0c77343cfd6b7b8001457c3fc80a71719d98f08ac8e8f09fa337c9452\"" May 8 00:09:07.763837 containerd[1505]: time="2025-05-08T00:09:07.763809424Z" level=info msg="Ensure that sandbox d772c7a0c77343cfd6b7b8001457c3fc80a71719d98f08ac8e8f09fa337c9452 in task-service has been cleanup successfully" May 8 00:09:07.764394 containerd[1505]: time="2025-05-08T00:09:07.764220086Z" level=info msg="TearDown network for sandbox \"d772c7a0c77343cfd6b7b8001457c3fc80a71719d98f08ac8e8f09fa337c9452\" successfully" May 8 00:09:07.764394 containerd[1505]: time="2025-05-08T00:09:07.764237418Z" level=info msg="StopPodSandbox for 
\"d772c7a0c77343cfd6b7b8001457c3fc80a71719d98f08ac8e8f09fa337c9452\" returns successfully" May 8 00:09:07.764902 containerd[1505]: time="2025-05-08T00:09:07.764870498Z" level=info msg="StopPodSandbox for \"1d441a98e9a13d3b85839cd4fea0823d0cafc6e4b9a8fd62f7f0d4f65b1bfb05\"" May 8 00:09:07.764925 systemd[1]: run-netns-cni\x2dda04d2b6\x2d8ecd\x2d3d3c\x2d14a0\x2d3956ef7cd084.mount: Deactivated successfully. May 8 00:09:07.765076 containerd[1505]: time="2025-05-08T00:09:07.765017765Z" level=info msg="TearDown network for sandbox \"1d441a98e9a13d3b85839cd4fea0823d0cafc6e4b9a8fd62f7f0d4f65b1bfb05\" successfully" May 8 00:09:07.765076 containerd[1505]: time="2025-05-08T00:09:07.765034877Z" level=info msg="StopPodSandbox for \"1d441a98e9a13d3b85839cd4fea0823d0cafc6e4b9a8fd62f7f0d4f65b1bfb05\" returns successfully" May 8 00:09:07.765278 containerd[1505]: time="2025-05-08T00:09:07.764876169Z" level=info msg="StopPodSandbox for \"689f2b83f25ea02990567bd69e744cf6e584c080f266c411aaba192895a5ce6c\"" May 8 00:09:07.765615 containerd[1505]: time="2025-05-08T00:09:07.765465546Z" level=info msg="TearDown network for sandbox \"689f2b83f25ea02990567bd69e744cf6e584c080f266c411aaba192895a5ce6c\" successfully" May 8 00:09:07.765615 containerd[1505]: time="2025-05-08T00:09:07.765548763Z" level=info msg="StopPodSandbox for \"689f2b83f25ea02990567bd69e744cf6e584c080f266c411aaba192895a5ce6c\" returns successfully" May 8 00:09:07.766424 containerd[1505]: time="2025-05-08T00:09:07.766390976Z" level=info msg="StopPodSandbox for \"db91feb850b4e1f1d819229cd60b7db534413a9d78dc2dac7f000587024c68cc\"" May 8 00:09:07.766510 containerd[1505]: time="2025-05-08T00:09:07.766482377Z" level=info msg="TearDown network for sandbox \"db91feb850b4e1f1d819229cd60b7db534413a9d78dc2dac7f000587024c68cc\" successfully" May 8 00:09:07.766510 containerd[1505]: time="2025-05-08T00:09:07.766501633Z" level=info msg="StopPodSandbox for \"db91feb850b4e1f1d819229cd60b7db534413a9d78dc2dac7f000587024c68cc\" returns 
successfully" May 8 00:09:07.766568 containerd[1505]: time="2025-05-08T00:09:07.766396566Z" level=info msg="StopPodSandbox for \"c458928b1f50c71b1ea5c82db22924cd269793941f36623a768c0b55ce0ab1f8\"" May 8 00:09:07.766672 containerd[1505]: time="2025-05-08T00:09:07.766639672Z" level=info msg="TearDown network for sandbox \"c458928b1f50c71b1ea5c82db22924cd269793941f36623a768c0b55ce0ab1f8\" successfully" May 8 00:09:07.766672 containerd[1505]: time="2025-05-08T00:09:07.766665562Z" level=info msg="StopPodSandbox for \"c458928b1f50c71b1ea5c82db22924cd269793941f36623a768c0b55ce0ab1f8\" returns successfully" May 8 00:09:07.767313 containerd[1505]: time="2025-05-08T00:09:07.767272813Z" level=info msg="StopPodSandbox for \"6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db\"" May 8 00:09:07.767407 containerd[1505]: time="2025-05-08T00:09:07.767387759Z" level=info msg="TearDown network for sandbox \"6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db\" successfully" May 8 00:09:07.767407 containerd[1505]: time="2025-05-08T00:09:07.767405161Z" level=info msg="StopPodSandbox for \"6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db\" returns successfully" May 8 00:09:07.767480 containerd[1505]: time="2025-05-08T00:09:07.767445317Z" level=info msg="StopPodSandbox for \"40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195\"" May 8 00:09:07.767549 containerd[1505]: time="2025-05-08T00:09:07.767527732Z" level=info msg="TearDown network for sandbox \"40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195\" successfully" May 8 00:09:07.767612 containerd[1505]: time="2025-05-08T00:09:07.767546097Z" level=info msg="StopPodSandbox for \"40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195\" returns successfully" May 8 00:09:07.768113 containerd[1505]: time="2025-05-08T00:09:07.768073909Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-57974f499f-vv54k,Uid:5f9ac193-cc58-42e7-b80a-b5e62d33d96a,Namespace:calico-system,Attempt:4,}" May 8 00:09:07.768225 containerd[1505]: time="2025-05-08T00:09:07.768196168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d66hf,Uid:d6b5bca2-fe34-4d13-a1a5-1648d982e2b2,Namespace:calico-system,Attempt:4,}" May 8 00:09:07.768452 kubelet[2608]: I0508 00:09:07.768422 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2de835992095b9fbf6441a761e32c68364d0ec0a012693336f911edf7c5bb032" May 8 00:09:07.769074 containerd[1505]: time="2025-05-08T00:09:07.769051255Z" level=info msg="StopPodSandbox for \"2de835992095b9fbf6441a761e32c68364d0ec0a012693336f911edf7c5bb032\"" May 8 00:09:07.769289 containerd[1505]: time="2025-05-08T00:09:07.769264516Z" level=info msg="Ensure that sandbox 2de835992095b9fbf6441a761e32c68364d0ec0a012693336f911edf7c5bb032 in task-service has been cleanup successfully" May 8 00:09:07.770497 containerd[1505]: time="2025-05-08T00:09:07.770476825Z" level=info msg="TearDown network for sandbox \"2de835992095b9fbf6441a761e32c68364d0ec0a012693336f911edf7c5bb032\" successfully" May 8 00:09:07.770497 containerd[1505]: time="2025-05-08T00:09:07.770495350Z" level=info msg="StopPodSandbox for \"2de835992095b9fbf6441a761e32c68364d0ec0a012693336f911edf7c5bb032\" returns successfully" May 8 00:09:07.770812 containerd[1505]: time="2025-05-08T00:09:07.770779243Z" level=info msg="StopPodSandbox for \"7e387138ee2bc31c857e7ac7e57552e0f1614d16a7edcd55f4407e83fbd4e7a1\"" May 8 00:09:07.770907 containerd[1505]: time="2025-05-08T00:09:07.770889640Z" level=info msg="TearDown network for sandbox \"7e387138ee2bc31c857e7ac7e57552e0f1614d16a7edcd55f4407e83fbd4e7a1\" successfully" May 8 00:09:07.770944 containerd[1505]: time="2025-05-08T00:09:07.770906842Z" level=info msg="StopPodSandbox for \"7e387138ee2bc31c857e7ac7e57552e0f1614d16a7edcd55f4407e83fbd4e7a1\" returns 
successfully" May 8 00:09:07.771310 containerd[1505]: time="2025-05-08T00:09:07.771272971Z" level=info msg="StopPodSandbox for \"9352f7ab0f10cb03bbf97ff42de8e754493ea08129846587bbe14a230d35b516\"" May 8 00:09:07.771386 containerd[1505]: time="2025-05-08T00:09:07.771371556Z" level=info msg="TearDown network for sandbox \"9352f7ab0f10cb03bbf97ff42de8e754493ea08129846587bbe14a230d35b516\" successfully" May 8 00:09:07.771386 containerd[1505]: time="2025-05-08T00:09:07.771382807Z" level=info msg="StopPodSandbox for \"9352f7ab0f10cb03bbf97ff42de8e754493ea08129846587bbe14a230d35b516\" returns successfully" May 8 00:09:07.771616 containerd[1505]: time="2025-05-08T00:09:07.771574267Z" level=info msg="StopPodSandbox for \"498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a\"" May 8 00:09:07.771710 containerd[1505]: time="2025-05-08T00:09:07.771686418Z" level=info msg="TearDown network for sandbox \"498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a\" successfully" May 8 00:09:07.771756 containerd[1505]: time="2025-05-08T00:09:07.771706435Z" level=info msg="StopPodSandbox for \"498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a\" returns successfully" May 8 00:09:07.771969 kubelet[2608]: E0508 00:09:07.771948 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:09:07.772312 containerd[1505]: time="2025-05-08T00:09:07.772257581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t4grq,Uid:08fdba5b-86cd-42fd-89b0-e9a10ac8f063,Namespace:kube-system,Attempt:4,}" May 8 00:09:07.887029 systemd[1]: Started sshd@7-10.0.0.79:22-10.0.0.1:52192.service - OpenSSH per-connection server daemon (10.0.0.1:52192). May 8 00:09:08.090714 systemd[1]: run-netns-cni\x2de04ac4e3\x2d22ca\x2d02cf\x2d04ca\x2de45a0ca1aca8.mount: Deactivated successfully. 
May 8 00:09:08.090878 systemd[1]: run-netns-cni\x2d57bd4b4a\x2d964c\x2d28df\x2d2346\x2d8dec53028271.mount: Deactivated successfully. May 8 00:09:08.091928 sshd[4214]: Accepted publickey for core from 10.0.0.1 port 52192 ssh2: RSA SHA256:kwO0JqEIt1ObdnqYCFs6QolAz4wrphlF1QS6lWhQBXI May 8 00:09:08.095158 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:08.122764 systemd-logind[1492]: New session 8 of user core. May 8 00:09:08.126888 systemd[1]: Started session-8.scope - Session 8 of User core. May 8 00:09:08.679000 sshd[4216]: Connection closed by 10.0.0.1 port 52192 May 8 00:09:08.679643 sshd-session[4214]: pam_unix(sshd:session): session closed for user core May 8 00:09:08.685269 systemd[1]: sshd@7-10.0.0.79:22-10.0.0.1:52192.service: Deactivated successfully. May 8 00:09:08.688690 systemd[1]: session-8.scope: Deactivated successfully. May 8 00:09:08.690424 systemd-logind[1492]: Session 8 logged out. Waiting for processes to exit. May 8 00:09:08.692330 systemd-logind[1492]: Removed session 8. 
May 8 00:09:09.003257 containerd[1505]: time="2025-05-08T00:09:09.003188102Z" level=error msg="Failed to destroy network for sandbox \"b9a6d0427d822b96f0593326e6e5026638316dbb32b4c55c1eb227dbe31fa107\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:09.004015 containerd[1505]: time="2025-05-08T00:09:09.003697769Z" level=error msg="encountered an error cleaning up failed sandbox \"b9a6d0427d822b96f0593326e6e5026638316dbb32b4c55c1eb227dbe31fa107\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:09.004015 containerd[1505]: time="2025-05-08T00:09:09.003754626Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc8f4fc5f-nrdtq,Uid:d2fb4029-1146-49b1-8115-09528e7b165f,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"b9a6d0427d822b96f0593326e6e5026638316dbb32b4c55c1eb227dbe31fa107\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:09.004121 kubelet[2608]: E0508 00:09:09.004042 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9a6d0427d822b96f0593326e6e5026638316dbb32b4c55c1eb227dbe31fa107\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:09.004482 kubelet[2608]: E0508 00:09:09.004136 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"b9a6d0427d822b96f0593326e6e5026638316dbb32b4c55c1eb227dbe31fa107\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-nrdtq" May 8 00:09:09.004482 kubelet[2608]: E0508 00:09:09.004163 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9a6d0427d822b96f0593326e6e5026638316dbb32b4c55c1eb227dbe31fa107\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-nrdtq" May 8 00:09:09.004482 kubelet[2608]: E0508 00:09:09.004225 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bc8f4fc5f-nrdtq_calico-apiserver(d2fb4029-1146-49b1-8115-09528e7b165f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bc8f4fc5f-nrdtq_calico-apiserver(d2fb4029-1146-49b1-8115-09528e7b165f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9a6d0427d822b96f0593326e6e5026638316dbb32b4c55c1eb227dbe31fa107\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-nrdtq" podUID="d2fb4029-1146-49b1-8115-09528e7b165f" May 8 00:09:09.085075 containerd[1505]: time="2025-05-08T00:09:09.084941481Z" level=error msg="Failed to destroy network for sandbox \"ef5994fc67f1a89100c26df9f9425d5b19bb3f85bd0f46b4491f35f38af8d835\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:09.085655 containerd[1505]: time="2025-05-08T00:09:09.085619095Z" level=error msg="encountered an error cleaning up failed sandbox \"ef5994fc67f1a89100c26df9f9425d5b19bb3f85bd0f46b4491f35f38af8d835\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:09.085716 containerd[1505]: time="2025-05-08T00:09:09.085683676Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57974f499f-vv54k,Uid:5f9ac193-cc58-42e7-b80a-b5e62d33d96a,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"ef5994fc67f1a89100c26df9f9425d5b19bb3f85bd0f46b4491f35f38af8d835\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:09.086103 kubelet[2608]: E0508 00:09:09.085960 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef5994fc67f1a89100c26df9f9425d5b19bb3f85bd0f46b4491f35f38af8d835\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:09.086179 kubelet[2608]: E0508 00:09:09.086164 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef5994fc67f1a89100c26df9f9425d5b19bb3f85bd0f46b4491f35f38af8d835\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-57974f499f-vv54k" May 8 00:09:09.086208 kubelet[2608]: E0508 00:09:09.086186 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef5994fc67f1a89100c26df9f9425d5b19bb3f85bd0f46b4491f35f38af8d835\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57974f499f-vv54k" May 8 00:09:09.086278 kubelet[2608]: E0508 00:09:09.086249 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57974f499f-vv54k_calico-system(5f9ac193-cc58-42e7-b80a-b5e62d33d96a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57974f499f-vv54k_calico-system(5f9ac193-cc58-42e7-b80a-b5e62d33d96a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef5994fc67f1a89100c26df9f9425d5b19bb3f85bd0f46b4491f35f38af8d835\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57974f499f-vv54k" podUID="5f9ac193-cc58-42e7-b80a-b5e62d33d96a" May 8 00:09:09.100534 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ef5994fc67f1a89100c26df9f9425d5b19bb3f85bd0f46b4491f35f38af8d835-shm.mount: Deactivated successfully. May 8 00:09:09.100764 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b9a6d0427d822b96f0593326e6e5026638316dbb32b4c55c1eb227dbe31fa107-shm.mount: Deactivated successfully. 
May 8 00:09:09.175690 containerd[1505]: time="2025-05-08T00:09:09.175629209Z" level=error msg="Failed to destroy network for sandbox \"4c204cb226d4da609e269577cbb2d6ebe762ce1fb1d06a5db690f645c2d5c762\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:09.177074 containerd[1505]: time="2025-05-08T00:09:09.177038417Z" level=error msg="encountered an error cleaning up failed sandbox \"4c204cb226d4da609e269577cbb2d6ebe762ce1fb1d06a5db690f645c2d5c762\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:09.177147 containerd[1505]: time="2025-05-08T00:09:09.177113688Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-w95np,Uid:5bca3bbf-f7d4-44b0-9686-15081255aefa,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"4c204cb226d4da609e269577cbb2d6ebe762ce1fb1d06a5db690f645c2d5c762\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:09.178858 kubelet[2608]: E0508 00:09:09.178804 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c204cb226d4da609e269577cbb2d6ebe762ce1fb1d06a5db690f645c2d5c762\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:09.178922 kubelet[2608]: E0508 00:09:09.178881 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"4c204cb226d4da609e269577cbb2d6ebe762ce1fb1d06a5db690f645c2d5c762\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-w95np" May 8 00:09:09.178922 kubelet[2608]: E0508 00:09:09.178901 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c204cb226d4da609e269577cbb2d6ebe762ce1fb1d06a5db690f645c2d5c762\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-w95np" May 8 00:09:09.178986 kubelet[2608]: E0508 00:09:09.178955 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-w95np_kube-system(5bca3bbf-f7d4-44b0-9686-15081255aefa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-w95np_kube-system(5bca3bbf-f7d4-44b0-9686-15081255aefa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c204cb226d4da609e269577cbb2d6ebe762ce1fb1d06a5db690f645c2d5c762\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-w95np" podUID="5bca3bbf-f7d4-44b0-9686-15081255aefa" May 8 00:09:09.179543 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4c204cb226d4da609e269577cbb2d6ebe762ce1fb1d06a5db690f645c2d5c762-shm.mount: Deactivated successfully. 
May 8 00:09:09.180023 containerd[1505]: time="2025-05-08T00:09:09.179612123Z" level=error msg="Failed to destroy network for sandbox \"e57f8a6ac7b56a5028e70fc59134f6c8496be523a151d4a628eb619a8f40edba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:09.181095 containerd[1505]: time="2025-05-08T00:09:09.181011713Z" level=error msg="encountered an error cleaning up failed sandbox \"e57f8a6ac7b56a5028e70fc59134f6c8496be523a151d4a628eb619a8f40edba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:09.181565 containerd[1505]: time="2025-05-08T00:09:09.181523193Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc8f4fc5f-xrvcn,Uid:b95f31cd-1dee-4344-bda0-406b9d8df019,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"e57f8a6ac7b56a5028e70fc59134f6c8496be523a151d4a628eb619a8f40edba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:09.181991 kubelet[2608]: E0508 00:09:09.181946 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e57f8a6ac7b56a5028e70fc59134f6c8496be523a151d4a628eb619a8f40edba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:09.182045 kubelet[2608]: E0508 00:09:09.182018 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"e57f8a6ac7b56a5028e70fc59134f6c8496be523a151d4a628eb619a8f40edba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-xrvcn" May 8 00:09:09.182045 kubelet[2608]: E0508 00:09:09.182038 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e57f8a6ac7b56a5028e70fc59134f6c8496be523a151d4a628eb619a8f40edba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-xrvcn" May 8 00:09:09.182099 kubelet[2608]: E0508 00:09:09.182078 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bc8f4fc5f-xrvcn_calico-apiserver(b95f31cd-1dee-4344-bda0-406b9d8df019)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bc8f4fc5f-xrvcn_calico-apiserver(b95f31cd-1dee-4344-bda0-406b9d8df019)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e57f8a6ac7b56a5028e70fc59134f6c8496be523a151d4a628eb619a8f40edba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-xrvcn" podUID="b95f31cd-1dee-4344-bda0-406b9d8df019" May 8 00:09:09.183765 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e57f8a6ac7b56a5028e70fc59134f6c8496be523a151d4a628eb619a8f40edba-shm.mount: Deactivated successfully. 
May 8 00:09:09.207011 containerd[1505]: time="2025-05-08T00:09:09.206931354Z" level=error msg="Failed to destroy network for sandbox \"370ec9cd93d24149ed7966601d74dd5891ad53aba009412149339f104af553e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:09.210602 containerd[1505]: time="2025-05-08T00:09:09.207463123Z" level=error msg="encountered an error cleaning up failed sandbox \"370ec9cd93d24149ed7966601d74dd5891ad53aba009412149339f104af553e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:09.210602 containerd[1505]: time="2025-05-08T00:09:09.207532805Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d66hf,Uid:d6b5bca2-fe34-4d13-a1a5-1648d982e2b2,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"370ec9cd93d24149ed7966601d74dd5891ad53aba009412149339f104af553e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:09.210428 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-370ec9cd93d24149ed7966601d74dd5891ad53aba009412149339f104af553e9-shm.mount: Deactivated successfully. 
May 8 00:09:09.210773 kubelet[2608]: E0508 00:09:09.208181 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"370ec9cd93d24149ed7966601d74dd5891ad53aba009412149339f104af553e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:09.210773 kubelet[2608]: E0508 00:09:09.208483 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"370ec9cd93d24149ed7966601d74dd5891ad53aba009412149339f104af553e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d66hf" May 8 00:09:09.210773 kubelet[2608]: E0508 00:09:09.208514 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"370ec9cd93d24149ed7966601d74dd5891ad53aba009412149339f104af553e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d66hf" May 8 00:09:09.210873 kubelet[2608]: E0508 00:09:09.208563 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d66hf_calico-system(d6b5bca2-fe34-4d13-a1a5-1648d982e2b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d66hf_calico-system(d6b5bca2-fe34-4d13-a1a5-1648d982e2b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"370ec9cd93d24149ed7966601d74dd5891ad53aba009412149339f104af553e9\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d66hf" podUID="d6b5bca2-fe34-4d13-a1a5-1648d982e2b2" May 8 00:09:09.219844 containerd[1505]: time="2025-05-08T00:09:09.219680639Z" level=error msg="Failed to destroy network for sandbox \"c6d2e3db71c4cf3281e38c6a06cc3c89328d2f643cddac8b9aa1f2619f56276a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:09.220546 containerd[1505]: time="2025-05-08T00:09:09.220493757Z" level=error msg="encountered an error cleaning up failed sandbox \"c6d2e3db71c4cf3281e38c6a06cc3c89328d2f643cddac8b9aa1f2619f56276a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:09.220721 containerd[1505]: time="2025-05-08T00:09:09.220653597Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t4grq,Uid:08fdba5b-86cd-42fd-89b0-e9a10ac8f063,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"c6d2e3db71c4cf3281e38c6a06cc3c89328d2f643cddac8b9aa1f2619f56276a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:09:09.221120 kubelet[2608]: E0508 00:09:09.221059 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6d2e3db71c4cf3281e38c6a06cc3c89328d2f643cddac8b9aa1f2619f56276a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" May 8 00:09:09.222345 kubelet[2608]: E0508 00:09:09.221141 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6d2e3db71c4cf3281e38c6a06cc3c89328d2f643cddac8b9aa1f2619f56276a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-t4grq" May 8 00:09:09.222345 kubelet[2608]: E0508 00:09:09.221164 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6d2e3db71c4cf3281e38c6a06cc3c89328d2f643cddac8b9aa1f2619f56276a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-t4grq" May 8 00:09:09.222345 kubelet[2608]: E0508 00:09:09.221205 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-t4grq_kube-system(08fdba5b-86cd-42fd-89b0-e9a10ac8f063)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-t4grq_kube-system(08fdba5b-86cd-42fd-89b0-e9a10ac8f063)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6d2e3db71c4cf3281e38c6a06cc3c89328d2f643cddac8b9aa1f2619f56276a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-t4grq" podUID="08fdba5b-86cd-42fd-89b0-e9a10ac8f063" May 8 00:09:09.311946 containerd[1505]: time="2025-05-08T00:09:09.311685301Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:09:09.314546 containerd[1505]: time="2025-05-08T00:09:09.314481696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 8 00:09:09.316059 containerd[1505]: time="2025-05-08T00:09:09.316004056Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:09:09.318172 containerd[1505]: time="2025-05-08T00:09:09.318132726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:09:09.318816 containerd[1505]: time="2025-05-08T00:09:09.318762439Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 6.518280165s" May 8 00:09:09.318816 containerd[1505]: time="2025-05-08T00:09:09.318808846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 8 00:09:09.330034 containerd[1505]: time="2025-05-08T00:09:09.329982090Z" level=info msg="CreateContainer within sandbox \"3262cccf256e1eb13c01accd433d71b15ab912e4324f2c134b5615e8d3cd03d6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 8 00:09:09.354297 containerd[1505]: time="2025-05-08T00:09:09.354223048Z" level=info msg="CreateContainer within sandbox \"3262cccf256e1eb13c01accd433d71b15ab912e4324f2c134b5615e8d3cd03d6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id 
\"f494396cc520effe9a6c54c131c1d37687d438b98e56ae26c29a7bef472e3380\"" May 8 00:09:09.354956 containerd[1505]: time="2025-05-08T00:09:09.354905620Z" level=info msg="StartContainer for \"f494396cc520effe9a6c54c131c1d37687d438b98e56ae26c29a7bef472e3380\"" May 8 00:09:09.429824 systemd[1]: Started cri-containerd-f494396cc520effe9a6c54c131c1d37687d438b98e56ae26c29a7bef472e3380.scope - libcontainer container f494396cc520effe9a6c54c131c1d37687d438b98e56ae26c29a7bef472e3380. May 8 00:09:09.475724 containerd[1505]: time="2025-05-08T00:09:09.475673987Z" level=info msg="StartContainer for \"f494396cc520effe9a6c54c131c1d37687d438b98e56ae26c29a7bef472e3380\" returns successfully" May 8 00:09:09.550171 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 8 00:09:09.550394 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 8 00:09:09.776612 kubelet[2608]: I0508 00:09:09.776552 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9a6d0427d822b96f0593326e6e5026638316dbb32b4c55c1eb227dbe31fa107" May 8 00:09:09.779607 containerd[1505]: time="2025-05-08T00:09:09.778452531Z" level=info msg="StopPodSandbox for \"b9a6d0427d822b96f0593326e6e5026638316dbb32b4c55c1eb227dbe31fa107\"" May 8 00:09:09.779607 containerd[1505]: time="2025-05-08T00:09:09.778800655Z" level=info msg="Ensure that sandbox b9a6d0427d822b96f0593326e6e5026638316dbb32b4c55c1eb227dbe31fa107 in task-service has been cleanup successfully" May 8 00:09:09.779607 containerd[1505]: time="2025-05-08T00:09:09.779162145Z" level=info msg="TearDown network for sandbox \"b9a6d0427d822b96f0593326e6e5026638316dbb32b4c55c1eb227dbe31fa107\" successfully" May 8 00:09:09.779607 containerd[1505]: time="2025-05-08T00:09:09.779194456Z" level=info msg="StopPodSandbox for \"b9a6d0427d822b96f0593326e6e5026638316dbb32b4c55c1eb227dbe31fa107\" returns successfully" May 8 00:09:09.779777 containerd[1505]: time="2025-05-08T00:09:09.779761572Z" 
level=info msg="StopPodSandbox for \"e1ab5c5d66d3412b753305c048a3260a102a746c0fe5327e275e158c92b9e81a\"" May 8 00:09:09.779918 containerd[1505]: time="2025-05-08T00:09:09.779851160Z" level=info msg="TearDown network for sandbox \"e1ab5c5d66d3412b753305c048a3260a102a746c0fe5327e275e158c92b9e81a\" successfully" May 8 00:09:09.779918 containerd[1505]: time="2025-05-08T00:09:09.779877139Z" level=info msg="StopPodSandbox for \"e1ab5c5d66d3412b753305c048a3260a102a746c0fe5327e275e158c92b9e81a\" returns successfully" May 8 00:09:09.780530 containerd[1505]: time="2025-05-08T00:09:09.780482286Z" level=info msg="StopPodSandbox for \"4580374a8fc5a405a550fbe4016d5b18429feb252bf247314d044436b4fea856\"" May 8 00:09:09.780731 containerd[1505]: time="2025-05-08T00:09:09.780663246Z" level=info msg="TearDown network for sandbox \"4580374a8fc5a405a550fbe4016d5b18429feb252bf247314d044436b4fea856\" successfully" May 8 00:09:09.780731 containerd[1505]: time="2025-05-08T00:09:09.780702249Z" level=info msg="StopPodSandbox for \"4580374a8fc5a405a550fbe4016d5b18429feb252bf247314d044436b4fea856\" returns successfully" May 8 00:09:09.781171 containerd[1505]: time="2025-05-08T00:09:09.781128109Z" level=info msg="StopPodSandbox for \"ed08d024f4f23bc0df2cd95f0f30fd56b3448e321104e173bfd266a44ce27ecd\"" May 8 00:09:09.781272 containerd[1505]: time="2025-05-08T00:09:09.781247583Z" level=info msg="TearDown network for sandbox \"ed08d024f4f23bc0df2cd95f0f30fd56b3448e321104e173bfd266a44ce27ecd\" successfully" May 8 00:09:09.781272 containerd[1505]: time="2025-05-08T00:09:09.781265587Z" level=info msg="StopPodSandbox for \"ed08d024f4f23bc0df2cd95f0f30fd56b3448e321104e173bfd266a44ce27ecd\" returns successfully" May 8 00:09:09.781998 kubelet[2608]: I0508 00:09:09.781226 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6d2e3db71c4cf3281e38c6a06cc3c89328d2f643cddac8b9aa1f2619f56276a" May 8 00:09:09.782184 containerd[1505]: time="2025-05-08T00:09:09.782085157Z" 
level=info msg="StopPodSandbox for \"bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149\"" May 8 00:09:09.782255 containerd[1505]: time="2025-05-08T00:09:09.782190264Z" level=info msg="TearDown network for sandbox \"bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149\" successfully" May 8 00:09:09.782255 containerd[1505]: time="2025-05-08T00:09:09.782252512Z" level=info msg="StopPodSandbox for \"bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149\" returns successfully" May 8 00:09:09.782845 containerd[1505]: time="2025-05-08T00:09:09.782807444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc8f4fc5f-nrdtq,Uid:d2fb4029-1146-49b1-8115-09528e7b165f,Namespace:calico-apiserver,Attempt:5,}" May 8 00:09:09.783403 containerd[1505]: time="2025-05-08T00:09:09.783338151Z" level=info msg="StopPodSandbox for \"c6d2e3db71c4cf3281e38c6a06cc3c89328d2f643cddac8b9aa1f2619f56276a\"" May 8 00:09:09.783614 containerd[1505]: time="2025-05-08T00:09:09.783562954Z" level=info msg="Ensure that sandbox c6d2e3db71c4cf3281e38c6a06cc3c89328d2f643cddac8b9aa1f2619f56276a in task-service has been cleanup successfully" May 8 00:09:09.783993 containerd[1505]: time="2025-05-08T00:09:09.783957415Z" level=info msg="TearDown network for sandbox \"c6d2e3db71c4cf3281e38c6a06cc3c89328d2f643cddac8b9aa1f2619f56276a\" successfully" May 8 00:09:09.783993 containerd[1505]: time="2025-05-08T00:09:09.783980358Z" level=info msg="StopPodSandbox for \"c6d2e3db71c4cf3281e38c6a06cc3c89328d2f643cddac8b9aa1f2619f56276a\" returns successfully" May 8 00:09:09.784403 containerd[1505]: time="2025-05-08T00:09:09.784249784Z" level=info msg="StopPodSandbox for \"2de835992095b9fbf6441a761e32c68364d0ec0a012693336f911edf7c5bb032\"" May 8 00:09:09.784451 containerd[1505]: time="2025-05-08T00:09:09.784428390Z" level=info msg="TearDown network for sandbox \"2de835992095b9fbf6441a761e32c68364d0ec0a012693336f911edf7c5bb032\" successfully" May 8 00:09:09.784451 
containerd[1505]: time="2025-05-08T00:09:09.784442427Z" level=info msg="StopPodSandbox for \"2de835992095b9fbf6441a761e32c68364d0ec0a012693336f911edf7c5bb032\" returns successfully" May 8 00:09:09.785773 containerd[1505]: time="2025-05-08T00:09:09.785683018Z" level=info msg="StopPodSandbox for \"7e387138ee2bc31c857e7ac7e57552e0f1614d16a7edcd55f4407e83fbd4e7a1\"" May 8 00:09:09.785840 containerd[1505]: time="2025-05-08T00:09:09.785810166Z" level=info msg="TearDown network for sandbox \"7e387138ee2bc31c857e7ac7e57552e0f1614d16a7edcd55f4407e83fbd4e7a1\" successfully" May 8 00:09:09.785840 containerd[1505]: time="2025-05-08T00:09:09.785821397Z" level=info msg="StopPodSandbox for \"7e387138ee2bc31c857e7ac7e57552e0f1614d16a7edcd55f4407e83fbd4e7a1\" returns successfully" May 8 00:09:09.786858 containerd[1505]: time="2025-05-08T00:09:09.786251807Z" level=info msg="StopPodSandbox for \"9352f7ab0f10cb03bbf97ff42de8e754493ea08129846587bbe14a230d35b516\"" May 8 00:09:09.786858 containerd[1505]: time="2025-05-08T00:09:09.786381229Z" level=info msg="TearDown network for sandbox \"9352f7ab0f10cb03bbf97ff42de8e754493ea08129846587bbe14a230d35b516\" successfully" May 8 00:09:09.786858 containerd[1505]: time="2025-05-08T00:09:09.786397370Z" level=info msg="StopPodSandbox for \"9352f7ab0f10cb03bbf97ff42de8e754493ea08129846587bbe14a230d35b516\" returns successfully" May 8 00:09:09.787067 containerd[1505]: time="2025-05-08T00:09:09.787049835Z" level=info msg="StopPodSandbox for \"498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a\"" May 8 00:09:09.787411 containerd[1505]: time="2025-05-08T00:09:09.787215247Z" level=info msg="TearDown network for sandbox \"498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a\" successfully" May 8 00:09:09.787411 containerd[1505]: time="2025-05-08T00:09:09.787229423Z" level=info msg="StopPodSandbox for \"498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a\" returns successfully" May 8 00:09:09.787706 kubelet[2608]: 
E0508 00:09:09.787673 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:09:09.787846 kubelet[2608]: E0508 00:09:09.787820 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:09:09.788709 containerd[1505]: time="2025-05-08T00:09:09.788681060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t4grq,Uid:08fdba5b-86cd-42fd-89b0-e9a10ac8f063,Namespace:kube-system,Attempt:5,}" May 8 00:09:09.790627 kubelet[2608]: I0508 00:09:09.790566 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="370ec9cd93d24149ed7966601d74dd5891ad53aba009412149339f104af553e9" May 8 00:09:09.791079 containerd[1505]: time="2025-05-08T00:09:09.791035244Z" level=info msg="StopPodSandbox for \"370ec9cd93d24149ed7966601d74dd5891ad53aba009412149339f104af553e9\"" May 8 00:09:09.791923 containerd[1505]: time="2025-05-08T00:09:09.791726442Z" level=info msg="Ensure that sandbox 370ec9cd93d24149ed7966601d74dd5891ad53aba009412149339f104af553e9 in task-service has been cleanup successfully" May 8 00:09:09.791923 containerd[1505]: time="2025-05-08T00:09:09.791983826Z" level=info msg="TearDown network for sandbox \"370ec9cd93d24149ed7966601d74dd5891ad53aba009412149339f104af553e9\" successfully" May 8 00:09:09.791923 containerd[1505]: time="2025-05-08T00:09:09.791999886Z" level=info msg="StopPodSandbox for \"370ec9cd93d24149ed7966601d74dd5891ad53aba009412149339f104af553e9\" returns successfully" May 8 00:09:09.792406 containerd[1505]: time="2025-05-08T00:09:09.792377305Z" level=info msg="StopPodSandbox for \"d772c7a0c77343cfd6b7b8001457c3fc80a71719d98f08ac8e8f09fa337c9452\"" May 8 00:09:09.792753 containerd[1505]: time="2025-05-08T00:09:09.792574576Z" level=info msg="TearDown network 
for sandbox \"d772c7a0c77343cfd6b7b8001457c3fc80a71719d98f08ac8e8f09fa337c9452\" successfully" May 8 00:09:09.792753 containerd[1505]: time="2025-05-08T00:09:09.792667651Z" level=info msg="StopPodSandbox for \"d772c7a0c77343cfd6b7b8001457c3fc80a71719d98f08ac8e8f09fa337c9452\" returns successfully" May 8 00:09:09.793205 containerd[1505]: time="2025-05-08T00:09:09.793014043Z" level=info msg="StopPodSandbox for \"689f2b83f25ea02990567bd69e744cf6e584c080f266c411aaba192895a5ce6c\"" May 8 00:09:09.793205 containerd[1505]: time="2025-05-08T00:09:09.793125802Z" level=info msg="TearDown network for sandbox \"689f2b83f25ea02990567bd69e744cf6e584c080f266c411aaba192895a5ce6c\" successfully" May 8 00:09:09.793205 containerd[1505]: time="2025-05-08T00:09:09.793141061Z" level=info msg="StopPodSandbox for \"689f2b83f25ea02990567bd69e744cf6e584c080f266c411aaba192895a5ce6c\" returns successfully" May 8 00:09:09.793615 containerd[1505]: time="2025-05-08T00:09:09.793561521Z" level=info msg="StopPodSandbox for \"db91feb850b4e1f1d819229cd60b7db534413a9d78dc2dac7f000587024c68cc\"" May 8 00:09:09.793793 containerd[1505]: time="2025-05-08T00:09:09.793690764Z" level=info msg="TearDown network for sandbox \"db91feb850b4e1f1d819229cd60b7db534413a9d78dc2dac7f000587024c68cc\" successfully" May 8 00:09:09.793793 containerd[1505]: time="2025-05-08T00:09:09.793717384Z" level=info msg="StopPodSandbox for \"db91feb850b4e1f1d819229cd60b7db534413a9d78dc2dac7f000587024c68cc\" returns successfully" May 8 00:09:09.793890 kubelet[2608]: I0508 00:09:09.793846 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c204cb226d4da609e269577cbb2d6ebe762ce1fb1d06a5db690f645c2d5c762" May 8 00:09:09.794466 containerd[1505]: time="2025-05-08T00:09:09.794416417Z" level=info msg="StopPodSandbox for \"6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db\"" May 8 00:09:09.794533 containerd[1505]: time="2025-05-08T00:09:09.794523438Z" level=info msg="TearDown network for 
sandbox \"6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db\" successfully" May 8 00:09:09.794569 containerd[1505]: time="2025-05-08T00:09:09.794534398Z" level=info msg="StopPodSandbox for \"6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db\" returns successfully" May 8 00:09:09.794716 containerd[1505]: time="2025-05-08T00:09:09.794569435Z" level=info msg="StopPodSandbox for \"4c204cb226d4da609e269577cbb2d6ebe762ce1fb1d06a5db690f645c2d5c762\"" May 8 00:09:09.794897 containerd[1505]: time="2025-05-08T00:09:09.794856574Z" level=info msg="Ensure that sandbox 4c204cb226d4da609e269577cbb2d6ebe762ce1fb1d06a5db690f645c2d5c762 in task-service has been cleanup successfully" May 8 00:09:09.795482 containerd[1505]: time="2025-05-08T00:09:09.795331817Z" level=info msg="TearDown network for sandbox \"4c204cb226d4da609e269577cbb2d6ebe762ce1fb1d06a5db690f645c2d5c762\" successfully" May 8 00:09:09.795482 containerd[1505]: time="2025-05-08T00:09:09.795352406Z" level=info msg="StopPodSandbox for \"4c204cb226d4da609e269577cbb2d6ebe762ce1fb1d06a5db690f645c2d5c762\" returns successfully" May 8 00:09:09.795482 containerd[1505]: time="2025-05-08T00:09:09.795378885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d66hf,Uid:d6b5bca2-fe34-4d13-a1a5-1648d982e2b2,Namespace:calico-system,Attempt:5,}" May 8 00:09:09.795779 containerd[1505]: time="2025-05-08T00:09:09.795747698Z" level=info msg="StopPodSandbox for \"13acd4157037be1ff3f8af3f021c36bc8a063fe31fafa01f4e0fc636376bc660\"" May 8 00:09:09.795902 containerd[1505]: time="2025-05-08T00:09:09.795834672Z" level=info msg="TearDown network for sandbox \"13acd4157037be1ff3f8af3f021c36bc8a063fe31fafa01f4e0fc636376bc660\" successfully" May 8 00:09:09.795902 containerd[1505]: time="2025-05-08T00:09:09.795851414Z" level=info msg="StopPodSandbox for \"13acd4157037be1ff3f8af3f021c36bc8a063fe31fafa01f4e0fc636376bc660\" returns successfully" May 8 00:09:09.796325 containerd[1505]: 
time="2025-05-08T00:09:09.796120820Z" level=info msg="StopPodSandbox for \"1bbdd8a2f6596e6a701daf7f298fdbe98e948568198f92d4322c89321c4cd110\"" May 8 00:09:09.796325 containerd[1505]: time="2025-05-08T00:09:09.796243590Z" level=info msg="TearDown network for sandbox \"1bbdd8a2f6596e6a701daf7f298fdbe98e948568198f92d4322c89321c4cd110\" successfully" May 8 00:09:09.796325 containerd[1505]: time="2025-05-08T00:09:09.796257797Z" level=info msg="StopPodSandbox for \"1bbdd8a2f6596e6a701daf7f298fdbe98e948568198f92d4322c89321c4cd110\" returns successfully" May 8 00:09:09.797535 containerd[1505]: time="2025-05-08T00:09:09.797506363Z" level=info msg="StopPodSandbox for \"2fb76e3590e1896ad63800ee92649bab322bb9f1d0aaed8344912a62e5f4accf\"" May 8 00:09:09.797725 containerd[1505]: time="2025-05-08T00:09:09.797661925Z" level=info msg="TearDown network for sandbox \"2fb76e3590e1896ad63800ee92649bab322bb9f1d0aaed8344912a62e5f4accf\" successfully" May 8 00:09:09.797725 containerd[1505]: time="2025-05-08T00:09:09.797685109Z" level=info msg="StopPodSandbox for \"2fb76e3590e1896ad63800ee92649bab322bb9f1d0aaed8344912a62e5f4accf\" returns successfully" May 8 00:09:09.797961 containerd[1505]: time="2025-05-08T00:09:09.797929478Z" level=info msg="StopPodSandbox for \"b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046\"" May 8 00:09:09.798058 containerd[1505]: time="2025-05-08T00:09:09.798039335Z" level=info msg="TearDown network for sandbox \"b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046\" successfully" May 8 00:09:09.798058 containerd[1505]: time="2025-05-08T00:09:09.798054794Z" level=info msg="StopPodSandbox for \"b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046\" returns successfully" May 8 00:09:09.798290 kubelet[2608]: I0508 00:09:09.798210 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef5994fc67f1a89100c26df9f9425d5b19bb3f85bd0f46b4491f35f38af8d835" May 8 00:09:09.798290 kubelet[2608]: E0508 
00:09:09.798252 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:09:09.798570 containerd[1505]: time="2025-05-08T00:09:09.798531489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-w95np,Uid:5bca3bbf-f7d4-44b0-9686-15081255aefa,Namespace:kube-system,Attempt:5,}"
May 8 00:09:09.798646 containerd[1505]: time="2025-05-08T00:09:09.798623723Z" level=info msg="StopPodSandbox for \"ef5994fc67f1a89100c26df9f9425d5b19bb3f85bd0f46b4491f35f38af8d835\""
May 8 00:09:09.798842 containerd[1505]: time="2025-05-08T00:09:09.798823367Z" level=info msg="Ensure that sandbox ef5994fc67f1a89100c26df9f9425d5b19bb3f85bd0f46b4491f35f38af8d835 in task-service has been cleanup successfully"
May 8 00:09:09.799123 containerd[1505]: time="2025-05-08T00:09:09.799030216Z" level=info msg="TearDown network for sandbox \"ef5994fc67f1a89100c26df9f9425d5b19bb3f85bd0f46b4491f35f38af8d835\" successfully"
May 8 00:09:09.799123 containerd[1505]: time="2025-05-08T00:09:09.799052819Z" level=info msg="StopPodSandbox for \"ef5994fc67f1a89100c26df9f9425d5b19bb3f85bd0f46b4491f35f38af8d835\" returns successfully"
May 8 00:09:09.799802 containerd[1505]: time="2025-05-08T00:09:09.799762642Z" level=info msg="StopPodSandbox for \"8c63da5736498491a842025315c3885e9be8504c5605d90888bd11017e30aff3\""
May 8 00:09:09.799932 containerd[1505]: time="2025-05-08T00:09:09.799882227Z" level=info msg="TearDown network for sandbox \"8c63da5736498491a842025315c3885e9be8504c5605d90888bd11017e30aff3\" successfully"
May 8 00:09:09.799932 containerd[1505]: time="2025-05-08T00:09:09.799896945Z" level=info msg="StopPodSandbox for \"8c63da5736498491a842025315c3885e9be8504c5605d90888bd11017e30aff3\" returns successfully"
May 8 00:09:09.800734 containerd[1505]: time="2025-05-08T00:09:09.800457869Z" level=info msg="StopPodSandbox for \"1d441a98e9a13d3b85839cd4fea0823d0cafc6e4b9a8fd62f7f0d4f65b1bfb05\""
May 8 00:09:09.800734 containerd[1505]: time="2025-05-08T00:09:09.800576171Z" level=info msg="TearDown network for sandbox \"1d441a98e9a13d3b85839cd4fea0823d0cafc6e4b9a8fd62f7f0d4f65b1bfb05\" successfully"
May 8 00:09:09.800734 containerd[1505]: time="2025-05-08T00:09:09.800632807Z" level=info msg="StopPodSandbox for \"1d441a98e9a13d3b85839cd4fea0823d0cafc6e4b9a8fd62f7f0d4f65b1bfb05\" returns successfully"
May 8 00:09:09.801405 containerd[1505]: time="2025-05-08T00:09:09.801270054Z" level=info msg="StopPodSandbox for \"c458928b1f50c71b1ea5c82db22924cd269793941f36623a768c0b55ce0ab1f8\""
May 8 00:09:09.801906 containerd[1505]: time="2025-05-08T00:09:09.801887184Z" level=info msg="TearDown network for sandbox \"c458928b1f50c71b1ea5c82db22924cd269793941f36623a768c0b55ce0ab1f8\" successfully"
May 8 00:09:09.802026 containerd[1505]: time="2025-05-08T00:09:09.801981382Z" level=info msg="StopPodSandbox for \"c458928b1f50c71b1ea5c82db22924cd269793941f36623a768c0b55ce0ab1f8\" returns successfully"
May 8 00:09:09.983676 containerd[1505]: time="2025-05-08T00:09:09.983626393Z" level=info msg="StopPodSandbox for \"40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195\""
May 8 00:09:09.984605 containerd[1505]: time="2025-05-08T00:09:09.984278318Z" level=info msg="TearDown network for sandbox \"40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195\" successfully"
May 8 00:09:09.984605 containerd[1505]: time="2025-05-08T00:09:09.984337268Z" level=info msg="StopPodSandbox for \"40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195\" returns successfully"
May 8 00:09:09.985381 containerd[1505]: time="2025-05-08T00:09:09.985353016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57974f499f-vv54k,Uid:5f9ac193-cc58-42e7-b80a-b5e62d33d96a,Namespace:calico-system,Attempt:5,}"
May 8 00:09:09.987788 kubelet[2608]: I0508 00:09:09.987722 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e57f8a6ac7b56a5028e70fc59134f6c8496be523a151d4a628eb619a8f40edba"
May 8 00:09:09.988668 containerd[1505]: time="2025-05-08T00:09:09.988628371Z" level=info msg="StopPodSandbox for \"e57f8a6ac7b56a5028e70fc59134f6c8496be523a151d4a628eb619a8f40edba\""
May 8 00:09:09.990574 containerd[1505]: time="2025-05-08T00:09:09.990354454Z" level=info msg="Ensure that sandbox e57f8a6ac7b56a5028e70fc59134f6c8496be523a151d4a628eb619a8f40edba in task-service has been cleanup successfully"
May 8 00:09:09.990574 containerd[1505]: time="2025-05-08T00:09:09.990748805Z" level=info msg="TearDown network for sandbox \"e57f8a6ac7b56a5028e70fc59134f6c8496be523a151d4a628eb619a8f40edba\" successfully"
May 8 00:09:09.990574 containerd[1505]: time="2025-05-08T00:09:09.990766068Z" level=info msg="StopPodSandbox for \"e57f8a6ac7b56a5028e70fc59134f6c8496be523a151d4a628eb619a8f40edba\" returns successfully"
May 8 00:09:09.991766 containerd[1505]: time="2025-05-08T00:09:09.991712155Z" level=info msg="StopPodSandbox for \"e0740c57053fe83f05b37b28257f0d6e54fe96268696d30bb86bfa3886ae20cb\""
May 8 00:09:09.991915 containerd[1505]: time="2025-05-08T00:09:09.991873698Z" level=info msg="TearDown network for sandbox \"e0740c57053fe83f05b37b28257f0d6e54fe96268696d30bb86bfa3886ae20cb\" successfully"
May 8 00:09:09.991915 containerd[1505]: time="2025-05-08T00:09:09.991892003Z" level=info msg="StopPodSandbox for \"e0740c57053fe83f05b37b28257f0d6e54fe96268696d30bb86bfa3886ae20cb\" returns successfully"
May 8 00:09:09.993498 containerd[1505]: time="2025-05-08T00:09:09.993197035Z" level=info msg="StopPodSandbox for \"fdd6da4d20ec4df27f055121036bf7c97190962217a5cdf5866291e449fb9d3a\""
May 8 00:09:09.993498 containerd[1505]: time="2025-05-08T00:09:09.993378937Z" level=info msg="TearDown network for sandbox \"fdd6da4d20ec4df27f055121036bf7c97190962217a5cdf5866291e449fb9d3a\" successfully"
May 8 00:09:09.993498 containerd[1505]: time="2025-05-08T00:09:09.993395558Z" level=info msg="StopPodSandbox for \"fdd6da4d20ec4df27f055121036bf7c97190962217a5cdf5866291e449fb9d3a\" returns successfully"
May 8 00:09:09.993907 containerd[1505]: time="2025-05-08T00:09:09.993860392Z" level=info msg="StopPodSandbox for \"1ba710607968908ebb58526d0d4f5246298a90d1ced3694b2a7eb4c7fdfd43ba\""
May 8 00:09:09.994016 containerd[1505]: time="2025-05-08T00:09:09.993989845Z" level=info msg="TearDown network for sandbox \"1ba710607968908ebb58526d0d4f5246298a90d1ced3694b2a7eb4c7fdfd43ba\" successfully"
May 8 00:09:09.994016 containerd[1505]: time="2025-05-08T00:09:09.994011495Z" level=info msg="StopPodSandbox for \"1ba710607968908ebb58526d0d4f5246298a90d1ced3694b2a7eb4c7fdfd43ba\" returns successfully"
May 8 00:09:09.994519 containerd[1505]: time="2025-05-08T00:09:09.994316248Z" level=info msg="StopPodSandbox for \"9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce\""
May 8 00:09:09.994519 containerd[1505]: time="2025-05-08T00:09:09.994430964Z" level=info msg="TearDown network for sandbox \"9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce\" successfully"
May 8 00:09:09.994519 containerd[1505]: time="2025-05-08T00:09:09.994448196Z" level=info msg="StopPodSandbox for \"9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce\" returns successfully"
May 8 00:09:09.995285 containerd[1505]: time="2025-05-08T00:09:09.995252377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc8f4fc5f-xrvcn,Uid:b95f31cd-1dee-4344-bda0-406b9d8df019,Namespace:calico-apiserver,Attempt:5,}"
May 8 00:09:10.092770 systemd[1]: run-netns-cni\x2d7da5cbe5\x2d5c96\x2d3cf4\x2d88ea\x2d8f9e1af2cd7b.mount: Deactivated successfully.
May 8 00:09:10.092924 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c6d2e3db71c4cf3281e38c6a06cc3c89328d2f643cddac8b9aa1f2619f56276a-shm.mount: Deactivated successfully.
May 8 00:09:10.093079 systemd[1]: run-netns-cni\x2d96115e88\x2d247e\x2de128\x2d4b39\x2d06e0491020e6.mount: Deactivated successfully.
May 8 00:09:10.093201 systemd[1]: run-netns-cni\x2dcac16384\x2d65ed\x2d1478\x2d0990\x2d5942c683cb63.mount: Deactivated successfully.
May 8 00:09:10.093541 systemd[1]: run-netns-cni\x2da19b1c78\x2d4204\x2d22dd\x2d31e4\x2d2e45b2731aa6.mount: Deactivated successfully.
May 8 00:09:10.093688 systemd[1]: run-netns-cni\x2daca49ca0\x2d9a1b\x2d63c4\x2d2798\x2d1ef14a844e3b.mount: Deactivated successfully.
May 8 00:09:10.093805 systemd[1]: run-netns-cni\x2d43d9b1ba\x2d261d\x2d5849\x2d1ac5\x2dfd5cb4aa5ad6.mount: Deactivated successfully.
May 8 00:09:10.093971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2383718305.mount: Deactivated successfully.
May 8 00:09:10.991673 kubelet[2608]: E0508 00:09:10.991470 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:09:11.087900 systemd-networkd[1423]: cali26cfaecdc60: Link UP
May 8 00:09:11.088183 systemd-networkd[1423]: cali26cfaecdc60: Gained carrier
May 8 00:09:11.255873 kubelet[2608]: I0508 00:09:11.255704 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mx9z2" podStartSLOduration=2.615449416 podStartE2EDuration="22.255675182s" podCreationTimestamp="2025-05-08 00:08:49 +0000 UTC" firstStartedPulling="2025-05-08 00:08:49.679329953 +0000 UTC m=+11.144157570" lastFinishedPulling="2025-05-08 00:09:09.319555719 +0000 UTC m=+30.784383336" observedRunningTime="2025-05-08 00:09:09.992524491 +0000 UTC m=+31.457352108" watchObservedRunningTime="2025-05-08 00:09:11.255675182 +0000 UTC m=+32.720502799"
May 8 00:09:11.258614 containerd[1505]: 2025-05-08 00:09:10.296 [INFO][4544] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
May 8 00:09:11.258614 containerd[1505]: 2025-05-08 00:09:10.311 [INFO][4544] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--t4grq-eth0 coredns-6f6b679f8f- kube-system 08fdba5b-86cd-42fd-89b0-e9a10ac8f063 694 0 2025-05-08 00:08:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-t4grq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali26cfaecdc60 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2476b548ba3a2f40b37946552ae65683ea6911c0f41b8ebda7a3407c7d6aaf9d" Namespace="kube-system" Pod="coredns-6f6b679f8f-t4grq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t4grq-"
May 8 00:09:11.258614 containerd[1505]: 2025-05-08 00:09:10.311 [INFO][4544] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2476b548ba3a2f40b37946552ae65683ea6911c0f41b8ebda7a3407c7d6aaf9d" Namespace="kube-system" Pod="coredns-6f6b679f8f-t4grq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t4grq-eth0"
May 8 00:09:11.258614 containerd[1505]: 2025-05-08 00:09:10.418 [INFO][4558] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2476b548ba3a2f40b37946552ae65683ea6911c0f41b8ebda7a3407c7d6aaf9d" HandleID="k8s-pod-network.2476b548ba3a2f40b37946552ae65683ea6911c0f41b8ebda7a3407c7d6aaf9d" Workload="localhost-k8s-coredns--6f6b679f8f--t4grq-eth0"
May 8 00:09:11.258614 containerd[1505]: 2025-05-08 00:09:10.816 [INFO][4558] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2476b548ba3a2f40b37946552ae65683ea6911c0f41b8ebda7a3407c7d6aaf9d" HandleID="k8s-pod-network.2476b548ba3a2f40b37946552ae65683ea6911c0f41b8ebda7a3407c7d6aaf9d" Workload="localhost-k8s-coredns--6f6b679f8f--t4grq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039fa80), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-t4grq", "timestamp":"2025-05-08 00:09:10.418010673 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 8 00:09:11.258614 containerd[1505]: 2025-05-08 00:09:10.816 [INFO][4558] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 8 00:09:11.258614 containerd[1505]: 2025-05-08 00:09:10.816 [INFO][4558] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:09:11.258614 containerd[1505]: 2025-05-08 00:09:10.816 [INFO][4558] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
May 8 00:09:11.258614 containerd[1505]: 2025-05-08 00:09:10.820 [INFO][4558] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2476b548ba3a2f40b37946552ae65683ea6911c0f41b8ebda7a3407c7d6aaf9d" host="localhost"
May 8 00:09:11.258614 containerd[1505]: 2025-05-08 00:09:10.829 [INFO][4558] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
May 8 00:09:11.258614 containerd[1505]: 2025-05-08 00:09:10.837 [INFO][4558] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
May 8 00:09:11.258614 containerd[1505]: 2025-05-08 00:09:10.840 [INFO][4558] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
May 8 00:09:11.258614 containerd[1505]: 2025-05-08 00:09:10.842 [INFO][4558] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
May 8 00:09:11.258614 containerd[1505]: 2025-05-08 00:09:10.843 [INFO][4558] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2476b548ba3a2f40b37946552ae65683ea6911c0f41b8ebda7a3407c7d6aaf9d" host="localhost"
May 8 00:09:11.258614 containerd[1505]: 2025-05-08 00:09:10.845 [INFO][4558] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2476b548ba3a2f40b37946552ae65683ea6911c0f41b8ebda7a3407c7d6aaf9d
May 8 00:09:11.258614 containerd[1505]: 2025-05-08 00:09:10.916 [INFO][4558] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2476b548ba3a2f40b37946552ae65683ea6911c0f41b8ebda7a3407c7d6aaf9d" host="localhost"
May 8 00:09:11.258614 containerd[1505]: 2025-05-08 00:09:10.984 [INFO][4558] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.2476b548ba3a2f40b37946552ae65683ea6911c0f41b8ebda7a3407c7d6aaf9d" host="localhost"
May 8 00:09:11.258614 containerd[1505]: 2025-05-08 00:09:10.984 [INFO][4558] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.2476b548ba3a2f40b37946552ae65683ea6911c0f41b8ebda7a3407c7d6aaf9d" host="localhost"
May 8 00:09:11.258614 containerd[1505]: 2025-05-08 00:09:10.985 [INFO][4558] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:09:11.258614 containerd[1505]: 2025-05-08 00:09:10.985 [INFO][4558] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="2476b548ba3a2f40b37946552ae65683ea6911c0f41b8ebda7a3407c7d6aaf9d" HandleID="k8s-pod-network.2476b548ba3a2f40b37946552ae65683ea6911c0f41b8ebda7a3407c7d6aaf9d" Workload="localhost-k8s-coredns--6f6b679f8f--t4grq-eth0"
May 8 00:09:11.259547 containerd[1505]: 2025-05-08 00:09:10.989 [INFO][4544] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2476b548ba3a2f40b37946552ae65683ea6911c0f41b8ebda7a3407c7d6aaf9d" Namespace="kube-system" Pod="coredns-6f6b679f8f-t4grq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t4grq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--t4grq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"08fdba5b-86cd-42fd-89b0-e9a10ac8f063", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 8, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-t4grq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26cfaecdc60", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:09:11.259547 containerd[1505]: 2025-05-08 00:09:10.989 [INFO][4544] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="2476b548ba3a2f40b37946552ae65683ea6911c0f41b8ebda7a3407c7d6aaf9d" Namespace="kube-system" Pod="coredns-6f6b679f8f-t4grq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t4grq-eth0"
May 8 00:09:11.259547 containerd[1505]: 2025-05-08 00:09:10.989 [INFO][4544] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26cfaecdc60 ContainerID="2476b548ba3a2f40b37946552ae65683ea6911c0f41b8ebda7a3407c7d6aaf9d" Namespace="kube-system" Pod="coredns-6f6b679f8f-t4grq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t4grq-eth0"
May 8 00:09:11.259547 containerd[1505]: 2025-05-08 00:09:11.085 [INFO][4544] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2476b548ba3a2f40b37946552ae65683ea6911c0f41b8ebda7a3407c7d6aaf9d" Namespace="kube-system" Pod="coredns-6f6b679f8f-t4grq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t4grq-eth0"
May 8 00:09:11.259547 containerd[1505]: 2025-05-08 00:09:11.086 [INFO][4544] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2476b548ba3a2f40b37946552ae65683ea6911c0f41b8ebda7a3407c7d6aaf9d" Namespace="kube-system" Pod="coredns-6f6b679f8f-t4grq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t4grq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--t4grq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"08fdba5b-86cd-42fd-89b0-e9a10ac8f063", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 8, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2476b548ba3a2f40b37946552ae65683ea6911c0f41b8ebda7a3407c7d6aaf9d", Pod:"coredns-6f6b679f8f-t4grq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26cfaecdc60", MAC:"ea:94:37:b4:26:87", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:09:11.259547 containerd[1505]: 2025-05-08 00:09:11.255 [INFO][4544] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2476b548ba3a2f40b37946552ae65683ea6911c0f41b8ebda7a3407c7d6aaf9d" Namespace="kube-system" Pod="coredns-6f6b679f8f-t4grq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t4grq-eth0"
May 8 00:09:11.332624 kernel: bpftool[4756]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
May 8 00:09:11.365761 containerd[1505]: time="2025-05-08T00:09:11.364816540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:09:11.365761 containerd[1505]: time="2025-05-08T00:09:11.365707834Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:09:11.366068 containerd[1505]: time="2025-05-08T00:09:11.365742088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:09:11.366207 containerd[1505]: time="2025-05-08T00:09:11.366175162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:09:11.388734 systemd[1]: Started cri-containerd-2476b548ba3a2f40b37946552ae65683ea6911c0f41b8ebda7a3407c7d6aaf9d.scope - libcontainer container 2476b548ba3a2f40b37946552ae65683ea6911c0f41b8ebda7a3407c7d6aaf9d.
May 8 00:09:11.409655 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 8 00:09:11.496376 containerd[1505]: time="2025-05-08T00:09:11.496179633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t4grq,Uid:08fdba5b-86cd-42fd-89b0-e9a10ac8f063,Namespace:kube-system,Attempt:5,} returns sandbox id \"2476b548ba3a2f40b37946552ae65683ea6911c0f41b8ebda7a3407c7d6aaf9d\""
May 8 00:09:11.498014 kubelet[2608]: E0508 00:09:11.497552 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:09:11.502789 containerd[1505]: time="2025-05-08T00:09:11.502686456Z" level=info msg="CreateContainer within sandbox \"2476b548ba3a2f40b37946552ae65683ea6911c0f41b8ebda7a3407c7d6aaf9d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 8 00:09:11.541552 containerd[1505]: time="2025-05-08T00:09:11.541431410Z" level=info msg="CreateContainer within sandbox \"2476b548ba3a2f40b37946552ae65683ea6911c0f41b8ebda7a3407c7d6aaf9d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"698b3a6f7b02281260912d0c710f4bdf36c85b93fa43ea1afcb2e6c2198af8b6\""
May 8 00:09:11.547352 containerd[1505]: time="2025-05-08T00:09:11.547284706Z" level=info msg="StartContainer for \"698b3a6f7b02281260912d0c710f4bdf36c85b93fa43ea1afcb2e6c2198af8b6\""
May 8 00:09:11.618814 systemd[1]: Started cri-containerd-698b3a6f7b02281260912d0c710f4bdf36c85b93fa43ea1afcb2e6c2198af8b6.scope - libcontainer container 698b3a6f7b02281260912d0c710f4bdf36c85b93fa43ea1afcb2e6c2198af8b6.
May 8 00:09:11.676174 systemd-networkd[1423]: cali105f45861a8: Link UP
May 8 00:09:11.677667 systemd-networkd[1423]: cali105f45861a8: Gained carrier
May 8 00:09:11.680927 containerd[1505]: time="2025-05-08T00:09:11.680872408Z" level=info msg="StartContainer for \"698b3a6f7b02281260912d0c710f4bdf36c85b93fa43ea1afcb2e6c2198af8b6\" returns successfully"
May 8 00:09:11.702361 containerd[1505]: 2025-05-08 00:09:11.353 [INFO][4726] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--bc8f4fc5f--nrdtq-eth0 calico-apiserver-bc8f4fc5f- calico-apiserver d2fb4029-1146-49b1-8115-09528e7b165f 695 0 2025-05-08 00:08:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bc8f4fc5f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-bc8f4fc5f-nrdtq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali105f45861a8 [] []}} ContainerID="1b886daf757feab41d4b7dedf2a0bce4e49d25820a98c0e2267552a6700fc9eb" Namespace="calico-apiserver" Pod="calico-apiserver-bc8f4fc5f-nrdtq" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc8f4fc5f--nrdtq-"
May 8 00:09:11.702361 containerd[1505]: 2025-05-08 00:09:11.353 [INFO][4726] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1b886daf757feab41d4b7dedf2a0bce4e49d25820a98c0e2267552a6700fc9eb" Namespace="calico-apiserver" Pod="calico-apiserver-bc8f4fc5f-nrdtq" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc8f4fc5f--nrdtq-eth0"
May 8 00:09:11.702361 containerd[1505]: 2025-05-08 00:09:11.518 [INFO][4810] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b886daf757feab41d4b7dedf2a0bce4e49d25820a98c0e2267552a6700fc9eb" HandleID="k8s-pod-network.1b886daf757feab41d4b7dedf2a0bce4e49d25820a98c0e2267552a6700fc9eb" Workload="localhost-k8s-calico--apiserver--bc8f4fc5f--nrdtq-eth0"
May 8 00:09:11.702361 containerd[1505]: 2025-05-08 00:09:11.542 [INFO][4810] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1b886daf757feab41d4b7dedf2a0bce4e49d25820a98c0e2267552a6700fc9eb" HandleID="k8s-pod-network.1b886daf757feab41d4b7dedf2a0bce4e49d25820a98c0e2267552a6700fc9eb" Workload="localhost-k8s-calico--apiserver--bc8f4fc5f--nrdtq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003773d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-bc8f4fc5f-nrdtq", "timestamp":"2025-05-08 00:09:11.518479023 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 8 00:09:11.702361 containerd[1505]: 2025-05-08 00:09:11.543 [INFO][4810] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 8 00:09:11.702361 containerd[1505]: 2025-05-08 00:09:11.543 [INFO][4810] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:09:11.702361 containerd[1505]: 2025-05-08 00:09:11.543 [INFO][4810] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
May 8 00:09:11.702361 containerd[1505]: 2025-05-08 00:09:11.550 [INFO][4810] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1b886daf757feab41d4b7dedf2a0bce4e49d25820a98c0e2267552a6700fc9eb" host="localhost"
May 8 00:09:11.702361 containerd[1505]: 2025-05-08 00:09:11.636 [INFO][4810] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
May 8 00:09:11.702361 containerd[1505]: 2025-05-08 00:09:11.649 [INFO][4810] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
May 8 00:09:11.702361 containerd[1505]: 2025-05-08 00:09:11.651 [INFO][4810] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
May 8 00:09:11.702361 containerd[1505]: 2025-05-08 00:09:11.653 [INFO][4810] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
May 8 00:09:11.702361 containerd[1505]: 2025-05-08 00:09:11.653 [INFO][4810] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1b886daf757feab41d4b7dedf2a0bce4e49d25820a98c0e2267552a6700fc9eb" host="localhost"
May 8 00:09:11.702361 containerd[1505]: 2025-05-08 00:09:11.655 [INFO][4810] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1b886daf757feab41d4b7dedf2a0bce4e49d25820a98c0e2267552a6700fc9eb
May 8 00:09:11.702361 containerd[1505]: 2025-05-08 00:09:11.662 [INFO][4810] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1b886daf757feab41d4b7dedf2a0bce4e49d25820a98c0e2267552a6700fc9eb" host="localhost"
May 8 00:09:11.702361 containerd[1505]: 2025-05-08 00:09:11.668 [INFO][4810] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.1b886daf757feab41d4b7dedf2a0bce4e49d25820a98c0e2267552a6700fc9eb" host="localhost"
May 8 00:09:11.702361 containerd[1505]: 2025-05-08 00:09:11.668 [INFO][4810] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.1b886daf757feab41d4b7dedf2a0bce4e49d25820a98c0e2267552a6700fc9eb" host="localhost"
May 8 00:09:11.702361 containerd[1505]: 2025-05-08 00:09:11.668 [INFO][4810] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:09:11.702361 containerd[1505]: 2025-05-08 00:09:11.669 [INFO][4810] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="1b886daf757feab41d4b7dedf2a0bce4e49d25820a98c0e2267552a6700fc9eb" HandleID="k8s-pod-network.1b886daf757feab41d4b7dedf2a0bce4e49d25820a98c0e2267552a6700fc9eb" Workload="localhost-k8s-calico--apiserver--bc8f4fc5f--nrdtq-eth0"
May 8 00:09:11.703144 containerd[1505]: 2025-05-08 00:09:11.673 [INFO][4726] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1b886daf757feab41d4b7dedf2a0bce4e49d25820a98c0e2267552a6700fc9eb" Namespace="calico-apiserver" Pod="calico-apiserver-bc8f4fc5f-nrdtq" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc8f4fc5f--nrdtq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bc8f4fc5f--nrdtq-eth0", GenerateName:"calico-apiserver-bc8f4fc5f-", Namespace:"calico-apiserver", SelfLink:"", UID:"d2fb4029-1146-49b1-8115-09528e7b165f", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bc8f4fc5f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-bc8f4fc5f-nrdtq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali105f45861a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:09:11.703144 containerd[1505]: 2025-05-08 00:09:11.674 [INFO][4726] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="1b886daf757feab41d4b7dedf2a0bce4e49d25820a98c0e2267552a6700fc9eb" Namespace="calico-apiserver" Pod="calico-apiserver-bc8f4fc5f-nrdtq" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc8f4fc5f--nrdtq-eth0"
May 8 00:09:11.703144 containerd[1505]: 2025-05-08 00:09:11.674 [INFO][4726] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali105f45861a8 ContainerID="1b886daf757feab41d4b7dedf2a0bce4e49d25820a98c0e2267552a6700fc9eb" Namespace="calico-apiserver" Pod="calico-apiserver-bc8f4fc5f-nrdtq" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc8f4fc5f--nrdtq-eth0"
May 8 00:09:11.703144 containerd[1505]: 2025-05-08 00:09:11.677 [INFO][4726] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b886daf757feab41d4b7dedf2a0bce4e49d25820a98c0e2267552a6700fc9eb" Namespace="calico-apiserver" Pod="calico-apiserver-bc8f4fc5f-nrdtq" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc8f4fc5f--nrdtq-eth0"
May 8 00:09:11.703144 containerd[1505]: 2025-05-08 00:09:11.678 [INFO][4726] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1b886daf757feab41d4b7dedf2a0bce4e49d25820a98c0e2267552a6700fc9eb" Namespace="calico-apiserver" Pod="calico-apiserver-bc8f4fc5f-nrdtq" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc8f4fc5f--nrdtq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bc8f4fc5f--nrdtq-eth0", GenerateName:"calico-apiserver-bc8f4fc5f-", Namespace:"calico-apiserver", SelfLink:"", UID:"d2fb4029-1146-49b1-8115-09528e7b165f", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bc8f4fc5f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1b886daf757feab41d4b7dedf2a0bce4e49d25820a98c0e2267552a6700fc9eb", Pod:"calico-apiserver-bc8f4fc5f-nrdtq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali105f45861a8", MAC:"d6:81:00:9f:c3:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:09:11.703144 containerd[1505]: 2025-05-08 00:09:11.698 [INFO][4726] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1b886daf757feab41d4b7dedf2a0bce4e49d25820a98c0e2267552a6700fc9eb" Namespace="calico-apiserver" Pod="calico-apiserver-bc8f4fc5f-nrdtq" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc8f4fc5f--nrdtq-eth0"
May 8 00:09:11.725648 systemd-networkd[1423]: vxlan.calico: Link UP
May 8 00:09:11.725662 systemd-networkd[1423]: vxlan.calico: Gained carrier
May 8 00:09:11.786346 containerd[1505]: time="2025-05-08T00:09:11.786146571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:09:11.786563 systemd-networkd[1423]: cali8adf08ad435: Link UP
May 8 00:09:11.788097 containerd[1505]: time="2025-05-08T00:09:11.787627062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:09:11.788097 containerd[1505]: time="2025-05-08T00:09:11.787652730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:09:11.788097 containerd[1505]: time="2025-05-08T00:09:11.787756115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:09:11.789244 systemd-networkd[1423]: cali8adf08ad435: Gained carrier
May 8 00:09:11.807705 containerd[1505]: 2025-05-08 00:09:11.353 [INFO][4737] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--d66hf-eth0 csi-node-driver- calico-system d6b5bca2-fe34-4d13-a1a5-1648d982e2b2 595 0 2025-05-08 00:08:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-d66hf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8adf08ad435 [] []}} ContainerID="dcbf4d87acac97596e9eff071cefe0ff04ee8f8128b14165e0f0d226e39c6c59" Namespace="calico-system" Pod="csi-node-driver-d66hf" WorkloadEndpoint="localhost-k8s-csi--node--driver--d66hf-"
May 8 00:09:11.807705 containerd[1505]: 2025-05-08 00:09:11.353 [INFO][4737] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dcbf4d87acac97596e9eff071cefe0ff04ee8f8128b14165e0f0d226e39c6c59" Namespace="calico-system" Pod="csi-node-driver-d66hf" WorkloadEndpoint="localhost-k8s-csi--node--driver--d66hf-eth0"
May 8 00:09:11.807705 containerd[1505]: 2025-05-08 00:09:11.533 [INFO][4817] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dcbf4d87acac97596e9eff071cefe0ff04ee8f8128b14165e0f0d226e39c6c59" HandleID="k8s-pod-network.dcbf4d87acac97596e9eff071cefe0ff04ee8f8128b14165e0f0d226e39c6c59" Workload="localhost-k8s-csi--node--driver--d66hf-eth0"
May 8 00:09:11.807705 containerd[1505]: 2025-05-08 00:09:11.636 [INFO][4817] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dcbf4d87acac97596e9eff071cefe0ff04ee8f8128b14165e0f0d226e39c6c59" HandleID="k8s-pod-network.dcbf4d87acac97596e9eff071cefe0ff04ee8f8128b14165e0f0d226e39c6c59" Workload="localhost-k8s-csi--node--driver--d66hf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000604210), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-d66hf", "timestamp":"2025-05-08 00:09:11.533353856 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 8 00:09:11.807705 containerd[1505]: 2025-05-08 00:09:11.636 [INFO][4817] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 8 00:09:11.807705 containerd[1505]: 2025-05-08 00:09:11.668 [INFO][4817] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:09:11.807705 containerd[1505]: 2025-05-08 00:09:11.669 [INFO][4817] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
May 8 00:09:11.807705 containerd[1505]: 2025-05-08 00:09:11.672 [INFO][4817] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dcbf4d87acac97596e9eff071cefe0ff04ee8f8128b14165e0f0d226e39c6c59" host="localhost"
May 8 00:09:11.807705 containerd[1505]: 2025-05-08 00:09:11.736 [INFO][4817] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
May 8 00:09:11.807705 containerd[1505]: 2025-05-08 00:09:11.744 [INFO][4817] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
May 8 00:09:11.807705 containerd[1505]: 2025-05-08 00:09:11.747 [INFO][4817] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
May 8 00:09:11.807705 containerd[1505]: 2025-05-08 00:09:11.750 [INFO][4817] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
May 8 00:09:11.807705 containerd[1505]: 2025-05-08 00:09:11.750 [INFO][4817] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dcbf4d87acac97596e9eff071cefe0ff04ee8f8128b14165e0f0d226e39c6c59" host="localhost"
May 8 00:09:11.807705 containerd[1505]: 2025-05-08 00:09:11.754 [INFO][4817] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dcbf4d87acac97596e9eff071cefe0ff04ee8f8128b14165e0f0d226e39c6c59
May 8 00:09:11.807705 containerd[1505]: 2025-05-08 00:09:11.760 [INFO][4817] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dcbf4d87acac97596e9eff071cefe0ff04ee8f8128b14165e0f0d226e39c6c59" host="localhost"
May 8 00:09:11.807705 containerd[1505]: 2025-05-08 00:09:11.771 [INFO][4817] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.dcbf4d87acac97596e9eff071cefe0ff04ee8f8128b14165e0f0d226e39c6c59" host="localhost"
May 8 00:09:11.807705 containerd[1505]: 2025-05-08 00:09:11.771 [INFO][4817] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.dcbf4d87acac97596e9eff071cefe0ff04ee8f8128b14165e0f0d226e39c6c59" host="localhost"
May 8 00:09:11.807705 containerd[1505]: 2025-05-08 00:09:11.773 [INFO][4817] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:09:11.807705 containerd[1505]: 2025-05-08 00:09:11.773 [INFO][4817] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="dcbf4d87acac97596e9eff071cefe0ff04ee8f8128b14165e0f0d226e39c6c59" HandleID="k8s-pod-network.dcbf4d87acac97596e9eff071cefe0ff04ee8f8128b14165e0f0d226e39c6c59" Workload="localhost-k8s-csi--node--driver--d66hf-eth0"
May 8 00:09:11.808257 containerd[1505]: 2025-05-08 00:09:11.781 [INFO][4737] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dcbf4d87acac97596e9eff071cefe0ff04ee8f8128b14165e0f0d226e39c6c59" Namespace="calico-system" Pod="csi-node-driver-d66hf" WorkloadEndpoint="localhost-k8s-csi--node--driver--d66hf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--d66hf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d6b5bca2-fe34-4d13-a1a5-1648d982e2b2", ResourceVersion:"595", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 8, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-d66hf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8adf08ad435", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:09:11.808257 containerd[1505]: 2025-05-08 00:09:11.781 [INFO][4737] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="dcbf4d87acac97596e9eff071cefe0ff04ee8f8128b14165e0f0d226e39c6c59" Namespace="calico-system" Pod="csi-node-driver-d66hf" WorkloadEndpoint="localhost-k8s-csi--node--driver--d66hf-eth0"
May 8 00:09:11.808257 containerd[1505]: 2025-05-08 00:09:11.781 [INFO][4737] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8adf08ad435 ContainerID="dcbf4d87acac97596e9eff071cefe0ff04ee8f8128b14165e0f0d226e39c6c59" Namespace="calico-system" Pod="csi-node-driver-d66hf" WorkloadEndpoint="localhost-k8s-csi--node--driver--d66hf-eth0"
May 8 00:09:11.808257 containerd[1505]: 2025-05-08 00:09:11.788 [INFO][4737] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dcbf4d87acac97596e9eff071cefe0ff04ee8f8128b14165e0f0d226e39c6c59" Namespace="calico-system" Pod="csi-node-driver-d66hf" WorkloadEndpoint="localhost-k8s-csi--node--driver--d66hf-eth0"
May 8 00:09:11.808257 containerd[1505]: 2025-05-08 00:09:11.789 [INFO][4737] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="dcbf4d87acac97596e9eff071cefe0ff04ee8f8128b14165e0f0d226e39c6c59" Namespace="calico-system" Pod="csi-node-driver-d66hf" WorkloadEndpoint="localhost-k8s-csi--node--driver--d66hf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--d66hf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d6b5bca2-fe34-4d13-a1a5-1648d982e2b2", ResourceVersion:"595", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 8, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dcbf4d87acac97596e9eff071cefe0ff04ee8f8128b14165e0f0d226e39c6c59", Pod:"csi-node-driver-d66hf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8adf08ad435", MAC:"82:d1:04:57:21:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:09:11.808257 containerd[1505]: 2025-05-08 00:09:11.798 [INFO][4737] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="dcbf4d87acac97596e9eff071cefe0ff04ee8f8128b14165e0f0d226e39c6c59" Namespace="calico-system" Pod="csi-node-driver-d66hf" WorkloadEndpoint="localhost-k8s-csi--node--driver--d66hf-eth0"
May 8 00:09:11.820855 systemd[1]: Started cri-containerd-1b886daf757feab41d4b7dedf2a0bce4e49d25820a98c0e2267552a6700fc9eb.scope - libcontainer container 1b886daf757feab41d4b7dedf2a0bce4e49d25820a98c0e2267552a6700fc9eb.
May 8 00:09:11.845674 containerd[1505]: time="2025-05-08T00:09:11.845419289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:09:11.845674 containerd[1505]: time="2025-05-08T00:09:11.845494279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:09:11.847607 containerd[1505]: time="2025-05-08T00:09:11.847458960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:09:11.848672 containerd[1505]: time="2025-05-08T00:09:11.847821551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:09:11.849456 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 8 00:09:11.871171 systemd[1]: Started cri-containerd-dcbf4d87acac97596e9eff071cefe0ff04ee8f8128b14165e0f0d226e39c6c59.scope - libcontainer container dcbf4d87acac97596e9eff071cefe0ff04ee8f8128b14165e0f0d226e39c6c59.
May 8 00:09:11.893035 systemd-networkd[1423]: cali739aa9d4d69: Link UP
May 8 00:09:11.893245 systemd-networkd[1423]: cali739aa9d4d69: Gained carrier
May 8 00:09:11.910882 containerd[1505]: 2025-05-08 00:09:11.559 [INFO][4846] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--bc8f4fc5f--xrvcn-eth0 calico-apiserver-bc8f4fc5f- calico-apiserver b95f31cd-1dee-4344-bda0-406b9d8df019 689 0 2025-05-08 00:08:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bc8f4fc5f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-bc8f4fc5f-xrvcn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali739aa9d4d69 [] []}} ContainerID="c9dfda9283a4cefb1cbf0e851185dba647fe36398f6bb59e84822338093e56e0" Namespace="calico-apiserver" Pod="calico-apiserver-bc8f4fc5f-xrvcn" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc8f4fc5f--xrvcn-"
May 8 00:09:11.910882 containerd[1505]: 2025-05-08 00:09:11.560 [INFO][4846] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c9dfda9283a4cefb1cbf0e851185dba647fe36398f6bb59e84822338093e56e0" Namespace="calico-apiserver" Pod="calico-apiserver-bc8f4fc5f-xrvcn" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc8f4fc5f--xrvcn-eth0"
May 8 00:09:11.910882 containerd[1505]: 2025-05-08 00:09:11.604 [INFO][4876] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c9dfda9283a4cefb1cbf0e851185dba647fe36398f6bb59e84822338093e56e0" HandleID="k8s-pod-network.c9dfda9283a4cefb1cbf0e851185dba647fe36398f6bb59e84822338093e56e0" Workload="localhost-k8s-calico--apiserver--bc8f4fc5f--xrvcn-eth0"
May 8 00:09:11.910882 containerd[1505]: 2025-05-08 00:09:11.641 [INFO][4876] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c9dfda9283a4cefb1cbf0e851185dba647fe36398f6bb59e84822338093e56e0" HandleID="k8s-pod-network.c9dfda9283a4cefb1cbf0e851185dba647fe36398f6bb59e84822338093e56e0" Workload="localhost-k8s-calico--apiserver--bc8f4fc5f--xrvcn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011c5c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-bc8f4fc5f-xrvcn", "timestamp":"2025-05-08 00:09:11.603979641 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 8 00:09:11.910882 containerd[1505]: 2025-05-08 00:09:11.641 [INFO][4876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 8 00:09:11.910882 containerd[1505]: 2025-05-08 00:09:11.772 [INFO][4876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:09:11.910882 containerd[1505]: 2025-05-08 00:09:11.772 [INFO][4876] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
May 8 00:09:11.910882 containerd[1505]: 2025-05-08 00:09:11.778 [INFO][4876] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c9dfda9283a4cefb1cbf0e851185dba647fe36398f6bb59e84822338093e56e0" host="localhost"
May 8 00:09:11.910882 containerd[1505]: 2025-05-08 00:09:11.836 [INFO][4876] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
May 8 00:09:11.910882 containerd[1505]: 2025-05-08 00:09:11.845 [INFO][4876] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
May 8 00:09:11.910882 containerd[1505]: 2025-05-08 00:09:11.848 [INFO][4876] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
May 8 00:09:11.910882 containerd[1505]: 2025-05-08 00:09:11.850 [INFO][4876] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
May 8 00:09:11.910882 containerd[1505]: 2025-05-08 00:09:11.850 [INFO][4876] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c9dfda9283a4cefb1cbf0e851185dba647fe36398f6bb59e84822338093e56e0" host="localhost"
May 8 00:09:11.910882 containerd[1505]: 2025-05-08 00:09:11.852 [INFO][4876] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c9dfda9283a4cefb1cbf0e851185dba647fe36398f6bb59e84822338093e56e0
May 8 00:09:11.910882 containerd[1505]: 2025-05-08 00:09:11.865 [INFO][4876] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c9dfda9283a4cefb1cbf0e851185dba647fe36398f6bb59e84822338093e56e0" host="localhost"
May 8 00:09:11.910882 containerd[1505]: 2025-05-08 00:09:11.873 [INFO][4876] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c9dfda9283a4cefb1cbf0e851185dba647fe36398f6bb59e84822338093e56e0" host="localhost"
May 8 00:09:11.910882 containerd[1505]: 2025-05-08 00:09:11.874 [INFO][4876] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c9dfda9283a4cefb1cbf0e851185dba647fe36398f6bb59e84822338093e56e0" host="localhost"
May 8 00:09:11.910882 containerd[1505]: 2025-05-08 00:09:11.874 [INFO][4876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:09:11.910882 containerd[1505]: 2025-05-08 00:09:11.874 [INFO][4876] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c9dfda9283a4cefb1cbf0e851185dba647fe36398f6bb59e84822338093e56e0" HandleID="k8s-pod-network.c9dfda9283a4cefb1cbf0e851185dba647fe36398f6bb59e84822338093e56e0" Workload="localhost-k8s-calico--apiserver--bc8f4fc5f--xrvcn-eth0"
May 8 00:09:11.912371 containerd[1505]: 2025-05-08 00:09:11.888 [INFO][4846] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c9dfda9283a4cefb1cbf0e851185dba647fe36398f6bb59e84822338093e56e0" Namespace="calico-apiserver" Pod="calico-apiserver-bc8f4fc5f-xrvcn" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc8f4fc5f--xrvcn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bc8f4fc5f--xrvcn-eth0", GenerateName:"calico-apiserver-bc8f4fc5f-", Namespace:"calico-apiserver", SelfLink:"", UID:"b95f31cd-1dee-4344-bda0-406b9d8df019", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 8, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bc8f4fc5f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-bc8f4fc5f-xrvcn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali739aa9d4d69", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:09:11.912371 containerd[1505]: 2025-05-08 00:09:11.888 [INFO][4846] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c9dfda9283a4cefb1cbf0e851185dba647fe36398f6bb59e84822338093e56e0" Namespace="calico-apiserver" Pod="calico-apiserver-bc8f4fc5f-xrvcn" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc8f4fc5f--xrvcn-eth0"
May 8 00:09:11.912371 containerd[1505]: 2025-05-08 00:09:11.888 [INFO][4846] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali739aa9d4d69 ContainerID="c9dfda9283a4cefb1cbf0e851185dba647fe36398f6bb59e84822338093e56e0" Namespace="calico-apiserver" Pod="calico-apiserver-bc8f4fc5f-xrvcn" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc8f4fc5f--xrvcn-eth0"
May 8 00:09:11.912371 containerd[1505]: 2025-05-08 00:09:11.891 [INFO][4846] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c9dfda9283a4cefb1cbf0e851185dba647fe36398f6bb59e84822338093e56e0" Namespace="calico-apiserver" Pod="calico-apiserver-bc8f4fc5f-xrvcn" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc8f4fc5f--xrvcn-eth0"
May 8 00:09:11.912371 containerd[1505]: 2025-05-08 00:09:11.893 [INFO][4846] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c9dfda9283a4cefb1cbf0e851185dba647fe36398f6bb59e84822338093e56e0" Namespace="calico-apiserver" Pod="calico-apiserver-bc8f4fc5f-xrvcn" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc8f4fc5f--xrvcn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bc8f4fc5f--xrvcn-eth0", GenerateName:"calico-apiserver-bc8f4fc5f-", Namespace:"calico-apiserver", SelfLink:"", UID:"b95f31cd-1dee-4344-bda0-406b9d8df019", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 8, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bc8f4fc5f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c9dfda9283a4cefb1cbf0e851185dba647fe36398f6bb59e84822338093e56e0", Pod:"calico-apiserver-bc8f4fc5f-xrvcn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali739aa9d4d69", MAC:"86:c4:f3:e5:4c:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:09:11.912371 containerd[1505]: 2025-05-08 00:09:11.906 [INFO][4846] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c9dfda9283a4cefb1cbf0e851185dba647fe36398f6bb59e84822338093e56e0" Namespace="calico-apiserver" Pod="calico-apiserver-bc8f4fc5f-xrvcn" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc8f4fc5f--xrvcn-eth0"
May 8 00:09:11.911576 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 8 00:09:11.914338 containerd[1505]: time="2025-05-08T00:09:11.914300756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc8f4fc5f-nrdtq,Uid:d2fb4029-1146-49b1-8115-09528e7b165f,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"1b886daf757feab41d4b7dedf2a0bce4e49d25820a98c0e2267552a6700fc9eb\""
May 8 00:09:11.919900 containerd[1505]: time="2025-05-08T00:09:11.918248332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\""
May 8 00:09:11.942422 containerd[1505]: time="2025-05-08T00:09:11.942361700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d66hf,Uid:d6b5bca2-fe34-4d13-a1a5-1648d982e2b2,Namespace:calico-system,Attempt:5,} returns sandbox id \"dcbf4d87acac97596e9eff071cefe0ff04ee8f8128b14165e0f0d226e39c6c59\""
May 8 00:09:11.965747 containerd[1505]: time="2025-05-08T00:09:11.965637465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:09:11.966003 containerd[1505]: time="2025-05-08T00:09:11.965759574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:09:11.966003 containerd[1505]: time="2025-05-08T00:09:11.965796463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:09:11.966003 containerd[1505]: time="2025-05-08T00:09:11.965929684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:09:11.989884 systemd[1]: Started cri-containerd-c9dfda9283a4cefb1cbf0e851185dba647fe36398f6bb59e84822338093e56e0.scope - libcontainer container c9dfda9283a4cefb1cbf0e851185dba647fe36398f6bb59e84822338093e56e0.
May 8 00:09:11.994721 systemd-networkd[1423]: cali5fe90381177: Link UP
May 8 00:09:11.996315 systemd-networkd[1423]: cali5fe90381177: Gained carrier
May 8 00:09:12.010177 kubelet[2608]: E0508 00:09:12.009195 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:09:12.019434 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 8 00:09:12.028124 containerd[1505]: 2025-05-08 00:09:11.466 [INFO][4786] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--w95np-eth0 coredns-6f6b679f8f- kube-system 5bca3bbf-f7d4-44b0-9686-15081255aefa 699 0 2025-05-08 00:08:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-w95np eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5fe90381177 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="875f28b5e2bcfee9934f581f1f08ca73a58430a65d21e14ea6894ad05047ee76" Namespace="kube-system" Pod="coredns-6f6b679f8f-w95np" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--w95np-"
May 8 00:09:12.028124 containerd[1505]: 2025-05-08 00:09:11.468 [INFO][4786] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="875f28b5e2bcfee9934f581f1f08ca73a58430a65d21e14ea6894ad05047ee76" Namespace="kube-system" Pod="coredns-6f6b679f8f-w95np" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--w95np-eth0"
May 8 00:09:12.028124 containerd[1505]: 2025-05-08 00:09:11.560 [INFO][4838] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="875f28b5e2bcfee9934f581f1f08ca73a58430a65d21e14ea6894ad05047ee76" HandleID="k8s-pod-network.875f28b5e2bcfee9934f581f1f08ca73a58430a65d21e14ea6894ad05047ee76" Workload="localhost-k8s-coredns--6f6b679f8f--w95np-eth0"
May 8 00:09:12.028124 containerd[1505]: 2025-05-08 00:09:11.646 [INFO][4838] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="875f28b5e2bcfee9934f581f1f08ca73a58430a65d21e14ea6894ad05047ee76" HandleID="k8s-pod-network.875f28b5e2bcfee9934f581f1f08ca73a58430a65d21e14ea6894ad05047ee76" Workload="localhost-k8s-coredns--6f6b679f8f--w95np-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001fd550), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-w95np", "timestamp":"2025-05-08 00:09:11.558464358 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 8 00:09:12.028124 containerd[1505]: 2025-05-08 00:09:11.647 [INFO][4838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 8 00:09:12.028124 containerd[1505]: 2025-05-08 00:09:11.874 [INFO][4838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:09:12.028124 containerd[1505]: 2025-05-08 00:09:11.874 [INFO][4838] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
May 8 00:09:12.028124 containerd[1505]: 2025-05-08 00:09:11.878 [INFO][4838] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.875f28b5e2bcfee9934f581f1f08ca73a58430a65d21e14ea6894ad05047ee76" host="localhost"
May 8 00:09:12.028124 containerd[1505]: 2025-05-08 00:09:11.938 [INFO][4838] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
May 8 00:09:12.028124 containerd[1505]: 2025-05-08 00:09:11.947 [INFO][4838] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
May 8 00:09:12.028124 containerd[1505]: 2025-05-08 00:09:11.949 [INFO][4838] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
May 8 00:09:12.028124 containerd[1505]: 2025-05-08 00:09:11.956 [INFO][4838] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
May 8 00:09:12.028124 containerd[1505]: 2025-05-08 00:09:11.957 [INFO][4838] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.875f28b5e2bcfee9934f581f1f08ca73a58430a65d21e14ea6894ad05047ee76" host="localhost"
May 8 00:09:12.028124 containerd[1505]: 2025-05-08 00:09:11.962 [INFO][4838] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.875f28b5e2bcfee9934f581f1f08ca73a58430a65d21e14ea6894ad05047ee76
May 8 00:09:12.028124 containerd[1505]: 2025-05-08 00:09:11.972 [INFO][4838] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.875f28b5e2bcfee9934f581f1f08ca73a58430a65d21e14ea6894ad05047ee76" host="localhost"
May 8 00:09:12.028124 containerd[1505]: 2025-05-08 00:09:11.980 [INFO][4838] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.875f28b5e2bcfee9934f581f1f08ca73a58430a65d21e14ea6894ad05047ee76" host="localhost"
May 8 00:09:12.028124 containerd[1505]: 2025-05-08 00:09:11.981 [INFO][4838] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.875f28b5e2bcfee9934f581f1f08ca73a58430a65d21e14ea6894ad05047ee76" host="localhost"
May 8 00:09:12.028124 containerd[1505]: 2025-05-08 00:09:11.981 [INFO][4838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:09:12.028124 containerd[1505]: 2025-05-08 00:09:11.981 [INFO][4838] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="875f28b5e2bcfee9934f581f1f08ca73a58430a65d21e14ea6894ad05047ee76" HandleID="k8s-pod-network.875f28b5e2bcfee9934f581f1f08ca73a58430a65d21e14ea6894ad05047ee76" Workload="localhost-k8s-coredns--6f6b679f8f--w95np-eth0"
May 8 00:09:12.028723 containerd[1505]: 2025-05-08 00:09:11.985 [INFO][4786] cni-plugin/k8s.go 386: Populated endpoint ContainerID="875f28b5e2bcfee9934f581f1f08ca73a58430a65d21e14ea6894ad05047ee76" Namespace="kube-system" Pod="coredns-6f6b679f8f-w95np" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--w95np-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--w95np-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5bca3bbf-f7d4-44b0-9686-15081255aefa", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 8, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-w95np", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5fe90381177", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:09:12.028723 containerd[1505]: 2025-05-08 00:09:11.987 [INFO][4786] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="875f28b5e2bcfee9934f581f1f08ca73a58430a65d21e14ea6894ad05047ee76" Namespace="kube-system" Pod="coredns-6f6b679f8f-w95np" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--w95np-eth0"
May 8 00:09:12.028723 containerd[1505]: 2025-05-08 00:09:11.988 [INFO][4786] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5fe90381177 ContainerID="875f28b5e2bcfee9934f581f1f08ca73a58430a65d21e14ea6894ad05047ee76" Namespace="kube-system" Pod="coredns-6f6b679f8f-w95np" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--w95np-eth0"
May 8 00:09:12.028723 containerd[1505]: 2025-05-08 00:09:11.996 [INFO][4786] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="875f28b5e2bcfee9934f581f1f08ca73a58430a65d21e14ea6894ad05047ee76" Namespace="kube-system" Pod="coredns-6f6b679f8f-w95np" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--w95np-eth0"
May 8 00:09:12.028723 containerd[1505]: 2025-05-08 00:09:11.996 [INFO][4786] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="875f28b5e2bcfee9934f581f1f08ca73a58430a65d21e14ea6894ad05047ee76" Namespace="kube-system" Pod="coredns-6f6b679f8f-w95np" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--w95np-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--w95np-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5bca3bbf-f7d4-44b0-9686-15081255aefa", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 8, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"875f28b5e2bcfee9934f581f1f08ca73a58430a65d21e14ea6894ad05047ee76", Pod:"coredns-6f6b679f8f-w95np", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5fe90381177", MAC:"36:0e:03:31:22:9e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:09:12.028723 containerd[1505]: 2025-05-08 00:09:12.015 [INFO][4786] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="875f28b5e2bcfee9934f581f1f08ca73a58430a65d21e14ea6894ad05047ee76" Namespace="kube-system" Pod="coredns-6f6b679f8f-w95np" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--w95np-eth0"
May 8 00:09:12.051942 kubelet[2608]: I0508 00:09:12.050944 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-t4grq" podStartSLOduration=29.050919754 podStartE2EDuration="29.050919754s" podCreationTimestamp="2025-05-08 00:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:09:12.030546716 +0000 UTC m=+33.495374333" watchObservedRunningTime="2025-05-08 00:09:12.050919754 +0000 UTC m=+33.515747371"
May 8 00:09:12.060793 containerd[1505]: time="2025-05-08T00:09:12.060510870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc8f4fc5f-xrvcn,Uid:b95f31cd-1dee-4344-bda0-406b9d8df019,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"c9dfda9283a4cefb1cbf0e851185dba647fe36398f6bb59e84822338093e56e0\""
May 8 00:09:12.085669 containerd[1505]: time="2025-05-08T00:09:12.083415774Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:09:12.085669 containerd[1505]: time="2025-05-08T00:09:12.083489382Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:09:12.085669 containerd[1505]: time="2025-05-08T00:09:12.083503068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:09:12.092674 containerd[1505]: time="2025-05-08T00:09:12.083723892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:09:12.118270 systemd-networkd[1423]: calib6f7c71e297: Link UP May 8 00:09:12.119393 systemd-networkd[1423]: calib6f7c71e297: Gained carrier May 8 00:09:12.124977 systemd[1]: Started cri-containerd-875f28b5e2bcfee9934f581f1f08ca73a58430a65d21e14ea6894ad05047ee76.scope - libcontainer container 875f28b5e2bcfee9934f581f1f08ca73a58430a65d21e14ea6894ad05047ee76. May 8 00:09:12.135627 containerd[1505]: 2025-05-08 00:09:11.603 [INFO][4832] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--57974f499f--vv54k-eth0 calico-kube-controllers-57974f499f- calico-system 5f9ac193-cc58-42e7-b80a-b5e62d33d96a 698 0 2025-05-08 00:08:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:57974f499f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-57974f499f-vv54k eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib6f7c71e297 [] []}} ContainerID="8186ab8174494e27d2762d06726be6c338012b82f59ee9793142ee417ce1d59e" Namespace="calico-system" Pod="calico-kube-controllers-57974f499f-vv54k" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57974f499f--vv54k-" May 8 00:09:12.135627 containerd[1505]: 2025-05-08 00:09:11.603 [INFO][4832] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8186ab8174494e27d2762d06726be6c338012b82f59ee9793142ee417ce1d59e" Namespace="calico-system" Pod="calico-kube-controllers-57974f499f-vv54k" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57974f499f--vv54k-eth0" May 8 00:09:12.135627 containerd[1505]: 2025-05-08 00:09:11.651 [INFO][4901] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8186ab8174494e27d2762d06726be6c338012b82f59ee9793142ee417ce1d59e" HandleID="k8s-pod-network.8186ab8174494e27d2762d06726be6c338012b82f59ee9793142ee417ce1d59e" Workload="localhost-k8s-calico--kube--controllers--57974f499f--vv54k-eth0" May 8 00:09:12.135627 containerd[1505]: 2025-05-08 00:09:11.736 [INFO][4901] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8186ab8174494e27d2762d06726be6c338012b82f59ee9793142ee417ce1d59e" HandleID="k8s-pod-network.8186ab8174494e27d2762d06726be6c338012b82f59ee9793142ee417ce1d59e" Workload="localhost-k8s-calico--kube--controllers--57974f499f--vv54k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f56e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-57974f499f-vv54k", "timestamp":"2025-05-08 00:09:11.651347884 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:09:12.135627 containerd[1505]: 2025-05-08 00:09:11.736 [INFO][4901] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:09:12.135627 containerd[1505]: 2025-05-08 00:09:11.981 [INFO][4901] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:09:12.135627 containerd[1505]: 2025-05-08 00:09:11.981 [INFO][4901] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:09:12.135627 containerd[1505]: 2025-05-08 00:09:11.985 [INFO][4901] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8186ab8174494e27d2762d06726be6c338012b82f59ee9793142ee417ce1d59e" host="localhost" May 8 00:09:12.135627 containerd[1505]: 2025-05-08 00:09:12.040 [INFO][4901] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:09:12.135627 containerd[1505]: 2025-05-08 00:09:12.054 [INFO][4901] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:09:12.135627 containerd[1505]: 2025-05-08 00:09:12.059 [INFO][4901] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:09:12.135627 containerd[1505]: 2025-05-08 00:09:12.069 [INFO][4901] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:09:12.135627 containerd[1505]: 2025-05-08 00:09:12.070 [INFO][4901] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8186ab8174494e27d2762d06726be6c338012b82f59ee9793142ee417ce1d59e" host="localhost" May 8 00:09:12.135627 containerd[1505]: 2025-05-08 00:09:12.072 [INFO][4901] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8186ab8174494e27d2762d06726be6c338012b82f59ee9793142ee417ce1d59e May 8 00:09:12.135627 containerd[1505]: 2025-05-08 00:09:12.077 [INFO][4901] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8186ab8174494e27d2762d06726be6c338012b82f59ee9793142ee417ce1d59e" host="localhost" May 8 00:09:12.135627 containerd[1505]: 2025-05-08 00:09:12.086 [INFO][4901] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.8186ab8174494e27d2762d06726be6c338012b82f59ee9793142ee417ce1d59e" host="localhost" May 8 00:09:12.135627 containerd[1505]: 2025-05-08 00:09:12.086 [INFO][4901] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.8186ab8174494e27d2762d06726be6c338012b82f59ee9793142ee417ce1d59e" host="localhost" May 8 00:09:12.135627 containerd[1505]: 2025-05-08 00:09:12.086 [INFO][4901] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:09:12.135627 containerd[1505]: 2025-05-08 00:09:12.086 [INFO][4901] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="8186ab8174494e27d2762d06726be6c338012b82f59ee9793142ee417ce1d59e" HandleID="k8s-pod-network.8186ab8174494e27d2762d06726be6c338012b82f59ee9793142ee417ce1d59e" Workload="localhost-k8s-calico--kube--controllers--57974f499f--vv54k-eth0" May 8 00:09:12.136707 containerd[1505]: 2025-05-08 00:09:12.097 [INFO][4832] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8186ab8174494e27d2762d06726be6c338012b82f59ee9793142ee417ce1d59e" Namespace="calico-system" Pod="calico-kube-controllers-57974f499f-vv54k" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57974f499f--vv54k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--57974f499f--vv54k-eth0", GenerateName:"calico-kube-controllers-57974f499f-", Namespace:"calico-system", SelfLink:"", UID:"5f9ac193-cc58-42e7-b80a-b5e62d33d96a", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57974f499f", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-57974f499f-vv54k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib6f7c71e297", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:09:12.136707 containerd[1505]: 2025-05-08 00:09:12.099 [INFO][4832] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="8186ab8174494e27d2762d06726be6c338012b82f59ee9793142ee417ce1d59e" Namespace="calico-system" Pod="calico-kube-controllers-57974f499f-vv54k" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57974f499f--vv54k-eth0" May 8 00:09:12.136707 containerd[1505]: 2025-05-08 00:09:12.099 [INFO][4832] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6f7c71e297 ContainerID="8186ab8174494e27d2762d06726be6c338012b82f59ee9793142ee417ce1d59e" Namespace="calico-system" Pod="calico-kube-controllers-57974f499f-vv54k" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57974f499f--vv54k-eth0" May 8 00:09:12.136707 containerd[1505]: 2025-05-08 00:09:12.120 [INFO][4832] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8186ab8174494e27d2762d06726be6c338012b82f59ee9793142ee417ce1d59e" Namespace="calico-system" Pod="calico-kube-controllers-57974f499f-vv54k" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57974f499f--vv54k-eth0" May 8 00:09:12.136707 containerd[1505]: 2025-05-08 00:09:12.120 [INFO][4832] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8186ab8174494e27d2762d06726be6c338012b82f59ee9793142ee417ce1d59e" Namespace="calico-system" Pod="calico-kube-controllers-57974f499f-vv54k" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57974f499f--vv54k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--57974f499f--vv54k-eth0", GenerateName:"calico-kube-controllers-57974f499f-", Namespace:"calico-system", SelfLink:"", UID:"5f9ac193-cc58-42e7-b80a-b5e62d33d96a", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57974f499f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8186ab8174494e27d2762d06726be6c338012b82f59ee9793142ee417ce1d59e", Pod:"calico-kube-controllers-57974f499f-vv54k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib6f7c71e297", MAC:"36:2c:f0:d3:aa:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:09:12.136707 containerd[1505]: 2025-05-08 00:09:12.130 [INFO][4832] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="8186ab8174494e27d2762d06726be6c338012b82f59ee9793142ee417ce1d59e" Namespace="calico-system" Pod="calico-kube-controllers-57974f499f-vv54k" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57974f499f--vv54k-eth0" May 8 00:09:12.144280 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:09:12.172607 containerd[1505]: time="2025-05-08T00:09:12.168996446Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:09:12.172607 containerd[1505]: time="2025-05-08T00:09:12.169881839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:09:12.172607 containerd[1505]: time="2025-05-08T00:09:12.169901937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:09:12.172607 containerd[1505]: time="2025-05-08T00:09:12.170015270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:09:12.179093 containerd[1505]: time="2025-05-08T00:09:12.177774675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-w95np,Uid:5bca3bbf-f7d4-44b0-9686-15081255aefa,Namespace:kube-system,Attempt:5,} returns sandbox id \"875f28b5e2bcfee9934f581f1f08ca73a58430a65d21e14ea6894ad05047ee76\"" May 8 00:09:12.179699 kubelet[2608]: E0508 00:09:12.179346 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:09:12.184761 containerd[1505]: time="2025-05-08T00:09:12.184697619Z" level=info msg="CreateContainer within sandbox \"875f28b5e2bcfee9934f581f1f08ca73a58430a65d21e14ea6894ad05047ee76\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:09:12.200786 systemd[1]: Started cri-containerd-8186ab8174494e27d2762d06726be6c338012b82f59ee9793142ee417ce1d59e.scope - libcontainer container 8186ab8174494e27d2762d06726be6c338012b82f59ee9793142ee417ce1d59e. May 8 00:09:12.210197 containerd[1505]: time="2025-05-08T00:09:12.210108701Z" level=info msg="CreateContainer within sandbox \"875f28b5e2bcfee9934f581f1f08ca73a58430a65d21e14ea6894ad05047ee76\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a94735ebbd749724ba1c66755a219905e8150861e81e36387d6664ef9d6e6107\"" May 8 00:09:12.211176 containerd[1505]: time="2025-05-08T00:09:12.211128436Z" level=info msg="StartContainer for \"a94735ebbd749724ba1c66755a219905e8150861e81e36387d6664ef9d6e6107\"" May 8 00:09:12.227503 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:09:12.248782 systemd[1]: Started cri-containerd-a94735ebbd749724ba1c66755a219905e8150861e81e36387d6664ef9d6e6107.scope - libcontainer container a94735ebbd749724ba1c66755a219905e8150861e81e36387d6664ef9d6e6107. 
May 8 00:09:12.270299 containerd[1505]: time="2025-05-08T00:09:12.270254186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57974f499f-vv54k,Uid:5f9ac193-cc58-42e7-b80a-b5e62d33d96a,Namespace:calico-system,Attempt:5,} returns sandbox id \"8186ab8174494e27d2762d06726be6c338012b82f59ee9793142ee417ce1d59e\"" May 8 00:09:12.288853 containerd[1505]: time="2025-05-08T00:09:12.288812817Z" level=info msg="StartContainer for \"a94735ebbd749724ba1c66755a219905e8150861e81e36387d6664ef9d6e6107\" returns successfully" May 8 00:09:12.952774 systemd-networkd[1423]: cali739aa9d4d69: Gained IPv6LL May 8 00:09:12.955693 systemd-networkd[1423]: cali105f45861a8: Gained IPv6LL May 8 00:09:13.046034 kubelet[2608]: E0508 00:09:13.044547 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:09:13.053457 kubelet[2608]: E0508 00:09:13.053422 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:09:13.067941 kubelet[2608]: I0508 00:09:13.067646 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-w95np" podStartSLOduration=30.067621062 podStartE2EDuration="30.067621062s" podCreationTimestamp="2025-05-08 00:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:09:13.057632802 +0000 UTC m=+34.522460439" watchObservedRunningTime="2025-05-08 00:09:13.067621062 +0000 UTC m=+34.532448679" May 8 00:09:13.081878 systemd-networkd[1423]: cali26cfaecdc60: Gained IPv6LL May 8 00:09:13.464829 systemd-networkd[1423]: cali8adf08ad435: Gained IPv6LL May 8 00:09:13.528768 systemd-networkd[1423]: cali5fe90381177: Gained IPv6LL May 8 00:09:13.699423 
systemd[1]: Started sshd@8-10.0.0.79:22-10.0.0.1:52206.service - OpenSSH per-connection server daemon (10.0.0.1:52206). May 8 00:09:13.721753 systemd-networkd[1423]: calib6f7c71e297: Gained IPv6LL May 8 00:09:13.748649 sshd[5329]: Accepted publickey for core from 10.0.0.1 port 52206 ssh2: RSA SHA256:kwO0JqEIt1ObdnqYCFs6QolAz4wrphlF1QS6lWhQBXI May 8 00:09:13.750830 sshd-session[5329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:13.756557 systemd-logind[1492]: New session 9 of user core. May 8 00:09:13.763768 systemd[1]: Started session-9.scope - Session 9 of User core. May 8 00:09:13.785713 systemd-networkd[1423]: vxlan.calico: Gained IPv6LL May 8 00:09:13.912250 sshd[5331]: Connection closed by 10.0.0.1 port 52206 May 8 00:09:13.912891 sshd-session[5329]: pam_unix(sshd:session): session closed for user core May 8 00:09:13.917657 systemd[1]: sshd@8-10.0.0.79:22-10.0.0.1:52206.service: Deactivated successfully. May 8 00:09:13.919885 systemd[1]: session-9.scope: Deactivated successfully. May 8 00:09:13.921360 systemd-logind[1492]: Session 9 logged out. Waiting for processes to exit. May 8 00:09:13.922470 systemd-logind[1492]: Removed session 9. 
May 8 00:09:14.055663 kubelet[2608]: E0508 00:09:14.055631 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:09:14.056157 kubelet[2608]: E0508 00:09:14.055631 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:09:14.175437 containerd[1505]: time="2025-05-08T00:09:14.175377946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:09:14.176241 containerd[1505]: time="2025-05-08T00:09:14.176208506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 8 00:09:14.177378 containerd[1505]: time="2025-05-08T00:09:14.177326786Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:09:14.180497 containerd[1505]: time="2025-05-08T00:09:14.180458347Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:09:14.180992 containerd[1505]: time="2025-05-08T00:09:14.180955631Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 2.262670961s" May 8 00:09:14.181040 containerd[1505]: time="2025-05-08T00:09:14.180993473Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 8 00:09:14.182450 containerd[1505]: time="2025-05-08T00:09:14.182425402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 8 00:09:14.183292 containerd[1505]: time="2025-05-08T00:09:14.183267503Z" level=info msg="CreateContainer within sandbox \"1b886daf757feab41d4b7dedf2a0bce4e49d25820a98c0e2267552a6700fc9eb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:09:14.406504 containerd[1505]: time="2025-05-08T00:09:14.406384860Z" level=info msg="CreateContainer within sandbox \"1b886daf757feab41d4b7dedf2a0bce4e49d25820a98c0e2267552a6700fc9eb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a729648976893f5d8008503bf7df570ed09d4f3bc23c4d3cf232e38d80f4dea6\"" May 8 00:09:14.407347 containerd[1505]: time="2025-05-08T00:09:14.407312181Z" level=info msg="StartContainer for \"a729648976893f5d8008503bf7df570ed09d4f3bc23c4d3cf232e38d80f4dea6\"" May 8 00:09:14.448762 systemd[1]: Started cri-containerd-a729648976893f5d8008503bf7df570ed09d4f3bc23c4d3cf232e38d80f4dea6.scope - libcontainer container a729648976893f5d8008503bf7df570ed09d4f3bc23c4d3cf232e38d80f4dea6. 
May 8 00:09:14.492152 containerd[1505]: time="2025-05-08T00:09:14.492107512Z" level=info msg="StartContainer for \"a729648976893f5d8008503bf7df570ed09d4f3bc23c4d3cf232e38d80f4dea6\" returns successfully" May 8 00:09:15.067499 kubelet[2608]: E0508 00:09:15.067442 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:09:15.080485 kubelet[2608]: I0508 00:09:15.080401 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-nrdtq" podStartSLOduration=23.816127096 podStartE2EDuration="26.080377964s" podCreationTimestamp="2025-05-08 00:08:49 +0000 UTC" firstStartedPulling="2025-05-08 00:09:11.917540803 +0000 UTC m=+33.382368420" lastFinishedPulling="2025-05-08 00:09:14.181791671 +0000 UTC m=+35.646619288" observedRunningTime="2025-05-08 00:09:15.079751899 +0000 UTC m=+36.544579516" watchObservedRunningTime="2025-05-08 00:09:15.080377964 +0000 UTC m=+36.545205581" May 8 00:09:15.822519 containerd[1505]: time="2025-05-08T00:09:15.822450408Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:09:15.823407 containerd[1505]: time="2025-05-08T00:09:15.823365296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 8 00:09:15.824667 containerd[1505]: time="2025-05-08T00:09:15.824613320Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:09:15.827045 containerd[1505]: time="2025-05-08T00:09:15.827008878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" May 8 00:09:15.827638 containerd[1505]: time="2025-05-08T00:09:15.827605649Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.645133779s" May 8 00:09:15.827638 containerd[1505]: time="2025-05-08T00:09:15.827636417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 8 00:09:15.829865 containerd[1505]: time="2025-05-08T00:09:15.829824516Z" level=info msg="CreateContainer within sandbox \"dcbf4d87acac97596e9eff071cefe0ff04ee8f8128b14165e0f0d226e39c6c59\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 8 00:09:15.830034 containerd[1505]: time="2025-05-08T00:09:15.829962335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 00:09:15.855949 containerd[1505]: time="2025-05-08T00:09:15.855864667Z" level=info msg="CreateContainer within sandbox \"dcbf4d87acac97596e9eff071cefe0ff04ee8f8128b14165e0f0d226e39c6c59\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ff25d529f29275786003b1c62cf6729fe8d278d652939537df964834bad4a0ce\"" May 8 00:09:15.856474 containerd[1505]: time="2025-05-08T00:09:15.856439266Z" level=info msg="StartContainer for \"ff25d529f29275786003b1c62cf6729fe8d278d652939537df964834bad4a0ce\"" May 8 00:09:15.890744 systemd[1]: Started cri-containerd-ff25d529f29275786003b1c62cf6729fe8d278d652939537df964834bad4a0ce.scope - libcontainer container ff25d529f29275786003b1c62cf6729fe8d278d652939537df964834bad4a0ce. 
May 8 00:09:15.972082 containerd[1505]: time="2025-05-08T00:09:15.972018674Z" level=info msg="StartContainer for \"ff25d529f29275786003b1c62cf6729fe8d278d652939537df964834bad4a0ce\" returns successfully" May 8 00:09:16.072815 kubelet[2608]: I0508 00:09:16.072684 2608 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:09:16.750943 containerd[1505]: time="2025-05-08T00:09:16.750861106Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:09:16.786282 containerd[1505]: time="2025-05-08T00:09:16.786187581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 8 00:09:16.788625 containerd[1505]: time="2025-05-08T00:09:16.788556549Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 958.571382ms" May 8 00:09:16.788625 containerd[1505]: time="2025-05-08T00:09:16.788615770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 8 00:09:16.789697 containerd[1505]: time="2025-05-08T00:09:16.789652728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 8 00:09:16.790763 containerd[1505]: time="2025-05-08T00:09:16.790720363Z" level=info msg="CreateContainer within sandbox \"c9dfda9283a4cefb1cbf0e851185dba647fe36398f6bb59e84822338093e56e0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:09:17.340067 containerd[1505]: time="2025-05-08T00:09:17.340003187Z" level=info msg="CreateContainer within 
sandbox \"c9dfda9283a4cefb1cbf0e851185dba647fe36398f6bb59e84822338093e56e0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"26940cf1827f8fe83bbf53907f10048b6d71760dd8fe488875662ae85f5d5c73\"" May 8 00:09:17.340744 containerd[1505]: time="2025-05-08T00:09:17.340710696Z" level=info msg="StartContainer for \"26940cf1827f8fe83bbf53907f10048b6d71760dd8fe488875662ae85f5d5c73\"" May 8 00:09:17.379831 systemd[1]: Started cri-containerd-26940cf1827f8fe83bbf53907f10048b6d71760dd8fe488875662ae85f5d5c73.scope - libcontainer container 26940cf1827f8fe83bbf53907f10048b6d71760dd8fe488875662ae85f5d5c73. May 8 00:09:17.426556 containerd[1505]: time="2025-05-08T00:09:17.426505448Z" level=info msg="StartContainer for \"26940cf1827f8fe83bbf53907f10048b6d71760dd8fe488875662ae85f5d5c73\" returns successfully" May 8 00:09:18.091708 kubelet[2608]: I0508 00:09:18.091630 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-bc8f4fc5f-xrvcn" podStartSLOduration=24.366489051 podStartE2EDuration="29.091611469s" podCreationTimestamp="2025-05-08 00:08:49 +0000 UTC" firstStartedPulling="2025-05-08 00:09:12.06434418 +0000 UTC m=+33.529171797" lastFinishedPulling="2025-05-08 00:09:16.789466598 +0000 UTC m=+38.254294215" observedRunningTime="2025-05-08 00:09:18.090603927 +0000 UTC m=+39.555431534" watchObservedRunningTime="2025-05-08 00:09:18.091611469 +0000 UTC m=+39.556439086" May 8 00:09:18.941297 systemd[1]: Started sshd@9-10.0.0.79:22-10.0.0.1:60426.service - OpenSSH per-connection server daemon (10.0.0.1:60426). May 8 00:09:19.022320 sshd[5487]: Accepted publickey for core from 10.0.0.1 port 60426 ssh2: RSA SHA256:kwO0JqEIt1ObdnqYCFs6QolAz4wrphlF1QS6lWhQBXI May 8 00:09:19.024155 sshd-session[5487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:19.029925 systemd-logind[1492]: New session 10 of user core. 
May 8 00:09:19.038752 systemd[1]: Started session-10.scope - Session 10 of User core.
May 8 00:09:19.082001 kubelet[2608]: I0508 00:09:19.081946 2608 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 8 00:09:19.196228 sshd[5489]: Connection closed by 10.0.0.1 port 60426
May 8 00:09:19.196565 sshd-session[5487]: pam_unix(sshd:session): session closed for user core
May 8 00:09:19.201844 systemd[1]: sshd@9-10.0.0.79:22-10.0.0.1:60426.service: Deactivated successfully.
May 8 00:09:19.205018 systemd[1]: session-10.scope: Deactivated successfully.
May 8 00:09:19.205938 systemd-logind[1492]: Session 10 logged out. Waiting for processes to exit.
May 8 00:09:19.207038 systemd-logind[1492]: Removed session 10.
May 8 00:09:19.272708 containerd[1505]: time="2025-05-08T00:09:19.272620459Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:09:19.273868 containerd[1505]: time="2025-05-08T00:09:19.273796618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138"
May 8 00:09:19.275452 containerd[1505]: time="2025-05-08T00:09:19.275406871Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:09:19.278123 containerd[1505]: time="2025-05-08T00:09:19.278080841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:09:19.278729 containerd[1505]: time="2025-05-08T00:09:19.278694073Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 2.489000348s"
May 8 00:09:19.278729 containerd[1505]: time="2025-05-08T00:09:19.278721083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\""
May 8 00:09:19.279713 containerd[1505]: time="2025-05-08T00:09:19.279687407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\""
May 8 00:09:19.288439 containerd[1505]: time="2025-05-08T00:09:19.288395826Z" level=info msg="CreateContainer within sandbox \"8186ab8174494e27d2762d06726be6c338012b82f59ee9793142ee417ce1d59e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
May 8 00:09:19.306445 containerd[1505]: time="2025-05-08T00:09:19.306387515Z" level=info msg="CreateContainer within sandbox \"8186ab8174494e27d2762d06726be6c338012b82f59ee9793142ee417ce1d59e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ad57b25d45121ffa31ca12215e9e68880c2cdd11bb33eaf52f8042d05490c752\""
May 8 00:09:19.307194 containerd[1505]: time="2025-05-08T00:09:19.307049407Z" level=info msg="StartContainer for \"ad57b25d45121ffa31ca12215e9e68880c2cdd11bb33eaf52f8042d05490c752\""
May 8 00:09:19.343834 systemd[1]: Started cri-containerd-ad57b25d45121ffa31ca12215e9e68880c2cdd11bb33eaf52f8042d05490c752.scope - libcontainer container ad57b25d45121ffa31ca12215e9e68880c2cdd11bb33eaf52f8042d05490c752.
May 8 00:09:19.396696 containerd[1505]: time="2025-05-08T00:09:19.396520982Z" level=info msg="StartContainer for \"ad57b25d45121ffa31ca12215e9e68880c2cdd11bb33eaf52f8042d05490c752\" returns successfully"
May 8 00:09:20.099170 kubelet[2608]: I0508 00:09:20.099081 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-57974f499f-vv54k" podStartSLOduration=24.092575403 podStartE2EDuration="31.099063221s" podCreationTimestamp="2025-05-08 00:08:49 +0000 UTC" firstStartedPulling="2025-05-08 00:09:12.273016756 +0000 UTC m=+33.737844373" lastFinishedPulling="2025-05-08 00:09:19.279504574 +0000 UTC m=+40.744332191" observedRunningTime="2025-05-08 00:09:20.098601023 +0000 UTC m=+41.563428640" watchObservedRunningTime="2025-05-08 00:09:20.099063221 +0000 UTC m=+41.563890838"
May 8 00:09:20.656034 kubelet[2608]: I0508 00:09:20.655982 2608 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 8 00:09:21.090736 kubelet[2608]: I0508 00:09:21.090692 2608 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 8 00:09:21.159289 containerd[1505]: time="2025-05-08T00:09:21.159229679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:09:21.167440 containerd[1505]: time="2025-05-08T00:09:21.167391752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773"
May 8 00:09:21.222310 containerd[1505]: time="2025-05-08T00:09:21.222267184Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:09:21.319929 containerd[1505]: time="2025-05-08T00:09:21.319864644Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:09:21.320782 containerd[1505]: time="2025-05-08T00:09:21.320751799Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.041034877s"
May 8 00:09:21.320839 containerd[1505]: time="2025-05-08T00:09:21.320792316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\""
May 8 00:09:21.322618 containerd[1505]: time="2025-05-08T00:09:21.322567127Z" level=info msg="CreateContainer within sandbox \"dcbf4d87acac97596e9eff071cefe0ff04ee8f8128b14165e0f0d226e39c6c59\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 8 00:09:21.823996 containerd[1505]: time="2025-05-08T00:09:21.823929647Z" level=info msg="CreateContainer within sandbox \"dcbf4d87acac97596e9eff071cefe0ff04ee8f8128b14165e0f0d226e39c6c59\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c73af2227e5bdf01037cdea8a5a1c726c3acc6c90316a625ee57fb106c9c8014\""
May 8 00:09:21.824708 containerd[1505]: time="2025-05-08T00:09:21.824652054Z" level=info msg="StartContainer for \"c73af2227e5bdf01037cdea8a5a1c726c3acc6c90316a625ee57fb106c9c8014\""
May 8 00:09:21.863861 systemd[1]: Started cri-containerd-c73af2227e5bdf01037cdea8a5a1c726c3acc6c90316a625ee57fb106c9c8014.scope - libcontainer container c73af2227e5bdf01037cdea8a5a1c726c3acc6c90316a625ee57fb106c9c8014.
May 8 00:09:21.906281 containerd[1505]: time="2025-05-08T00:09:21.906225613Z" level=info msg="StartContainer for \"c73af2227e5bdf01037cdea8a5a1c726c3acc6c90316a625ee57fb106c9c8014\" returns successfully"
May 8 00:09:22.111921 kubelet[2608]: I0508 00:09:22.111713 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-d66hf" podStartSLOduration=23.73416013 podStartE2EDuration="33.111690393s" podCreationTimestamp="2025-05-08 00:08:49 +0000 UTC" firstStartedPulling="2025-05-08 00:09:11.943940356 +0000 UTC m=+33.408767973" lastFinishedPulling="2025-05-08 00:09:21.321470618 +0000 UTC m=+42.786298236" observedRunningTime="2025-05-08 00:09:22.10956862 +0000 UTC m=+43.574396237" watchObservedRunningTime="2025-05-08 00:09:22.111690393 +0000 UTC m=+43.576518010"
May 8 00:09:22.715661 kubelet[2608]: I0508 00:09:22.715607 2608 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 8 00:09:22.715661 kubelet[2608]: I0508 00:09:22.715677 2608 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 8 00:09:24.217239 systemd[1]: Started sshd@10-10.0.0.79:22-10.0.0.1:60430.service - OpenSSH per-connection server daemon (10.0.0.1:60430).
May 8 00:09:24.272409 sshd[5592]: Accepted publickey for core from 10.0.0.1 port 60430 ssh2: RSA SHA256:kwO0JqEIt1ObdnqYCFs6QolAz4wrphlF1QS6lWhQBXI
May 8 00:09:24.274207 sshd-session[5592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:09:24.279084 systemd-logind[1492]: New session 11 of user core.
May 8 00:09:24.285765 systemd[1]: Started session-11.scope - Session 11 of User core.
May 8 00:09:24.421504 sshd[5594]: Connection closed by 10.0.0.1 port 60430
May 8 00:09:24.421935 sshd-session[5592]: pam_unix(sshd:session): session closed for user core
May 8 00:09:24.433988 systemd[1]: sshd@10-10.0.0.79:22-10.0.0.1:60430.service: Deactivated successfully.
May 8 00:09:24.436133 systemd[1]: session-11.scope: Deactivated successfully.
May 8 00:09:24.439109 systemd-logind[1492]: Session 11 logged out. Waiting for processes to exit.
May 8 00:09:24.443882 systemd[1]: Started sshd@11-10.0.0.79:22-10.0.0.1:60444.service - OpenSSH per-connection server daemon (10.0.0.1:60444).
May 8 00:09:24.445038 systemd-logind[1492]: Removed session 11.
May 8 00:09:24.483354 sshd[5607]: Accepted publickey for core from 10.0.0.1 port 60444 ssh2: RSA SHA256:kwO0JqEIt1ObdnqYCFs6QolAz4wrphlF1QS6lWhQBXI
May 8 00:09:24.485343 sshd-session[5607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:09:24.491524 systemd-logind[1492]: New session 12 of user core.
May 8 00:09:24.502764 systemd[1]: Started session-12.scope - Session 12 of User core.
May 8 00:09:24.673819 sshd[5610]: Connection closed by 10.0.0.1 port 60444
May 8 00:09:24.674360 sshd-session[5607]: pam_unix(sshd:session): session closed for user core
May 8 00:09:24.684905 systemd[1]: sshd@11-10.0.0.79:22-10.0.0.1:60444.service: Deactivated successfully.
May 8 00:09:24.687632 systemd[1]: session-12.scope: Deactivated successfully.
May 8 00:09:24.690781 systemd-logind[1492]: Session 12 logged out. Waiting for processes to exit.
May 8 00:09:24.701105 systemd[1]: Started sshd@12-10.0.0.79:22-10.0.0.1:60446.service - OpenSSH per-connection server daemon (10.0.0.1:60446).
May 8 00:09:24.705142 systemd-logind[1492]: Removed session 12.
May 8 00:09:24.750616 sshd[5620]: Accepted publickey for core from 10.0.0.1 port 60446 ssh2: RSA SHA256:kwO0JqEIt1ObdnqYCFs6QolAz4wrphlF1QS6lWhQBXI
May 8 00:09:24.751667 sshd-session[5620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:09:24.759137 systemd-logind[1492]: New session 13 of user core.
May 8 00:09:24.761805 systemd[1]: Started session-13.scope - Session 13 of User core.
May 8 00:09:24.880989 sshd[5623]: Connection closed by 10.0.0.1 port 60446
May 8 00:09:24.881370 sshd-session[5620]: pam_unix(sshd:session): session closed for user core
May 8 00:09:24.885391 systemd[1]: sshd@12-10.0.0.79:22-10.0.0.1:60446.service: Deactivated successfully.
May 8 00:09:24.887881 systemd[1]: session-13.scope: Deactivated successfully.
May 8 00:09:24.888676 systemd-logind[1492]: Session 13 logged out. Waiting for processes to exit.
May 8 00:09:24.889856 systemd-logind[1492]: Removed session 13.
May 8 00:09:29.894262 systemd[1]: Started sshd@13-10.0.0.79:22-10.0.0.1:35558.service - OpenSSH per-connection server daemon (10.0.0.1:35558).
May 8 00:09:29.939133 sshd[5637]: Accepted publickey for core from 10.0.0.1 port 35558 ssh2: RSA SHA256:kwO0JqEIt1ObdnqYCFs6QolAz4wrphlF1QS6lWhQBXI
May 8 00:09:29.940817 sshd-session[5637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:09:29.945116 systemd-logind[1492]: New session 14 of user core.
May 8 00:09:29.960736 systemd[1]: Started session-14.scope - Session 14 of User core.
May 8 00:09:30.095391 sshd[5639]: Connection closed by 10.0.0.1 port 35558
May 8 00:09:30.095937 sshd-session[5637]: pam_unix(sshd:session): session closed for user core
May 8 00:09:30.101063 systemd[1]: sshd@13-10.0.0.79:22-10.0.0.1:35558.service: Deactivated successfully.
May 8 00:09:30.103476 systemd[1]: session-14.scope: Deactivated successfully.
May 8 00:09:30.104368 systemd-logind[1492]: Session 14 logged out. Waiting for processes to exit.
May 8 00:09:30.105383 systemd-logind[1492]: Removed session 14.
May 8 00:09:31.164111 kubelet[2608]: I0508 00:09:31.164061 2608 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 8 00:09:35.110005 systemd[1]: Started sshd@14-10.0.0.79:22-10.0.0.1:49726.service - OpenSSH per-connection server daemon (10.0.0.1:49726).
May 8 00:09:35.163648 sshd[5725]: Accepted publickey for core from 10.0.0.1 port 49726 ssh2: RSA SHA256:kwO0JqEIt1ObdnqYCFs6QolAz4wrphlF1QS6lWhQBXI
May 8 00:09:35.165704 sshd-session[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:09:35.171761 systemd-logind[1492]: New session 15 of user core.
May 8 00:09:35.182920 systemd[1]: Started session-15.scope - Session 15 of User core.
May 8 00:09:35.311369 sshd[5727]: Connection closed by 10.0.0.1 port 49726
May 8 00:09:35.311816 sshd-session[5725]: pam_unix(sshd:session): session closed for user core
May 8 00:09:35.316307 systemd[1]: sshd@14-10.0.0.79:22-10.0.0.1:49726.service: Deactivated successfully.
May 8 00:09:35.318779 systemd[1]: session-15.scope: Deactivated successfully.
May 8 00:09:35.319499 systemd-logind[1492]: Session 15 logged out. Waiting for processes to exit.
May 8 00:09:35.320532 systemd-logind[1492]: Removed session 15.
May 8 00:09:38.632078 containerd[1505]: time="2025-05-08T00:09:38.632027945Z" level=info msg="StopPodSandbox for \"9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce\""
May 8 00:09:38.632562 containerd[1505]: time="2025-05-08T00:09:38.632176902Z" level=info msg="TearDown network for sandbox \"9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce\" successfully"
May 8 00:09:38.632562 containerd[1505]: time="2025-05-08T00:09:38.632231237Z" level=info msg="StopPodSandbox for \"9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce\" returns successfully"
May 8 00:09:38.640228 containerd[1505]: time="2025-05-08T00:09:38.640179762Z" level=info msg="RemovePodSandbox for \"9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce\""
May 8 00:09:38.653467 containerd[1505]: time="2025-05-08T00:09:38.653397624Z" level=info msg="Forcibly stopping sandbox \"9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce\""
May 8 00:09:38.653642 containerd[1505]: time="2025-05-08T00:09:38.653569666Z" level=info msg="TearDown network for sandbox \"9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce\" successfully"
May 8 00:09:38.666160 containerd[1505]: time="2025-05-08T00:09:38.666093859Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:09:38.666408 containerd[1505]: time="2025-05-08T00:09:38.666202399Z" level=info msg="RemovePodSandbox \"9c2441adda06194cc20faa84047609addf6d630a15a5e0244fd7475e04d75cce\" returns successfully"
May 8 00:09:38.666893 containerd[1505]: time="2025-05-08T00:09:38.666863433Z" level=info msg="StopPodSandbox for \"1ba710607968908ebb58526d0d4f5246298a90d1ced3694b2a7eb4c7fdfd43ba\""
May 8 00:09:38.667018 containerd[1505]: time="2025-05-08T00:09:38.666994947Z" level=info msg="TearDown network for sandbox \"1ba710607968908ebb58526d0d4f5246298a90d1ced3694b2a7eb4c7fdfd43ba\" successfully"
May 8 00:09:38.667065 containerd[1505]: time="2025-05-08T00:09:38.667017070Z" level=info msg="StopPodSandbox for \"1ba710607968908ebb58526d0d4f5246298a90d1ced3694b2a7eb4c7fdfd43ba\" returns successfully"
May 8 00:09:38.667614 containerd[1505]: time="2025-05-08T00:09:38.667430017Z" level=info msg="RemovePodSandbox for \"1ba710607968908ebb58526d0d4f5246298a90d1ced3694b2a7eb4c7fdfd43ba\""
May 8 00:09:38.667614 containerd[1505]: time="2025-05-08T00:09:38.667467069Z" level=info msg="Forcibly stopping sandbox \"1ba710607968908ebb58526d0d4f5246298a90d1ced3694b2a7eb4c7fdfd43ba\""
May 8 00:09:38.667740 containerd[1505]: time="2025-05-08T00:09:38.667557694Z" level=info msg="TearDown network for sandbox \"1ba710607968908ebb58526d0d4f5246298a90d1ced3694b2a7eb4c7fdfd43ba\" successfully"
May 8 00:09:38.672265 containerd[1505]: time="2025-05-08T00:09:38.672222383Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1ba710607968908ebb58526d0d4f5246298a90d1ced3694b2a7eb4c7fdfd43ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:09:38.672331 containerd[1505]: time="2025-05-08T00:09:38.672274944Z" level=info msg="RemovePodSandbox \"1ba710607968908ebb58526d0d4f5246298a90d1ced3694b2a7eb4c7fdfd43ba\" returns successfully"
May 8 00:09:38.672639 containerd[1505]: time="2025-05-08T00:09:38.672604751Z" level=info msg="StopPodSandbox for \"fdd6da4d20ec4df27f055121036bf7c97190962217a5cdf5866291e449fb9d3a\""
May 8 00:09:38.672795 containerd[1505]: time="2025-05-08T00:09:38.672731795Z" level=info msg="TearDown network for sandbox \"fdd6da4d20ec4df27f055121036bf7c97190962217a5cdf5866291e449fb9d3a\" successfully"
May 8 00:09:38.672795 containerd[1505]: time="2025-05-08T00:09:38.672783926Z" level=info msg="StopPodSandbox for \"fdd6da4d20ec4df27f055121036bf7c97190962217a5cdf5866291e449fb9d3a\" returns successfully"
May 8 00:09:38.673123 containerd[1505]: time="2025-05-08T00:09:38.673072664Z" level=info msg="RemovePodSandbox for \"fdd6da4d20ec4df27f055121036bf7c97190962217a5cdf5866291e449fb9d3a\""
May 8 00:09:38.673123 containerd[1505]: time="2025-05-08T00:09:38.673100497Z" level=info msg="Forcibly stopping sandbox \"fdd6da4d20ec4df27f055121036bf7c97190962217a5cdf5866291e449fb9d3a\""
May 8 00:09:38.673243 containerd[1505]: time="2025-05-08T00:09:38.673201292Z" level=info msg="TearDown network for sandbox \"fdd6da4d20ec4df27f055121036bf7c97190962217a5cdf5866291e449fb9d3a\" successfully"
May 8 00:09:38.677942 containerd[1505]: time="2025-05-08T00:09:38.677899576Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fdd6da4d20ec4df27f055121036bf7c97190962217a5cdf5866291e449fb9d3a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:09:38.678002 containerd[1505]: time="2025-05-08T00:09:38.677953600Z" level=info msg="RemovePodSandbox \"fdd6da4d20ec4df27f055121036bf7c97190962217a5cdf5866291e449fb9d3a\" returns successfully"
May 8 00:09:38.678351 containerd[1505]: time="2025-05-08T00:09:38.678310719Z" level=info msg="StopPodSandbox for \"e0740c57053fe83f05b37b28257f0d6e54fe96268696d30bb86bfa3886ae20cb\""
May 8 00:09:38.678443 containerd[1505]: time="2025-05-08T00:09:38.678419619Z" level=info msg="TearDown network for sandbox \"e0740c57053fe83f05b37b28257f0d6e54fe96268696d30bb86bfa3886ae20cb\" successfully"
May 8 00:09:38.678443 containerd[1505]: time="2025-05-08T00:09:38.678438766Z" level=info msg="StopPodSandbox for \"e0740c57053fe83f05b37b28257f0d6e54fe96268696d30bb86bfa3886ae20cb\" returns successfully"
May 8 00:09:38.678871 containerd[1505]: time="2025-05-08T00:09:38.678816655Z" level=info msg="RemovePodSandbox for \"e0740c57053fe83f05b37b28257f0d6e54fe96268696d30bb86bfa3886ae20cb\""
May 8 00:09:38.678871 containerd[1505]: time="2025-05-08T00:09:38.678845722Z" level=info msg="Forcibly stopping sandbox \"e0740c57053fe83f05b37b28257f0d6e54fe96268696d30bb86bfa3886ae20cb\""
May 8 00:09:38.678964 containerd[1505]: time="2025-05-08T00:09:38.678930134Z" level=info msg="TearDown network for sandbox \"e0740c57053fe83f05b37b28257f0d6e54fe96268696d30bb86bfa3886ae20cb\" successfully"
May 8 00:09:38.683831 containerd[1505]: time="2025-05-08T00:09:38.683781284Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e0740c57053fe83f05b37b28257f0d6e54fe96268696d30bb86bfa3886ae20cb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:09:38.683831 containerd[1505]: time="2025-05-08T00:09:38.683820770Z" level=info msg="RemovePodSandbox \"e0740c57053fe83f05b37b28257f0d6e54fe96268696d30bb86bfa3886ae20cb\" returns successfully"
May 8 00:09:38.684147 containerd[1505]: time="2025-05-08T00:09:38.684109166Z" level=info msg="StopPodSandbox for \"e57f8a6ac7b56a5028e70fc59134f6c8496be523a151d4a628eb619a8f40edba\""
May 8 00:09:38.684239 containerd[1505]: time="2025-05-08T00:09:38.684218297Z" level=info msg="TearDown network for sandbox \"e57f8a6ac7b56a5028e70fc59134f6c8496be523a151d4a628eb619a8f40edba\" successfully"
May 8 00:09:38.684239 containerd[1505]: time="2025-05-08T00:09:38.684234598Z" level=info msg="StopPodSandbox for \"e57f8a6ac7b56a5028e70fc59134f6c8496be523a151d4a628eb619a8f40edba\" returns successfully"
May 8 00:09:38.684669 containerd[1505]: time="2025-05-08T00:09:38.684622727Z" level=info msg="RemovePodSandbox for \"e57f8a6ac7b56a5028e70fc59134f6c8496be523a151d4a628eb619a8f40edba\""
May 8 00:09:38.684727 containerd[1505]: time="2025-05-08T00:09:38.684678224Z" level=info msg="Forcibly stopping sandbox \"e57f8a6ac7b56a5028e70fc59134f6c8496be523a151d4a628eb619a8f40edba\""
May 8 00:09:38.684880 containerd[1505]: time="2025-05-08T00:09:38.684822643Z" level=info msg="TearDown network for sandbox \"e57f8a6ac7b56a5028e70fc59134f6c8496be523a151d4a628eb619a8f40edba\" successfully"
May 8 00:09:38.689793 containerd[1505]: time="2025-05-08T00:09:38.689742725Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e57f8a6ac7b56a5028e70fc59134f6c8496be523a151d4a628eb619a8f40edba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:09:38.689962 containerd[1505]: time="2025-05-08T00:09:38.689816417Z" level=info msg="RemovePodSandbox \"e57f8a6ac7b56a5028e70fc59134f6c8496be523a151d4a628eb619a8f40edba\" returns successfully"
May 8 00:09:38.690310 containerd[1505]: time="2025-05-08T00:09:38.690159248Z" level=info msg="StopPodSandbox for \"b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046\""
May 8 00:09:38.690310 containerd[1505]: time="2025-05-08T00:09:38.690256286Z" level=info msg="TearDown network for sandbox \"b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046\" successfully"
May 8 00:09:38.690310 containerd[1505]: time="2025-05-08T00:09:38.690271355Z" level=info msg="StopPodSandbox for \"b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046\" returns successfully"
May 8 00:09:38.690549 containerd[1505]: time="2025-05-08T00:09:38.690526016Z" level=info msg="RemovePodSandbox for \"b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046\""
May 8 00:09:38.690637 containerd[1505]: time="2025-05-08T00:09:38.690552327Z" level=info msg="Forcibly stopping sandbox \"b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046\""
May 8 00:09:38.690703 containerd[1505]: time="2025-05-08T00:09:38.690649826Z" level=info msg="TearDown network for sandbox \"b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046\" successfully"
May 8 00:09:38.694920 containerd[1505]: time="2025-05-08T00:09:38.694882191Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:09:38.694995 containerd[1505]: time="2025-05-08T00:09:38.694921066Z" level=info msg="RemovePodSandbox \"b03d67240de51fa8fc0eb1186228d56f83ff270a31fc55b4c27b3852a071d046\" returns successfully"
May 8 00:09:38.695263 containerd[1505]: time="2025-05-08T00:09:38.695236965Z" level=info msg="StopPodSandbox for \"2fb76e3590e1896ad63800ee92649bab322bb9f1d0aaed8344912a62e5f4accf\""
May 8 00:09:38.695360 containerd[1505]: time="2025-05-08T00:09:38.695341487Z" level=info msg="TearDown network for sandbox \"2fb76e3590e1896ad63800ee92649bab322bb9f1d0aaed8344912a62e5f4accf\" successfully"
May 8 00:09:38.695391 containerd[1505]: time="2025-05-08T00:09:38.695359282Z" level=info msg="StopPodSandbox for \"2fb76e3590e1896ad63800ee92649bab322bb9f1d0aaed8344912a62e5f4accf\" returns successfully"
May 8 00:09:38.695697 containerd[1505]: time="2025-05-08T00:09:38.695637428Z" level=info msg="RemovePodSandbox for \"2fb76e3590e1896ad63800ee92649bab322bb9f1d0aaed8344912a62e5f4accf\""
May 8 00:09:38.695697 containerd[1505]: time="2025-05-08T00:09:38.695675061Z" level=info msg="Forcibly stopping sandbox \"2fb76e3590e1896ad63800ee92649bab322bb9f1d0aaed8344912a62e5f4accf\""
May 8 00:09:38.696050 containerd[1505]: time="2025-05-08T00:09:38.695762499Z" level=info msg="TearDown network for sandbox \"2fb76e3590e1896ad63800ee92649bab322bb9f1d0aaed8344912a62e5f4accf\" successfully"
May 8 00:09:38.700244 containerd[1505]: time="2025-05-08T00:09:38.700201945Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2fb76e3590e1896ad63800ee92649bab322bb9f1d0aaed8344912a62e5f4accf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:09:38.700302 containerd[1505]: time="2025-05-08T00:09:38.700253694Z" level=info msg="RemovePodSandbox \"2fb76e3590e1896ad63800ee92649bab322bb9f1d0aaed8344912a62e5f4accf\" returns successfully"
May 8 00:09:38.700530 containerd[1505]: time="2025-05-08T00:09:38.700491653Z" level=info msg="StopPodSandbox for \"1bbdd8a2f6596e6a701daf7f298fdbe98e948568198f92d4322c89321c4cd110\""
May 8 00:09:38.700647 containerd[1505]: time="2025-05-08T00:09:38.700621534Z" level=info msg="TearDown network for sandbox \"1bbdd8a2f6596e6a701daf7f298fdbe98e948568198f92d4322c89321c4cd110\" successfully"
May 8 00:09:38.700647 containerd[1505]: time="2025-05-08T00:09:38.700641171Z" level=info msg="StopPodSandbox for \"1bbdd8a2f6596e6a701daf7f298fdbe98e948568198f92d4322c89321c4cd110\" returns successfully"
May 8 00:09:38.700943 containerd[1505]: time="2025-05-08T00:09:38.700915631Z" level=info msg="RemovePodSandbox for \"1bbdd8a2f6596e6a701daf7f298fdbe98e948568198f92d4322c89321c4cd110\""
May 8 00:09:38.700993 containerd[1505]: time="2025-05-08T00:09:38.700940419Z" level=info msg="Forcibly stopping sandbox \"1bbdd8a2f6596e6a701daf7f298fdbe98e948568198f92d4322c89321c4cd110\""
May 8 00:09:38.701066 containerd[1505]: time="2025-05-08T00:09:38.701026054Z" level=info msg="TearDown network for sandbox \"1bbdd8a2f6596e6a701daf7f298fdbe98e948568198f92d4322c89321c4cd110\" successfully"
May 8 00:09:38.705542 containerd[1505]: time="2025-05-08T00:09:38.705504023Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1bbdd8a2f6596e6a701daf7f298fdbe98e948568198f92d4322c89321c4cd110\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:09:38.705609 containerd[1505]: time="2025-05-08T00:09:38.705542819Z" level=info msg="RemovePodSandbox \"1bbdd8a2f6596e6a701daf7f298fdbe98e948568198f92d4322c89321c4cd110\" returns successfully"
May 8 00:09:38.705912 containerd[1505]: time="2025-05-08T00:09:38.705860322Z" level=info msg="StopPodSandbox for \"13acd4157037be1ff3f8af3f021c36bc8a063fe31fafa01f4e0fc636376bc660\""
May 8 00:09:38.706042 containerd[1505]: time="2025-05-08T00:09:38.705962949Z" level=info msg="TearDown network for sandbox \"13acd4157037be1ff3f8af3f021c36bc8a063fe31fafa01f4e0fc636376bc660\" successfully"
May 8 00:09:38.706042 containerd[1505]: time="2025-05-08T00:09:38.705975203Z" level=info msg="StopPodSandbox for \"13acd4157037be1ff3f8af3f021c36bc8a063fe31fafa01f4e0fc636376bc660\" returns successfully"
May 8 00:09:38.706293 containerd[1505]: time="2025-05-08T00:09:38.706258770Z" level=info msg="RemovePodSandbox for \"13acd4157037be1ff3f8af3f021c36bc8a063fe31fafa01f4e0fc636376bc660\""
May 8 00:09:38.706329 containerd[1505]: time="2025-05-08T00:09:38.706294619Z" level=info msg="Forcibly stopping sandbox \"13acd4157037be1ff3f8af3f021c36bc8a063fe31fafa01f4e0fc636376bc660\""
May 8 00:09:38.706415 containerd[1505]: time="2025-05-08T00:09:38.706379763Z" level=info msg="TearDown network for sandbox \"13acd4157037be1ff3f8af3f021c36bc8a063fe31fafa01f4e0fc636376bc660\" successfully"
May 8 00:09:38.710267 containerd[1505]: time="2025-05-08T00:09:38.710236885Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"13acd4157037be1ff3f8af3f021c36bc8a063fe31fafa01f4e0fc636376bc660\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:09:38.710340 containerd[1505]: time="2025-05-08T00:09:38.710276371Z" level=info msg="RemovePodSandbox \"13acd4157037be1ff3f8af3f021c36bc8a063fe31fafa01f4e0fc636376bc660\" returns successfully"
May 8 00:09:38.710636 containerd[1505]: time="2025-05-08T00:09:38.710577041Z" level=info msg="StopPodSandbox for \"4c204cb226d4da609e269577cbb2d6ebe762ce1fb1d06a5db690f645c2d5c762\""
May 8 00:09:38.710761 containerd[1505]: time="2025-05-08T00:09:38.710739295Z" level=info msg="TearDown network for sandbox \"4c204cb226d4da609e269577cbb2d6ebe762ce1fb1d06a5db690f645c2d5c762\" successfully"
May 8 00:09:38.710800 containerd[1505]: time="2025-05-08T00:09:38.710758442Z" level=info msg="StopPodSandbox for \"4c204cb226d4da609e269577cbb2d6ebe762ce1fb1d06a5db690f645c2d5c762\" returns successfully"
May 8 00:09:38.711021 containerd[1505]: time="2025-05-08T00:09:38.710982143Z" level=info msg="RemovePodSandbox for \"4c204cb226d4da609e269577cbb2d6ebe762ce1fb1d06a5db690f645c2d5c762\""
May 8 00:09:38.711021 containerd[1505]: time="2025-05-08T00:09:38.711009305Z" level=info msg="Forcibly stopping sandbox \"4c204cb226d4da609e269577cbb2d6ebe762ce1fb1d06a5db690f645c2d5c762\""
May 8 00:09:38.711129 containerd[1505]: time="2025-05-08T00:09:38.711084450Z" level=info msg="TearDown network for sandbox \"4c204cb226d4da609e269577cbb2d6ebe762ce1fb1d06a5db690f645c2d5c762\" successfully"
May 8 00:09:38.715063 containerd[1505]: time="2025-05-08T00:09:38.715016777Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c204cb226d4da609e269577cbb2d6ebe762ce1fb1d06a5db690f645c2d5c762\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:09:38.715063 containerd[1505]: time="2025-05-08T00:09:38.715059920Z" level=info msg="RemovePodSandbox \"4c204cb226d4da609e269577cbb2d6ebe762ce1fb1d06a5db690f645c2d5c762\" returns successfully"
May 8 00:09:38.715390 containerd[1505]: time="2025-05-08T00:09:38.715360210Z" level=info msg="StopPodSandbox for \"40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195\""
May 8 00:09:38.715491 containerd[1505]: time="2025-05-08T00:09:38.715466183Z" level=info msg="TearDown network for sandbox \"40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195\" successfully"
May 8 00:09:38.715491 containerd[1505]: time="2025-05-08T00:09:38.715481984Z" level=info msg="StopPodSandbox for \"40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195\" returns successfully"
May 8 00:09:38.715950 containerd[1505]: time="2025-05-08T00:09:38.715884872Z" level=info msg="RemovePodSandbox for \"40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195\""
May 8 00:09:38.715950 containerd[1505]: time="2025-05-08T00:09:38.715910361Z" level=info msg="Forcibly stopping sandbox \"40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195\""
May 8 00:09:38.716118 containerd[1505]: time="2025-05-08T00:09:38.715985526Z" level=info msg="TearDown network for sandbox \"40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195\" successfully"
May 8 00:09:38.720238 containerd[1505]: time="2025-05-08T00:09:38.720194706Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:09:38.720238 containerd[1505]: time="2025-05-08T00:09:38.720251255Z" level=info msg="RemovePodSandbox \"40fd39140f1ee9a6032e0032237f42628b0d94ae2e17ab411d4079bb59e0a195\" returns successfully"
May 8 00:09:38.720619 containerd[1505]: time="2025-05-08T00:09:38.720546365Z" level=info msg="StopPodSandbox for \"c458928b1f50c71b1ea5c82db22924cd269793941f36623a768c0b55ce0ab1f8\""
May 8 00:09:38.720697 containerd[1505]: time="2025-05-08T00:09:38.720664253Z" level=info msg="TearDown network for sandbox \"c458928b1f50c71b1ea5c82db22924cd269793941f36623a768c0b55ce0ab1f8\" successfully"
May 8 00:09:38.720697 containerd[1505]: time="2025-05-08T00:09:38.720677338Z" level=info msg="StopPodSandbox for \"c458928b1f50c71b1ea5c82db22924cd269793941f36623a768c0b55ce0ab1f8\" returns successfully"
May 8 00:09:38.720981 containerd[1505]: time="2025-05-08T00:09:38.720955194Z" level=info msg="RemovePodSandbox for \"c458928b1f50c71b1ea5c82db22924cd269793941f36623a768c0b55ce0ab1f8\""
May 8 00:09:38.720981 containerd[1505]: time="2025-05-08T00:09:38.720979792Z" level=info msg="Forcibly stopping sandbox \"c458928b1f50c71b1ea5c82db22924cd269793941f36623a768c0b55ce0ab1f8\""
May 8 00:09:38.721094 containerd[1505]: time="2025-05-08T00:09:38.721057702Z" level=info msg="TearDown network for sandbox \"c458928b1f50c71b1ea5c82db22924cd269793941f36623a768c0b55ce0ab1f8\" successfully"
May 8 00:09:38.725097 containerd[1505]: time="2025-05-08T00:09:38.725055835Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c458928b1f50c71b1ea5c82db22924cd269793941f36623a768c0b55ce0ab1f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:09:38.725173 containerd[1505]: time="2025-05-08T00:09:38.725111532Z" level=info msg="RemovePodSandbox \"c458928b1f50c71b1ea5c82db22924cd269793941f36623a768c0b55ce0ab1f8\" returns successfully"
May 8 00:09:38.725441 containerd[1505]: time="2025-05-08T00:09:38.725416000Z" level=info msg="StopPodSandbox for \"1d441a98e9a13d3b85839cd4fea0823d0cafc6e4b9a8fd62f7f0d4f65b1bfb05\""
May 8 00:09:38.725536 containerd[1505]: time="2025-05-08T00:09:38.725510944Z" level=info msg="TearDown network for sandbox \"1d441a98e9a13d3b85839cd4fea0823d0cafc6e4b9a8fd62f7f0d4f65b1bfb05\" successfully"
May 8 00:09:38.725536 containerd[1505]: time="2025-05-08T00:09:38.725523258Z" level=info msg="StopPodSandbox for \"1d441a98e9a13d3b85839cd4fea0823d0cafc6e4b9a8fd62f7f0d4f65b1bfb05\" returns successfully"
May 8 00:09:38.725818 containerd[1505]: time="2025-05-08T00:09:38.725793128Z" level=info msg="RemovePodSandbox for \"1d441a98e9a13d3b85839cd4fea0823d0cafc6e4b9a8fd62f7f0d4f65b1bfb05\""
May 8 00:09:38.725888 containerd[1505]: time="2025-05-08T00:09:38.725821864Z" level=info msg="Forcibly stopping sandbox \"1d441a98e9a13d3b85839cd4fea0823d0cafc6e4b9a8fd62f7f0d4f65b1bfb05\""
May 8 00:09:38.725954 containerd[1505]: time="2025-05-08T00:09:38.725904994Z" level=info msg="TearDown network for sandbox \"1d441a98e9a13d3b85839cd4fea0823d0cafc6e4b9a8fd62f7f0d4f65b1bfb05\" successfully"
May 8 00:09:38.729772 containerd[1505]: time="2025-05-08T00:09:38.729737157Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1d441a98e9a13d3b85839cd4fea0823d0cafc6e4b9a8fd62f7f0d4f65b1bfb05\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:09:38.729840 containerd[1505]: time="2025-05-08T00:09:38.729780931Z" level=info msg="RemovePodSandbox \"1d441a98e9a13d3b85839cd4fea0823d0cafc6e4b9a8fd62f7f0d4f65b1bfb05\" returns successfully" May 8 00:09:38.730083 containerd[1505]: time="2025-05-08T00:09:38.730055642Z" level=info msg="StopPodSandbox for \"8c63da5736498491a842025315c3885e9be8504c5605d90888bd11017e30aff3\"" May 8 00:09:38.730193 containerd[1505]: time="2025-05-08T00:09:38.730167898Z" level=info msg="TearDown network for sandbox \"8c63da5736498491a842025315c3885e9be8504c5605d90888bd11017e30aff3\" successfully" May 8 00:09:38.730193 containerd[1505]: time="2025-05-08T00:09:38.730184570Z" level=info msg="StopPodSandbox for \"8c63da5736498491a842025315c3885e9be8504c5605d90888bd11017e30aff3\" returns successfully" May 8 00:09:38.730499 containerd[1505]: time="2025-05-08T00:09:38.730472956Z" level=info msg="RemovePodSandbox for \"8c63da5736498491a842025315c3885e9be8504c5605d90888bd11017e30aff3\"" May 8 00:09:38.730537 containerd[1505]: time="2025-05-08T00:09:38.730502113Z" level=info msg="Forcibly stopping sandbox \"8c63da5736498491a842025315c3885e9be8504c5605d90888bd11017e30aff3\"" May 8 00:09:38.730625 containerd[1505]: time="2025-05-08T00:09:38.730575014Z" level=info msg="TearDown network for sandbox \"8c63da5736498491a842025315c3885e9be8504c5605d90888bd11017e30aff3\" successfully" May 8 00:09:38.734274 containerd[1505]: time="2025-05-08T00:09:38.734234474Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8c63da5736498491a842025315c3885e9be8504c5605d90888bd11017e30aff3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:09:38.734320 containerd[1505]: time="2025-05-08T00:09:38.734286234Z" level=info msg="RemovePodSandbox \"8c63da5736498491a842025315c3885e9be8504c5605d90888bd11017e30aff3\" returns successfully" May 8 00:09:38.734570 containerd[1505]: time="2025-05-08T00:09:38.734547898Z" level=info msg="StopPodSandbox for \"ef5994fc67f1a89100c26df9f9425d5b19bb3f85bd0f46b4491f35f38af8d835\"" May 8 00:09:38.734718 containerd[1505]: time="2025-05-08T00:09:38.734679933Z" level=info msg="TearDown network for sandbox \"ef5994fc67f1a89100c26df9f9425d5b19bb3f85bd0f46b4491f35f38af8d835\" successfully" May 8 00:09:38.734718 containerd[1505]: time="2025-05-08T00:09:38.734711564Z" level=info msg="StopPodSandbox for \"ef5994fc67f1a89100c26df9f9425d5b19bb3f85bd0f46b4491f35f38af8d835\" returns successfully" May 8 00:09:38.735022 containerd[1505]: time="2025-05-08T00:09:38.734990753Z" level=info msg="RemovePodSandbox for \"ef5994fc67f1a89100c26df9f9425d5b19bb3f85bd0f46b4491f35f38af8d835\"" May 8 00:09:38.735071 containerd[1505]: time="2025-05-08T00:09:38.735022244Z" level=info msg="Forcibly stopping sandbox \"ef5994fc67f1a89100c26df9f9425d5b19bb3f85bd0f46b4491f35f38af8d835\"" May 8 00:09:38.735142 containerd[1505]: time="2025-05-08T00:09:38.735103971Z" level=info msg="TearDown network for sandbox \"ef5994fc67f1a89100c26df9f9425d5b19bb3f85bd0f46b4491f35f38af8d835\" successfully" May 8 00:09:38.739334 containerd[1505]: time="2025-05-08T00:09:38.739294546Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ef5994fc67f1a89100c26df9f9425d5b19bb3f85bd0f46b4491f35f38af8d835\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:09:38.739393 containerd[1505]: time="2025-05-08T00:09:38.739340536Z" level=info msg="RemovePodSandbox \"ef5994fc67f1a89100c26df9f9425d5b19bb3f85bd0f46b4491f35f38af8d835\" returns successfully" May 8 00:09:38.739626 containerd[1505]: time="2025-05-08T00:09:38.739603733Z" level=info msg="StopPodSandbox for \"6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db\"" May 8 00:09:38.739723 containerd[1505]: time="2025-05-08T00:09:38.739706260Z" level=info msg="TearDown network for sandbox \"6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db\" successfully" May 8 00:09:38.739754 containerd[1505]: time="2025-05-08T00:09:38.739721370Z" level=info msg="StopPodSandbox for \"6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db\" returns successfully" May 8 00:09:38.740015 containerd[1505]: time="2025-05-08T00:09:38.739991692Z" level=info msg="RemovePodSandbox for \"6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db\"" May 8 00:09:38.740015 containerd[1505]: time="2025-05-08T00:09:38.740011310Z" level=info msg="Forcibly stopping sandbox \"6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db\"" May 8 00:09:38.740112 containerd[1505]: time="2025-05-08T00:09:38.740079390Z" level=info msg="TearDown network for sandbox \"6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db\" successfully" May 8 00:09:38.743819 containerd[1505]: time="2025-05-08T00:09:38.743788226Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:09:38.743931 containerd[1505]: time="2025-05-08T00:09:38.743825638Z" level=info msg="RemovePodSandbox \"6d44832847e5f79971ff3cb42c781b4468ffe6b551a6d25edc2d8e3d2bd0d9db\" returns successfully" May 8 00:09:38.744122 containerd[1505]: time="2025-05-08T00:09:38.744096210Z" level=info msg="StopPodSandbox for \"db91feb850b4e1f1d819229cd60b7db534413a9d78dc2dac7f000587024c68cc\"" May 8 00:09:38.744193 containerd[1505]: time="2025-05-08T00:09:38.744177457Z" level=info msg="TearDown network for sandbox \"db91feb850b4e1f1d819229cd60b7db534413a9d78dc2dac7f000587024c68cc\" successfully" May 8 00:09:38.744193 containerd[1505]: time="2025-05-08T00:09:38.744190512Z" level=info msg="StopPodSandbox for \"db91feb850b4e1f1d819229cd60b7db534413a9d78dc2dac7f000587024c68cc\" returns successfully" May 8 00:09:38.744392 containerd[1505]: time="2025-05-08T00:09:38.744373265Z" level=info msg="RemovePodSandbox for \"db91feb850b4e1f1d819229cd60b7db534413a9d78dc2dac7f000587024c68cc\"" May 8 00:09:38.744427 containerd[1505]: time="2025-05-08T00:09:38.744391861Z" level=info msg="Forcibly stopping sandbox \"db91feb850b4e1f1d819229cd60b7db534413a9d78dc2dac7f000587024c68cc\"" May 8 00:09:38.744483 containerd[1505]: time="2025-05-08T00:09:38.744454932Z" level=info msg="TearDown network for sandbox \"db91feb850b4e1f1d819229cd60b7db534413a9d78dc2dac7f000587024c68cc\" successfully" May 8 00:09:38.748197 containerd[1505]: time="2025-05-08T00:09:38.748152637Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db91feb850b4e1f1d819229cd60b7db534413a9d78dc2dac7f000587024c68cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:09:38.748197 containerd[1505]: time="2025-05-08T00:09:38.748193084Z" level=info msg="RemovePodSandbox \"db91feb850b4e1f1d819229cd60b7db534413a9d78dc2dac7f000587024c68cc\" returns successfully" May 8 00:09:38.748554 containerd[1505]: time="2025-05-08T00:09:38.748526498Z" level=info msg="StopPodSandbox for \"689f2b83f25ea02990567bd69e744cf6e584c080f266c411aaba192895a5ce6c\"" May 8 00:09:38.748660 containerd[1505]: time="2025-05-08T00:09:38.748635298Z" level=info msg="TearDown network for sandbox \"689f2b83f25ea02990567bd69e744cf6e584c080f266c411aaba192895a5ce6c\" successfully" May 8 00:09:38.748660 containerd[1505]: time="2025-05-08T00:09:38.748651690Z" level=info msg="StopPodSandbox for \"689f2b83f25ea02990567bd69e744cf6e584c080f266c411aaba192895a5ce6c\" returns successfully" May 8 00:09:38.749608 containerd[1505]: time="2025-05-08T00:09:38.748919326Z" level=info msg="RemovePodSandbox for \"689f2b83f25ea02990567bd69e744cf6e584c080f266c411aaba192895a5ce6c\"" May 8 00:09:38.749608 containerd[1505]: time="2025-05-08T00:09:38.748942350Z" level=info msg="Forcibly stopping sandbox \"689f2b83f25ea02990567bd69e744cf6e584c080f266c411aaba192895a5ce6c\"" May 8 00:09:38.749608 containerd[1505]: time="2025-05-08T00:09:38.749021282Z" level=info msg="TearDown network for sandbox \"689f2b83f25ea02990567bd69e744cf6e584c080f266c411aaba192895a5ce6c\" successfully" May 8 00:09:38.752727 containerd[1505]: time="2025-05-08T00:09:38.752696964Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"689f2b83f25ea02990567bd69e744cf6e584c080f266c411aaba192895a5ce6c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:09:38.752773 containerd[1505]: time="2025-05-08T00:09:38.752735659Z" level=info msg="RemovePodSandbox \"689f2b83f25ea02990567bd69e744cf6e584c080f266c411aaba192895a5ce6c\" returns successfully" May 8 00:09:38.753126 containerd[1505]: time="2025-05-08T00:09:38.753087628Z" level=info msg="StopPodSandbox for \"d772c7a0c77343cfd6b7b8001457c3fc80a71719d98f08ac8e8f09fa337c9452\"" May 8 00:09:38.753302 containerd[1505]: time="2025-05-08T00:09:38.753236585Z" level=info msg="TearDown network for sandbox \"d772c7a0c77343cfd6b7b8001457c3fc80a71719d98f08ac8e8f09fa337c9452\" successfully" May 8 00:09:38.753302 containerd[1505]: time="2025-05-08T00:09:38.753293014Z" level=info msg="StopPodSandbox for \"d772c7a0c77343cfd6b7b8001457c3fc80a71719d98f08ac8e8f09fa337c9452\" returns successfully" May 8 00:09:38.753629 containerd[1505]: time="2025-05-08T00:09:38.753607481Z" level=info msg="RemovePodSandbox for \"d772c7a0c77343cfd6b7b8001457c3fc80a71719d98f08ac8e8f09fa337c9452\"" May 8 00:09:38.753678 containerd[1505]: time="2025-05-08T00:09:38.753631417Z" level=info msg="Forcibly stopping sandbox \"d772c7a0c77343cfd6b7b8001457c3fc80a71719d98f08ac8e8f09fa337c9452\"" May 8 00:09:38.753756 containerd[1505]: time="2025-05-08T00:09:38.753714768Z" level=info msg="TearDown network for sandbox \"d772c7a0c77343cfd6b7b8001457c3fc80a71719d98f08ac8e8f09fa337c9452\" successfully" May 8 00:09:38.757395 containerd[1505]: time="2025-05-08T00:09:38.757374338Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d772c7a0c77343cfd6b7b8001457c3fc80a71719d98f08ac8e8f09fa337c9452\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:09:38.757479 containerd[1505]: time="2025-05-08T00:09:38.757409416Z" level=info msg="RemovePodSandbox \"d772c7a0c77343cfd6b7b8001457c3fc80a71719d98f08ac8e8f09fa337c9452\" returns successfully" May 8 00:09:38.757804 containerd[1505]: time="2025-05-08T00:09:38.757777997Z" level=info msg="StopPodSandbox for \"370ec9cd93d24149ed7966601d74dd5891ad53aba009412149339f104af553e9\"" May 8 00:09:38.757867 containerd[1505]: time="2025-05-08T00:09:38.757858111Z" level=info msg="TearDown network for sandbox \"370ec9cd93d24149ed7966601d74dd5891ad53aba009412149339f104af553e9\" successfully" May 8 00:09:38.757900 containerd[1505]: time="2025-05-08T00:09:38.757867189Z" level=info msg="StopPodSandbox for \"370ec9cd93d24149ed7966601d74dd5891ad53aba009412149339f104af553e9\" returns successfully" May 8 00:09:38.758196 containerd[1505]: time="2025-05-08T00:09:38.758147059Z" level=info msg="RemovePodSandbox for \"370ec9cd93d24149ed7966601d74dd5891ad53aba009412149339f104af553e9\"" May 8 00:09:38.758196 containerd[1505]: time="2025-05-08T00:09:38.758176976Z" level=info msg="Forcibly stopping sandbox \"370ec9cd93d24149ed7966601d74dd5891ad53aba009412149339f104af553e9\"" May 8 00:09:38.758333 containerd[1505]: time="2025-05-08T00:09:38.758289503Z" level=info msg="TearDown network for sandbox \"370ec9cd93d24149ed7966601d74dd5891ad53aba009412149339f104af553e9\" successfully" May 8 00:09:38.762766 containerd[1505]: time="2025-05-08T00:09:38.762723318Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"370ec9cd93d24149ed7966601d74dd5891ad53aba009412149339f104af553e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:09:38.762816 containerd[1505]: time="2025-05-08T00:09:38.762785328Z" level=info msg="RemovePodSandbox \"370ec9cd93d24149ed7966601d74dd5891ad53aba009412149339f104af553e9\" returns successfully" May 8 00:09:38.763179 containerd[1505]: time="2025-05-08T00:09:38.763146554Z" level=info msg="StopPodSandbox for \"bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149\"" May 8 00:09:38.763262 containerd[1505]: time="2025-05-08T00:09:38.763244864Z" level=info msg="TearDown network for sandbox \"bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149\" successfully" May 8 00:09:38.763262 containerd[1505]: time="2025-05-08T00:09:38.763260214Z" level=info msg="StopPodSandbox for \"bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149\" returns successfully" May 8 00:09:38.763545 containerd[1505]: time="2025-05-08T00:09:38.763518773Z" level=info msg="RemovePodSandbox for \"bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149\"" May 8 00:09:38.763601 containerd[1505]: time="2025-05-08T00:09:38.763545214Z" level=info msg="Forcibly stopping sandbox \"bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149\"" May 8 00:09:38.763692 containerd[1505]: time="2025-05-08T00:09:38.763648002Z" level=info msg="TearDown network for sandbox \"bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149\" successfully" May 8 00:09:38.767471 containerd[1505]: time="2025-05-08T00:09:38.767446731Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:09:38.767537 containerd[1505]: time="2025-05-08T00:09:38.767484323Z" level=info msg="RemovePodSandbox \"bac713aeaf9a5227c2cbe73a49bafd7fa45c29f3267eaf58e9d091f9f1298149\" returns successfully" May 8 00:09:38.767845 containerd[1505]: time="2025-05-08T00:09:38.767816945Z" level=info msg="StopPodSandbox for \"ed08d024f4f23bc0df2cd95f0f30fd56b3448e321104e173bfd266a44ce27ecd\"" May 8 00:09:38.767940 containerd[1505]: time="2025-05-08T00:09:38.767916397Z" level=info msg="TearDown network for sandbox \"ed08d024f4f23bc0df2cd95f0f30fd56b3448e321104e173bfd266a44ce27ecd\" successfully" May 8 00:09:38.767940 containerd[1505]: time="2025-05-08T00:09:38.767932598Z" level=info msg="StopPodSandbox for \"ed08d024f4f23bc0df2cd95f0f30fd56b3448e321104e173bfd266a44ce27ecd\" returns successfully" May 8 00:09:38.768184 containerd[1505]: time="2025-05-08T00:09:38.768160928Z" level=info msg="RemovePodSandbox for \"ed08d024f4f23bc0df2cd95f0f30fd56b3448e321104e173bfd266a44ce27ecd\"" May 8 00:09:38.768184 containerd[1505]: time="2025-05-08T00:09:38.768182862Z" level=info msg="Forcibly stopping sandbox \"ed08d024f4f23bc0df2cd95f0f30fd56b3448e321104e173bfd266a44ce27ecd\"" May 8 00:09:38.768285 containerd[1505]: time="2025-05-08T00:09:38.768250121Z" level=info msg="TearDown network for sandbox \"ed08d024f4f23bc0df2cd95f0f30fd56b3448e321104e173bfd266a44ce27ecd\" successfully" May 8 00:09:38.772062 containerd[1505]: time="2025-05-08T00:09:38.772032979Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ed08d024f4f23bc0df2cd95f0f30fd56b3448e321104e173bfd266a44ce27ecd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:09:38.772110 containerd[1505]: time="2025-05-08T00:09:38.772072265Z" level=info msg="RemovePodSandbox \"ed08d024f4f23bc0df2cd95f0f30fd56b3448e321104e173bfd266a44ce27ecd\" returns successfully" May 8 00:09:38.772324 containerd[1505]: time="2025-05-08T00:09:38.772302459Z" level=info msg="StopPodSandbox for \"4580374a8fc5a405a550fbe4016d5b18429feb252bf247314d044436b4fea856\"" May 8 00:09:38.772423 containerd[1505]: time="2025-05-08T00:09:38.772394727Z" level=info msg="TearDown network for sandbox \"4580374a8fc5a405a550fbe4016d5b18429feb252bf247314d044436b4fea856\" successfully" May 8 00:09:38.772423 containerd[1505]: time="2025-05-08T00:09:38.772412822Z" level=info msg="StopPodSandbox for \"4580374a8fc5a405a550fbe4016d5b18429feb252bf247314d044436b4fea856\" returns successfully" May 8 00:09:38.772837 containerd[1505]: time="2025-05-08T00:09:38.772812092Z" level=info msg="RemovePodSandbox for \"4580374a8fc5a405a550fbe4016d5b18429feb252bf247314d044436b4fea856\"" May 8 00:09:38.772894 containerd[1505]: time="2025-05-08T00:09:38.772837491Z" level=info msg="Forcibly stopping sandbox \"4580374a8fc5a405a550fbe4016d5b18429feb252bf247314d044436b4fea856\"" May 8 00:09:38.772946 containerd[1505]: time="2025-05-08T00:09:38.772915262Z" level=info msg="TearDown network for sandbox \"4580374a8fc5a405a550fbe4016d5b18429feb252bf247314d044436b4fea856\" successfully" May 8 00:09:38.776801 containerd[1505]: time="2025-05-08T00:09:38.776758827Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4580374a8fc5a405a550fbe4016d5b18429feb252bf247314d044436b4fea856\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:09:38.776801 containerd[1505]: time="2025-05-08T00:09:38.776799305Z" level=info msg="RemovePodSandbox \"4580374a8fc5a405a550fbe4016d5b18429feb252bf247314d044436b4fea856\" returns successfully" May 8 00:09:38.777101 containerd[1505]: time="2025-05-08T00:09:38.777075337Z" level=info msg="StopPodSandbox for \"e1ab5c5d66d3412b753305c048a3260a102a746c0fe5327e275e158c92b9e81a\"" May 8 00:09:38.777189 containerd[1505]: time="2025-05-08T00:09:38.777170962Z" level=info msg="TearDown network for sandbox \"e1ab5c5d66d3412b753305c048a3260a102a746c0fe5327e275e158c92b9e81a\" successfully" May 8 00:09:38.777189 containerd[1505]: time="2025-05-08T00:09:38.777186111Z" level=info msg="StopPodSandbox for \"e1ab5c5d66d3412b753305c048a3260a102a746c0fe5327e275e158c92b9e81a\" returns successfully" May 8 00:09:38.777489 containerd[1505]: time="2025-05-08T00:09:38.777449659Z" level=info msg="RemovePodSandbox for \"e1ab5c5d66d3412b753305c048a3260a102a746c0fe5327e275e158c92b9e81a\"" May 8 00:09:38.777489 containerd[1505]: time="2025-05-08T00:09:38.777473797Z" level=info msg="Forcibly stopping sandbox \"e1ab5c5d66d3412b753305c048a3260a102a746c0fe5327e275e158c92b9e81a\"" May 8 00:09:38.777655 containerd[1505]: time="2025-05-08T00:09:38.777548119Z" level=info msg="TearDown network for sandbox \"e1ab5c5d66d3412b753305c048a3260a102a746c0fe5327e275e158c92b9e81a\" successfully" May 8 00:09:38.781893 containerd[1505]: time="2025-05-08T00:09:38.781852044Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e1ab5c5d66d3412b753305c048a3260a102a746c0fe5327e275e158c92b9e81a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:09:38.781954 containerd[1505]: time="2025-05-08T00:09:38.781918481Z" level=info msg="RemovePodSandbox \"e1ab5c5d66d3412b753305c048a3260a102a746c0fe5327e275e158c92b9e81a\" returns successfully" May 8 00:09:38.782260 containerd[1505]: time="2025-05-08T00:09:38.782234611Z" level=info msg="StopPodSandbox for \"b9a6d0427d822b96f0593326e6e5026638316dbb32b4c55c1eb227dbe31fa107\"" May 8 00:09:38.782383 containerd[1505]: time="2025-05-08T00:09:38.782333453Z" level=info msg="TearDown network for sandbox \"b9a6d0427d822b96f0593326e6e5026638316dbb32b4c55c1eb227dbe31fa107\" successfully" May 8 00:09:38.782411 containerd[1505]: time="2025-05-08T00:09:38.782381845Z" level=info msg="StopPodSandbox for \"b9a6d0427d822b96f0593326e6e5026638316dbb32b4c55c1eb227dbe31fa107\" returns successfully" May 8 00:09:38.782677 containerd[1505]: time="2025-05-08T00:09:38.782647538Z" level=info msg="RemovePodSandbox for \"b9a6d0427d822b96f0593326e6e5026638316dbb32b4c55c1eb227dbe31fa107\"" May 8 00:09:38.782677 containerd[1505]: time="2025-05-08T00:09:38.782671364Z" level=info msg="Forcibly stopping sandbox \"b9a6d0427d822b96f0593326e6e5026638316dbb32b4c55c1eb227dbe31fa107\"" May 8 00:09:38.782776 containerd[1505]: time="2025-05-08T00:09:38.782744125Z" level=info msg="TearDown network for sandbox \"b9a6d0427d822b96f0593326e6e5026638316dbb32b4c55c1eb227dbe31fa107\" successfully" May 8 00:09:38.786647 containerd[1505]: time="2025-05-08T00:09:38.786609963Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b9a6d0427d822b96f0593326e6e5026638316dbb32b4c55c1eb227dbe31fa107\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:09:38.786730 containerd[1505]: time="2025-05-08T00:09:38.786656693Z" level=info msg="RemovePodSandbox \"b9a6d0427d822b96f0593326e6e5026638316dbb32b4c55c1eb227dbe31fa107\" returns successfully" May 8 00:09:38.787018 containerd[1505]: time="2025-05-08T00:09:38.786979696Z" level=info msg="StopPodSandbox for \"498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a\"" May 8 00:09:38.787118 containerd[1505]: time="2025-05-08T00:09:38.787096391Z" level=info msg="TearDown network for sandbox \"498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a\" successfully" May 8 00:09:38.787118 containerd[1505]: time="2025-05-08T00:09:38.787113424Z" level=info msg="StopPodSandbox for \"498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a\" returns successfully" May 8 00:09:38.787397 containerd[1505]: time="2025-05-08T00:09:38.787375199Z" level=info msg="RemovePodSandbox for \"498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a\"" May 8 00:09:38.787433 containerd[1505]: time="2025-05-08T00:09:38.787397622Z" level=info msg="Forcibly stopping sandbox \"498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a\"" May 8 00:09:38.787499 containerd[1505]: time="2025-05-08T00:09:38.787478859Z" level=info msg="TearDown network for sandbox \"498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a\" successfully" May 8 00:09:38.791529 containerd[1505]: time="2025-05-08T00:09:38.791484588Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:09:38.791578 containerd[1505]: time="2025-05-08T00:09:38.791542028Z" level=info msg="RemovePodSandbox \"498a4893e8a6af40fdea4c0bf8d91fdd5e74be2adce6f043e8c41f595608957a\" returns successfully" May 8 00:09:38.791822 containerd[1505]: time="2025-05-08T00:09:38.791793244Z" level=info msg="StopPodSandbox for \"9352f7ab0f10cb03bbf97ff42de8e754493ea08129846587bbe14a230d35b516\"" May 8 00:09:38.791898 containerd[1505]: time="2025-05-08T00:09:38.791880441Z" level=info msg="TearDown network for sandbox \"9352f7ab0f10cb03bbf97ff42de8e754493ea08129846587bbe14a230d35b516\" successfully" May 8 00:09:38.791898 containerd[1505]: time="2025-05-08T00:09:38.791894939Z" level=info msg="StopPodSandbox for \"9352f7ab0f10cb03bbf97ff42de8e754493ea08129846587bbe14a230d35b516\" returns successfully" May 8 00:09:38.792117 containerd[1505]: time="2025-05-08T00:09:38.792087831Z" level=info msg="RemovePodSandbox for \"9352f7ab0f10cb03bbf97ff42de8e754493ea08129846587bbe14a230d35b516\"" May 8 00:09:38.792117 containerd[1505]: time="2025-05-08T00:09:38.792107820Z" level=info msg="Forcibly stopping sandbox \"9352f7ab0f10cb03bbf97ff42de8e754493ea08129846587bbe14a230d35b516\"" May 8 00:09:38.792278 containerd[1505]: time="2025-05-08T00:09:38.792167485Z" level=info msg="TearDown network for sandbox \"9352f7ab0f10cb03bbf97ff42de8e754493ea08129846587bbe14a230d35b516\" successfully" May 8 00:09:38.795895 containerd[1505]: time="2025-05-08T00:09:38.795866692Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9352f7ab0f10cb03bbf97ff42de8e754493ea08129846587bbe14a230d35b516\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:09:38.795984 containerd[1505]: time="2025-05-08T00:09:38.795902641Z" level=info msg="RemovePodSandbox \"9352f7ab0f10cb03bbf97ff42de8e754493ea08129846587bbe14a230d35b516\" returns successfully" May 8 00:09:38.796189 containerd[1505]: time="2025-05-08T00:09:38.796170829Z" level=info msg="StopPodSandbox for \"7e387138ee2bc31c857e7ac7e57552e0f1614d16a7edcd55f4407e83fbd4e7a1\"" May 8 00:09:38.796280 containerd[1505]: time="2025-05-08T00:09:38.796262847Z" level=info msg="TearDown network for sandbox \"7e387138ee2bc31c857e7ac7e57552e0f1614d16a7edcd55f4407e83fbd4e7a1\" successfully" May 8 00:09:38.796280 containerd[1505]: time="2025-05-08T00:09:38.796277204Z" level=info msg="StopPodSandbox for \"7e387138ee2bc31c857e7ac7e57552e0f1614d16a7edcd55f4407e83fbd4e7a1\" returns successfully" May 8 00:09:38.796609 containerd[1505]: time="2025-05-08T00:09:38.796559188Z" level=info msg="RemovePodSandbox for \"7e387138ee2bc31c857e7ac7e57552e0f1614d16a7edcd55f4407e83fbd4e7a1\"" May 8 00:09:38.796676 containerd[1505]: time="2025-05-08T00:09:38.796628592Z" level=info msg="Forcibly stopping sandbox \"7e387138ee2bc31c857e7ac7e57552e0f1614d16a7edcd55f4407e83fbd4e7a1\"" May 8 00:09:38.796774 containerd[1505]: time="2025-05-08T00:09:38.796734286Z" level=info msg="TearDown network for sandbox \"7e387138ee2bc31c857e7ac7e57552e0f1614d16a7edcd55f4407e83fbd4e7a1\" successfully" May 8 00:09:38.800827 containerd[1505]: time="2025-05-08T00:09:38.800789991Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7e387138ee2bc31c857e7ac7e57552e0f1614d16a7edcd55f4407e83fbd4e7a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:09:38.800899 containerd[1505]: time="2025-05-08T00:09:38.800849666Z" level=info msg="RemovePodSandbox \"7e387138ee2bc31c857e7ac7e57552e0f1614d16a7edcd55f4407e83fbd4e7a1\" returns successfully"
May 8 00:09:38.801170 containerd[1505]: time="2025-05-08T00:09:38.801142611Z" level=info msg="StopPodSandbox for \"2de835992095b9fbf6441a761e32c68364d0ec0a012693336f911edf7c5bb032\""
May 8 00:09:38.801254 containerd[1505]: time="2025-05-08T00:09:38.801221133Z" level=info msg="TearDown network for sandbox \"2de835992095b9fbf6441a761e32c68364d0ec0a012693336f911edf7c5bb032\" successfully"
May 8 00:09:38.801254 containerd[1505]: time="2025-05-08T00:09:38.801231222Z" level=info msg="StopPodSandbox for \"2de835992095b9fbf6441a761e32c68364d0ec0a012693336f911edf7c5bb032\" returns successfully"
May 8 00:09:38.801562 containerd[1505]: time="2025-05-08T00:09:38.801530158Z" level=info msg="RemovePodSandbox for \"2de835992095b9fbf6441a761e32c68364d0ec0a012693336f911edf7c5bb032\""
May 8 00:09:38.801634 containerd[1505]: time="2025-05-08T00:09:38.801564274Z" level=info msg="Forcibly stopping sandbox \"2de835992095b9fbf6441a761e32c68364d0ec0a012693336f911edf7c5bb032\""
May 8 00:09:38.801732 containerd[1505]: time="2025-05-08T00:09:38.801670339Z" level=info msg="TearDown network for sandbox \"2de835992095b9fbf6441a761e32c68364d0ec0a012693336f911edf7c5bb032\" successfully"
May 8 00:09:38.806263 containerd[1505]: time="2025-05-08T00:09:38.806225999Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2de835992095b9fbf6441a761e32c68364d0ec0a012693336f911edf7c5bb032\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:09:38.806318 containerd[1505]: time="2025-05-08T00:09:38.806282808Z" level=info msg="RemovePodSandbox \"2de835992095b9fbf6441a761e32c68364d0ec0a012693336f911edf7c5bb032\" returns successfully"
May 8 00:09:38.806620 containerd[1505]: time="2025-05-08T00:09:38.806573298Z" level=info msg="StopPodSandbox for \"c6d2e3db71c4cf3281e38c6a06cc3c89328d2f643cddac8b9aa1f2619f56276a\""
May 8 00:09:38.806733 containerd[1505]: time="2025-05-08T00:09:38.806712086Z" level=info msg="TearDown network for sandbox \"c6d2e3db71c4cf3281e38c6a06cc3c89328d2f643cddac8b9aa1f2619f56276a\" successfully"
May 8 00:09:38.806758 containerd[1505]: time="2025-05-08T00:09:38.806731594Z" level=info msg="StopPodSandbox for \"c6d2e3db71c4cf3281e38c6a06cc3c89328d2f643cddac8b9aa1f2619f56276a\" returns successfully"
May 8 00:09:38.807112 containerd[1505]: time="2025-05-08T00:09:38.807059165Z" level=info msg="RemovePodSandbox for \"c6d2e3db71c4cf3281e38c6a06cc3c89328d2f643cddac8b9aa1f2619f56276a\""
May 8 00:09:38.807112 containerd[1505]: time="2025-05-08T00:09:38.807091438Z" level=info msg="Forcibly stopping sandbox \"c6d2e3db71c4cf3281e38c6a06cc3c89328d2f643cddac8b9aa1f2619f56276a\""
May 8 00:09:38.807226 containerd[1505]: time="2025-05-08T00:09:38.807188675Z" level=info msg="TearDown network for sandbox \"c6d2e3db71c4cf3281e38c6a06cc3c89328d2f643cddac8b9aa1f2619f56276a\" successfully"
May 8 00:09:38.811030 containerd[1505]: time="2025-05-08T00:09:38.810992305Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c6d2e3db71c4cf3281e38c6a06cc3c89328d2f643cddac8b9aa1f2619f56276a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:09:38.811068 containerd[1505]: time="2025-05-08T00:09:38.811040447Z" level=info msg="RemovePodSandbox \"c6d2e3db71c4cf3281e38c6a06cc3c89328d2f643cddac8b9aa1f2619f56276a\" returns successfully"
May 8 00:09:40.327120 systemd[1]: Started sshd@15-10.0.0.79:22-10.0.0.1:49766.service - OpenSSH per-connection server daemon (10.0.0.1:49766).
May 8 00:09:40.371866 sshd[5743]: Accepted publickey for core from 10.0.0.1 port 49766 ssh2: RSA SHA256:kwO0JqEIt1ObdnqYCFs6QolAz4wrphlF1QS6lWhQBXI
May 8 00:09:40.373745 sshd-session[5743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:09:40.378903 systemd-logind[1492]: New session 16 of user core.
May 8 00:09:40.389854 systemd[1]: Started session-16.scope - Session 16 of User core.
May 8 00:09:40.519872 sshd[5745]: Connection closed by 10.0.0.1 port 49766
May 8 00:09:40.520662 sshd-session[5743]: pam_unix(sshd:session): session closed for user core
May 8 00:09:40.530513 systemd[1]: sshd@15-10.0.0.79:22-10.0.0.1:49766.service: Deactivated successfully.
May 8 00:09:40.532690 systemd[1]: session-16.scope: Deactivated successfully.
May 8 00:09:40.534272 systemd-logind[1492]: Session 16 logged out. Waiting for processes to exit.
May 8 00:09:40.544097 systemd[1]: Started sshd@16-10.0.0.79:22-10.0.0.1:49772.service - OpenSSH per-connection server daemon (10.0.0.1:49772).
May 8 00:09:40.545395 systemd-logind[1492]: Removed session 16.
May 8 00:09:40.584396 sshd[5757]: Accepted publickey for core from 10.0.0.1 port 49772 ssh2: RSA SHA256:kwO0JqEIt1ObdnqYCFs6QolAz4wrphlF1QS6lWhQBXI
May 8 00:09:40.586460 sshd-session[5757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:09:40.591600 systemd-logind[1492]: New session 17 of user core.
May 8 00:09:40.598854 systemd[1]: Started session-17.scope - Session 17 of User core.
May 8 00:09:40.878177 sshd[5760]: Connection closed by 10.0.0.1 port 49772
May 8 00:09:40.878959 sshd-session[5757]: pam_unix(sshd:session): session closed for user core
May 8 00:09:40.890831 systemd[1]: sshd@16-10.0.0.79:22-10.0.0.1:49772.service: Deactivated successfully.
May 8 00:09:40.893428 systemd[1]: session-17.scope: Deactivated successfully.
May 8 00:09:40.894387 systemd-logind[1492]: Session 17 logged out. Waiting for processes to exit.
May 8 00:09:40.907891 systemd[1]: Started sshd@17-10.0.0.79:22-10.0.0.1:49782.service - OpenSSH per-connection server daemon (10.0.0.1:49782).
May 8 00:09:40.909056 systemd-logind[1492]: Removed session 17.
May 8 00:09:40.948435 sshd[5771]: Accepted publickey for core from 10.0.0.1 port 49782 ssh2: RSA SHA256:kwO0JqEIt1ObdnqYCFs6QolAz4wrphlF1QS6lWhQBXI
May 8 00:09:40.950088 sshd-session[5771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:09:40.954746 systemd-logind[1492]: New session 18 of user core.
May 8 00:09:40.963806 systemd[1]: Started session-18.scope - Session 18 of User core.
May 8 00:09:42.550506 sshd[5774]: Connection closed by 10.0.0.1 port 49782
May 8 00:09:42.551553 sshd-session[5771]: pam_unix(sshd:session): session closed for user core
May 8 00:09:42.561020 systemd[1]: sshd@17-10.0.0.79:22-10.0.0.1:49782.service: Deactivated successfully.
May 8 00:09:42.567256 systemd[1]: session-18.scope: Deactivated successfully.
May 8 00:09:42.567721 systemd[1]: session-18.scope: Consumed 666ms CPU time, 69M memory peak.
May 8 00:09:42.568639 systemd-logind[1492]: Session 18 logged out. Waiting for processes to exit.
May 8 00:09:42.581101 systemd[1]: Started sshd@18-10.0.0.79:22-10.0.0.1:49784.service - OpenSSH per-connection server daemon (10.0.0.1:49784).
May 8 00:09:42.584159 systemd-logind[1492]: Removed session 18.
May 8 00:09:42.622998 sshd[5806]: Accepted publickey for core from 10.0.0.1 port 49784 ssh2: RSA SHA256:kwO0JqEIt1ObdnqYCFs6QolAz4wrphlF1QS6lWhQBXI
May 8 00:09:42.625111 sshd-session[5806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:09:42.630491 systemd-logind[1492]: New session 19 of user core.
May 8 00:09:42.640874 systemd[1]: Started session-19.scope - Session 19 of User core.
May 8 00:09:43.045137 sshd[5809]: Connection closed by 10.0.0.1 port 49784
May 8 00:09:43.048943 sshd-session[5806]: pam_unix(sshd:session): session closed for user core
May 8 00:09:43.062989 systemd[1]: sshd@18-10.0.0.79:22-10.0.0.1:49784.service: Deactivated successfully.
May 8 00:09:43.068447 systemd[1]: session-19.scope: Deactivated successfully.
May 8 00:09:43.071867 systemd-logind[1492]: Session 19 logged out. Waiting for processes to exit.
May 8 00:09:43.079066 systemd[1]: Started sshd@19-10.0.0.79:22-10.0.0.1:49794.service - OpenSSH per-connection server daemon (10.0.0.1:49794).
May 8 00:09:43.081979 systemd-logind[1492]: Removed session 19.
May 8 00:09:43.118651 sshd[5819]: Accepted publickey for core from 10.0.0.1 port 49794 ssh2: RSA SHA256:kwO0JqEIt1ObdnqYCFs6QolAz4wrphlF1QS6lWhQBXI
May 8 00:09:43.121106 sshd-session[5819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:09:43.128885 systemd-logind[1492]: New session 20 of user core.
May 8 00:09:43.139979 systemd[1]: Started session-20.scope - Session 20 of User core.
May 8 00:09:43.277784 sshd[5822]: Connection closed by 10.0.0.1 port 49794
May 8 00:09:43.278246 sshd-session[5819]: pam_unix(sshd:session): session closed for user core
May 8 00:09:43.283567 systemd[1]: sshd@19-10.0.0.79:22-10.0.0.1:49794.service: Deactivated successfully.
May 8 00:09:43.286308 systemd[1]: session-20.scope: Deactivated successfully.
May 8 00:09:43.287078 systemd-logind[1492]: Session 20 logged out. Waiting for processes to exit.
May 8 00:09:43.288144 systemd-logind[1492]: Removed session 20.
May 8 00:09:48.300111 systemd[1]: Started sshd@20-10.0.0.79:22-10.0.0.1:49538.service - OpenSSH per-connection server daemon (10.0.0.1:49538).
May 8 00:09:48.345919 sshd[5858]: Accepted publickey for core from 10.0.0.1 port 49538 ssh2: RSA SHA256:kwO0JqEIt1ObdnqYCFs6QolAz4wrphlF1QS6lWhQBXI
May 8 00:09:48.347721 sshd-session[5858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:09:48.352702 systemd-logind[1492]: New session 21 of user core.
May 8 00:09:48.362735 systemd[1]: Started session-21.scope - Session 21 of User core.
May 8 00:09:48.479123 sshd[5860]: Connection closed by 10.0.0.1 port 49538
May 8 00:09:48.479548 sshd-session[5858]: pam_unix(sshd:session): session closed for user core
May 8 00:09:48.483934 systemd[1]: sshd@20-10.0.0.79:22-10.0.0.1:49538.service: Deactivated successfully.
May 8 00:09:48.486406 systemd[1]: session-21.scope: Deactivated successfully.
May 8 00:09:48.487349 systemd-logind[1492]: Session 21 logged out. Waiting for processes to exit.
May 8 00:09:48.488637 systemd-logind[1492]: Removed session 21.
May 8 00:09:50.639744 kubelet[2608]: E0508 00:09:50.639694 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:09:53.498013 systemd[1]: Started sshd@21-10.0.0.79:22-10.0.0.1:49590.service - OpenSSH per-connection server daemon (10.0.0.1:49590).
May 8 00:09:53.538722 sshd[5884]: Accepted publickey for core from 10.0.0.1 port 49590 ssh2: RSA SHA256:kwO0JqEIt1ObdnqYCFs6QolAz4wrphlF1QS6lWhQBXI
May 8 00:09:53.540444 sshd-session[5884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:09:53.545408 systemd-logind[1492]: New session 22 of user core.
May 8 00:09:53.553782 systemd[1]: Started session-22.scope - Session 22 of User core.
May 8 00:09:53.671055 sshd[5886]: Connection closed by 10.0.0.1 port 49590
May 8 00:09:53.671480 sshd-session[5884]: pam_unix(sshd:session): session closed for user core
May 8 00:09:53.676267 systemd[1]: sshd@21-10.0.0.79:22-10.0.0.1:49590.service: Deactivated successfully.
May 8 00:09:53.678911 systemd[1]: session-22.scope: Deactivated successfully.
May 8 00:09:53.679760 systemd-logind[1492]: Session 22 logged out. Waiting for processes to exit.
May 8 00:09:53.680954 systemd-logind[1492]: Removed session 22.
May 8 00:09:56.791267 kubelet[2608]: I0508 00:09:56.791207 2608 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 8 00:09:58.696322 systemd[1]: Started sshd@22-10.0.0.79:22-10.0.0.1:58660.service - OpenSSH per-connection server daemon (10.0.0.1:58660).
May 8 00:09:58.737323 sshd[5902]: Accepted publickey for core from 10.0.0.1 port 58660 ssh2: RSA SHA256:kwO0JqEIt1ObdnqYCFs6QolAz4wrphlF1QS6lWhQBXI
May 8 00:09:58.739224 sshd-session[5902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:09:58.744406 systemd-logind[1492]: New session 23 of user core.
May 8 00:09:58.749914 systemd[1]: Started session-23.scope - Session 23 of User core.
May 8 00:09:58.874988 sshd[5904]: Connection closed by 10.0.0.1 port 58660
May 8 00:09:58.875559 sshd-session[5902]: pam_unix(sshd:session): session closed for user core
May 8 00:09:58.880000 systemd[1]: sshd@22-10.0.0.79:22-10.0.0.1:58660.service: Deactivated successfully.
May 8 00:09:58.882866 systemd[1]: session-23.scope: Deactivated successfully.
May 8 00:09:58.883741 systemd-logind[1492]: Session 23 logged out. Waiting for processes to exit.
May 8 00:09:58.885168 systemd-logind[1492]: Removed session 23.
May 8 00:10:01.928877 kubelet[2608]: E0508 00:10:01.928841 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:10:03.890286 systemd[1]: Started sshd@23-10.0.0.79:22-10.0.0.1:58670.service - OpenSSH per-connection server daemon (10.0.0.1:58670).
May 8 00:10:03.942332 sshd[5959]: Accepted publickey for core from 10.0.0.1 port 58670 ssh2: RSA SHA256:kwO0JqEIt1ObdnqYCFs6QolAz4wrphlF1QS6lWhQBXI
May 8 00:10:03.944405 sshd-session[5959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:10:03.949885 systemd-logind[1492]: New session 24 of user core.
May 8 00:10:03.957744 systemd[1]: Started session-24.scope - Session 24 of User core.
May 8 00:10:04.083278 sshd[5961]: Connection closed by 10.0.0.1 port 58670
May 8 00:10:04.083735 sshd-session[5959]: pam_unix(sshd:session): session closed for user core
May 8 00:10:04.089417 systemd[1]: sshd@23-10.0.0.79:22-10.0.0.1:58670.service: Deactivated successfully.
May 8 00:10:04.092099 systemd[1]: session-24.scope: Deactivated successfully.
May 8 00:10:04.092968 systemd-logind[1492]: Session 24 logged out. Waiting for processes to exit.
May 8 00:10:04.093930 systemd-logind[1492]: Removed session 24.