May 8 00:39:38.911835 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed May 7 22:19:27 -00 2025 May 8 00:39:38.911857 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979 May 8 00:39:38.911865 kernel: BIOS-provided physical RAM map: May 8 00:39:38.911872 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable May 8 00:39:38.911877 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved May 8 00:39:38.911885 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 8 00:39:38.911892 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable May 8 00:39:38.911897 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved May 8 00:39:38.911903 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 8 00:39:38.911909 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved May 8 00:39:38.911915 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 8 00:39:38.911920 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 8 00:39:38.911926 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable May 8 00:39:38.911931 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 8 00:39:38.911940 kernel: NX (Execute Disable) protection: active May 8 00:39:38.911946 kernel: APIC: Static calls initialized May 8 00:39:38.911952 kernel: SMBIOS 2.8 present. 
May 8 00:39:38.911958 kernel: DMI: Linode Compute Instance, BIOS Not Specified May 8 00:39:38.911964 kernel: Hypervisor detected: KVM May 8 00:39:38.911972 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 8 00:39:38.911978 kernel: kvm-clock: using sched offset of 4605549430 cycles May 8 00:39:38.911984 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 8 00:39:38.911991 kernel: tsc: Detected 2000.002 MHz processor May 8 00:39:38.911997 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 8 00:39:38.912004 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 8 00:39:38.912010 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000 May 8 00:39:38.912017 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 8 00:39:38.912023 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 8 00:39:38.912051 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 May 8 00:39:38.912057 kernel: Using GB pages for direct mapping May 8 00:39:38.912063 kernel: ACPI: Early table checksum verification disabled May 8 00:39:38.912069 kernel: ACPI: RSDP 0x00000000000F51B0 000014 (v00 BOCHS ) May 8 00:39:38.912076 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:39:38.912082 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:39:38.912088 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:39:38.912094 kernel: ACPI: FACS 0x000000007FFE0000 000040 May 8 00:39:38.912100 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:39:38.912109 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:39:38.912115 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:39:38.912121 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:39:38.912131 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea] May 8 00:39:38.912137 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6] May 8 00:39:38.912144 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] May 8 00:39:38.912150 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a] May 8 00:39:38.912159 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2] May 8 00:39:38.912165 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de] May 8 00:39:38.912171 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306] May 8 00:39:38.912178 kernel: No NUMA configuration found May 8 00:39:38.912184 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] May 8 00:39:38.912190 kernel: NODE_DATA(0) allocated [mem 0x17fffa000-0x17fffffff] May 8 00:39:38.912197 kernel: Zone ranges: May 8 00:39:38.912203 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 8 00:39:38.912212 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 8 00:39:38.912218 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] May 8 00:39:38.912225 kernel: Movable zone start for each node May 8 00:39:38.912231 kernel: Early memory node ranges May 8 00:39:38.912237 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 8 00:39:38.912244 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] May 8 00:39:38.912250 kernel: node 0: [mem 
0x0000000100000000-0x000000017fffffff] May 8 00:39:38.912256 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] May 8 00:39:38.912263 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 8 00:39:38.912271 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 8 00:39:38.912278 kernel: On node 0, zone Normal: 35 pages in unavailable ranges May 8 00:39:38.912284 kernel: ACPI: PM-Timer IO Port: 0x608 May 8 00:39:38.912290 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 8 00:39:38.912297 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 8 00:39:38.912303 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 8 00:39:38.912310 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 8 00:39:38.912316 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 8 00:39:38.912322 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 8 00:39:38.912331 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 8 00:39:38.912337 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 8 00:39:38.912344 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 8 00:39:38.912350 kernel: TSC deadline timer available May 8 00:39:38.912357 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 8 00:39:38.912364 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 8 00:39:38.912370 kernel: kvm-guest: KVM setup pv remote TLB flush May 8 00:39:38.912376 kernel: kvm-guest: setup PV sched yield May 8 00:39:38.912383 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices May 8 00:39:38.912391 kernel: Booting paravirtualized kernel on KVM May 8 00:39:38.912398 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 8 00:39:38.912404 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 8 00:39:38.912411 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576 May 8 00:39:38.912417 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 May 8 00:39:38.912423 kernel: pcpu-alloc: [0] 0 1 May 8 00:39:38.912429 kernel: kvm-guest: PV spinlocks enabled May 8 00:39:38.912436 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 8 00:39:38.912443 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979 May 8 00:39:38.912452 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 8 00:39:38.912459 kernel: random: crng init done May 8 00:39:38.912465 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 8 00:39:38.912472 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 8 00:39:38.912478 kernel: Fallback order for Node 0: 0 May 8 00:39:38.912484 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 May 8 00:39:38.912491 kernel: Policy zone: Normal May 8 00:39:38.912497 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 8 00:39:38.912505 kernel: software IO TLB: area num 2. 
May 8 00:39:38.912512 kernel: Memory: 3964164K/4193772K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 229348K reserved, 0K cma-reserved) May 8 00:39:38.912518 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 8 00:39:38.912525 kernel: ftrace: allocating 37918 entries in 149 pages May 8 00:39:38.912531 kernel: ftrace: allocated 149 pages with 4 groups May 8 00:39:38.912537 kernel: Dynamic Preempt: voluntary May 8 00:39:38.912544 kernel: rcu: Preemptible hierarchical RCU implementation. May 8 00:39:38.912555 kernel: rcu: RCU event tracing is enabled. May 8 00:39:38.912562 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 8 00:39:38.912571 kernel: Trampoline variant of Tasks RCU enabled. May 8 00:39:38.912578 kernel: Rude variant of Tasks RCU enabled. May 8 00:39:38.912584 kernel: Tracing variant of Tasks RCU enabled. May 8 00:39:38.912591 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 8 00:39:38.912597 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 8 00:39:38.912603 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 8 00:39:38.912609 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 8 00:39:38.912616 kernel: Console: colour VGA+ 80x25 May 8 00:39:38.912622 kernel: printk: console [tty0] enabled May 8 00:39:38.912630 kernel: printk: console [ttyS0] enabled May 8 00:39:38.912637 kernel: ACPI: Core revision 20230628 May 8 00:39:38.912643 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 8 00:39:38.912650 kernel: APIC: Switch to symmetric I/O mode setup May 8 00:39:38.912663 kernel: x2apic enabled May 8 00:39:38.912672 kernel: APIC: Switched APIC routing to: physical x2apic May 8 00:39:38.912679 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() May 8 00:39:38.912686 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() May 8 00:39:38.912692 kernel: kvm-guest: setup PV IPIs May 8 00:39:38.912699 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 8 00:39:38.912705 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 8 00:39:38.912712 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000002) May 8 00:39:38.912721 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 8 00:39:38.912728 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 8 00:39:38.912734 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 8 00:39:38.912741 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 8 00:39:38.912748 kernel: Spectre V2 : Mitigation: Retpolines May 8 00:39:38.912757 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch May 8 00:39:38.912763 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT May 8 00:39:38.912770 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls May 8 00:39:38.912777 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 8 00:39:38.912784 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 8 00:39:38.912790 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! 
May 8 00:39:38.912797 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. May 8 00:39:38.912804 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode May 8 00:39:38.912813 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 8 00:39:38.912820 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 8 00:39:38.912827 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 8 00:39:38.912833 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' May 8 00:39:38.912840 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 8 00:39:38.912847 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 May 8 00:39:38.912853 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. May 8 00:39:38.912860 kernel: Freeing SMP alternatives memory: 32K May 8 00:39:38.912867 kernel: pid_max: default: 32768 minimum: 301 May 8 00:39:38.912875 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 8 00:39:38.912882 kernel: landlock: Up and running. May 8 00:39:38.912889 kernel: SELinux: Initializing. May 8 00:39:38.912895 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 8 00:39:38.912902 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 8 00:39:38.912909 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) May 8 00:39:38.912916 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 8 00:39:38.912922 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 8 00:39:38.912929 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 8 00:39:38.912938 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 8 00:39:38.912945 kernel: ... version: 0 May 8 00:39:38.912951 kernel: ... bit width: 48 May 8 00:39:38.912958 kernel: ... generic registers: 6 May 8 00:39:38.912964 kernel: ... value mask: 0000ffffffffffff May 8 00:39:38.912971 kernel: ... max period: 00007fffffffffff May 8 00:39:38.912978 kernel: ... fixed-purpose events: 0 May 8 00:39:38.912984 kernel: ... event mask: 000000000000003f May 8 00:39:38.912991 kernel: signal: max sigframe size: 3376 May 8 00:39:38.913000 kernel: rcu: Hierarchical SRCU implementation. May 8 00:39:38.913007 kernel: rcu: Max phase no-delay instances is 400. May 8 00:39:38.913014 kernel: smp: Bringing up secondary CPUs ... May 8 00:39:38.913020 kernel: smpboot: x86: Booting SMP configuration: May 8 00:39:38.913057 kernel: .... 
node #0, CPUs: #1 May 8 00:39:38.913065 kernel: smp: Brought up 1 node, 2 CPUs May 8 00:39:38.913072 kernel: smpboot: Max logical packages: 1 May 8 00:39:38.913079 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS) May 8 00:39:38.913086 kernel: devtmpfs: initialized May 8 00:39:38.913095 kernel: x86/mm: Memory block size: 128MB May 8 00:39:38.913102 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 8 00:39:38.913109 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 8 00:39:38.913116 kernel: pinctrl core: initialized pinctrl subsystem May 8 00:39:38.913122 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 8 00:39:38.913129 kernel: audit: initializing netlink subsys (disabled) May 8 00:39:38.913136 kernel: audit: type=2000 audit(1746664778.416:1): state=initialized audit_enabled=0 res=1 May 8 00:39:38.913142 kernel: thermal_sys: Registered thermal governor 'step_wise' May 8 00:39:38.913149 kernel: thermal_sys: Registered thermal governor 'user_space' May 8 00:39:38.913158 kernel: cpuidle: using governor menu May 8 00:39:38.913165 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 8 00:39:38.913171 kernel: dca service started, version 1.12.1 May 8 00:39:38.913178 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 8 00:39:38.913185 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry May 8 00:39:38.913191 kernel: PCI: Using configuration type 1 for base access May 8 00:39:38.913198 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 8 00:39:38.913205 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 8 00:39:38.913212 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 8 00:39:38.913220 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 8 00:39:38.913227 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 8 00:39:38.913234 kernel: ACPI: Added _OSI(Module Device) May 8 00:39:38.913240 kernel: ACPI: Added _OSI(Processor Device) May 8 00:39:38.913247 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 8 00:39:38.913254 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 8 00:39:38.913260 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 8 00:39:38.913267 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 8 00:39:38.913273 kernel: ACPI: Interpreter enabled May 8 00:39:38.913282 kernel: ACPI: PM: (supports S0 S3 S5) May 8 00:39:38.913289 kernel: ACPI: Using IOAPIC for interrupt routing May 8 00:39:38.913296 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 8 00:39:38.913302 kernel: PCI: Using E820 reservations for host bridge windows May 8 00:39:38.913309 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 8 00:39:38.913315 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 8 00:39:38.913487 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 8 00:39:38.913612 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 8 00:39:38.913733 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 8 00:39:38.913743 kernel: PCI host bridge to bus 0000:00 May 8 00:39:38.913862 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 8 00:39:38.913969 kernel: pci_bus 0000:00: 
root bus resource [io 0x0d00-0xffff window] May 8 00:39:38.916901 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 8 00:39:38.917019 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] May 8 00:39:38.917163 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 8 00:39:38.917274 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window] May 8 00:39:38.917378 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 8 00:39:38.917515 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 8 00:39:38.917642 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 8 00:39:38.917757 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] May 8 00:39:38.917871 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] May 8 00:39:38.917994 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] May 8 00:39:38.918147 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 8 00:39:38.918281 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 May 8 00:39:38.918398 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f] May 8 00:39:38.918511 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] May 8 00:39:38.918624 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] May 8 00:39:38.918755 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 May 8 00:39:38.918877 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] May 8 00:39:38.918990 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] May 8 00:39:38.921290 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] May 8 00:39:38.921412 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] May 8 00:39:38.921534 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 8 00:39:38.921647 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 8 00:39:38.921767 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 8 00:39:38.921886 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df] May 8 00:39:38.921996 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff] May 8 00:39:38.922149 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 8 00:39:38.922264 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] May 8 00:39:38.922274 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 8 00:39:38.922281 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 8 00:39:38.922288 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 8 00:39:38.922298 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 8 00:39:38.922305 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 8 00:39:38.922312 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 8 00:39:38.922318 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 8 00:39:38.922325 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 8 00:39:38.922331 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 8 00:39:38.922338 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 8 00:39:38.922344 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 8 00:39:38.922351 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 8 00:39:38.922360 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 8 00:39:38.922367 kernel: ACPI: 
PCI: Interrupt link GSIF configured for IRQ 21 May 8 00:39:38.922373 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 8 00:39:38.922380 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 8 00:39:38.922387 kernel: iommu: Default domain type: Translated May 8 00:39:38.922393 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 8 00:39:38.922400 kernel: PCI: Using ACPI for IRQ routing May 8 00:39:38.922407 kernel: PCI: pci_cache_line_size set to 64 bytes May 8 00:39:38.922413 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] May 8 00:39:38.922422 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] May 8 00:39:38.922533 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 8 00:39:38.922645 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 8 00:39:38.922757 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 8 00:39:38.922766 kernel: vgaarb: loaded May 8 00:39:38.922773 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 8 00:39:38.922780 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 8 00:39:38.922786 kernel: clocksource: Switched to clocksource kvm-clock May 8 00:39:38.922796 kernel: VFS: Disk quotas dquot_6.6.0 May 8 00:39:38.922803 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 8 00:39:38.922810 kernel: pnp: PnP ACPI init May 8 00:39:38.922931 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved May 8 00:39:38.922941 kernel: pnp: PnP ACPI: found 5 devices May 8 00:39:38.922948 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 8 00:39:38.922955 kernel: NET: Registered PF_INET protocol family May 8 00:39:38.922962 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 8 00:39:38.922972 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 8 00:39:38.922978 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 8 00:39:38.922985 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 8 00:39:38.922992 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 8 00:39:38.922998 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 8 00:39:38.923005 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 8 00:39:38.923012 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 8 00:39:38.923018 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 8 00:39:38.923025 kernel: NET: Registered PF_XDP protocol family May 8 00:39:38.923177 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 8 00:39:38.923279 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 8 00:39:38.923380 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 8 00:39:38.923484 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] May 8 00:39:38.923586 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 8 00:39:38.923726 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] May 8 00:39:38.923743 kernel: PCI: CLS 0 bytes, default 64 May 8 00:39:38.923754 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 8 00:39:38.923764 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB) May 8 00:39:38.923771 kernel: Initialise system trusted 
keyrings May 8 00:39:38.923778 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 8 00:39:38.923785 kernel: Key type asymmetric registered May 8 00:39:38.923791 kernel: Asymmetric key parser 'x509' registered May 8 00:39:38.923798 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 8 00:39:38.923805 kernel: io scheduler mq-deadline registered May 8 00:39:38.923811 kernel: io scheduler kyber registered May 8 00:39:38.923818 kernel: io scheduler bfq registered May 8 00:39:38.923825 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 8 00:39:38.923834 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 8 00:39:38.923841 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 8 00:39:38.923848 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 8 00:39:38.923854 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 8 00:39:38.923861 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 8 00:39:38.923868 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 8 00:39:38.923874 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 8 00:39:38.923881 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 8 00:39:38.924009 kernel: rtc_cmos 00:03: RTC can wake from S4 May 8 00:39:38.924231 kernel: rtc_cmos 00:03: registered as rtc0 May 8 00:39:38.924344 kernel: rtc_cmos 00:03: setting system clock to 2025-05-08T00:39:38 UTC (1746664778) May 8 00:39:38.924450 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 8 00:39:38.924460 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 8 00:39:38.924466 kernel: NET: Registered PF_INET6 protocol family May 8 00:39:38.924473 kernel: Segment Routing with IPv6 May 8 00:39:38.924480 kernel: In-situ OAM (IOAM) with IPv6 May 8 00:39:38.924491 kernel: NET: Registered PF_PACKET protocol family May 8 00:39:38.924497 kernel: Key type dns_resolver registered May 8 00:39:38.924504 kernel: IPI shorthand broadcast: enabled May 8 00:39:38.924511 kernel: sched_clock: Marking stable (672004506, 201744598)->(917015333, -43266229) May 8 00:39:38.924518 kernel: registered taskstats version 1 May 8 00:39:38.924524 kernel: Loading compiled-in X.509 certificates May 8 00:39:38.924531 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: dac8423f6f9fa2fb5f636925d45d7c2572b3a9b6' May 8 00:39:38.924538 kernel: Key type .fscrypt registered May 8 00:39:38.924544 kernel: Key type fscrypt-provisioning registered May 8 00:39:38.924553 kernel: ima: No TPM chip found, activating TPM-bypass! May 8 00:39:38.924560 kernel: ima: Allocated hash algorithm: sha1 May 8 00:39:38.924566 kernel: ima: No architecture policies found May 8 00:39:38.924573 kernel: clk: Disabling unused clocks May 8 00:39:38.924580 kernel: Freeing unused kernel image (initmem) memory: 43484K May 8 00:39:38.924586 kernel: Write protecting the kernel read-only data: 38912k May 8 00:39:38.924593 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K May 8 00:39:38.924600 kernel: Run /init as init process May 8 00:39:38.924606 kernel: with arguments: May 8 00:39:38.924615 kernel: /init May 8 00:39:38.924621 kernel: with environment: May 8 00:39:38.924628 kernel: HOME=/ May 8 00:39:38.924634 kernel: TERM=linux May 8 00:39:38.924641 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 8 00:39:38.924648 systemd[1]: Successfully made /usr/ read-only. 
May 8 00:39:38.924658 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 8 00:39:38.924666 systemd[1]: Detected virtualization kvm. May 8 00:39:38.924675 systemd[1]: Detected architecture x86-64. May 8 00:39:38.924682 systemd[1]: Running in initrd. May 8 00:39:38.924689 systemd[1]: No hostname configured, using default hostname. May 8 00:39:38.924697 systemd[1]: Hostname set to . May 8 00:39:38.924704 systemd[1]: Initializing machine ID from random generator. May 8 00:39:38.924724 systemd[1]: Queued start job for default target initrd.target. May 8 00:39:38.924736 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:39:38.924744 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:39:38.924752 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 8 00:39:38.924760 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 00:39:38.924768 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 8 00:39:38.924776 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 8 00:39:38.924784 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 8 00:39:38.924794 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 8 00:39:38.924802 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:39:38.924809 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:39:38.924816 systemd[1]: Reached target paths.target - Path Units. May 8 00:39:38.924824 systemd[1]: Reached target slices.target - Slice Units. May 8 00:39:38.924831 systemd[1]: Reached target swap.target - Swaps. May 8 00:39:38.924839 systemd[1]: Reached target timers.target - Timer Units. May 8 00:39:38.924846 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:39:38.924856 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:39:38.924863 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 8 00:39:38.924870 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 8 00:39:38.924878 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:39:38.924885 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:39:38.924893 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:39:38.924900 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:39:38.924908 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 8 00:39:38.924915 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:39:38.924925 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 8 00:39:38.924932 systemd[1]: Starting systemd-fsck-usr.service... 
May 8 00:39:38.924940 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:39:38.924947 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:39:38.924974 systemd-journald[178]: Collecting audit messages is disabled. May 8 00:39:38.924995 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:39:38.925005 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 8 00:39:38.925013 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:39:38.925023 systemd[1]: Finished systemd-fsck-usr.service. May 8 00:39:38.925067 systemd-journald[178]: Journal started May 8 00:39:38.925087 systemd-journald[178]: Runtime Journal (/run/log/journal/85272c2c9db84e34883c9d07f4af1ee6) is 8M, max 78.3M, 70.3M free. May 8 00:39:38.928912 systemd-modules-load[180]: Inserted module 'overlay' May 8 00:39:38.976136 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 8 00:39:38.976159 kernel: Bridge firewalling registered May 8 00:39:38.976169 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:39:38.952989 systemd-modules-load[180]: Inserted module 'br_netfilter' May 8 00:39:38.976855 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:39:38.977782 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:39:38.984149 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:39:38.987147 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:39:38.988224 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 8 00:39:39.000137 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 00:39:39.001270 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:39:39.009762 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 00:39:39.012098 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:39:39.013479 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:39:39.022189 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 8 00:39:39.025472 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:39:39.028215 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:39:39.036351 dracut-cmdline[209]: dracut-dracut-053 May 8 00:39:39.039443 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979 May 8 00:39:39.049257 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:39:39.068770 systemd-resolved[212]: Positive Trust Anchors: May 8 00:39:39.068783 systemd-resolved[212]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:39:39.068809 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:39:39.074601 systemd-resolved[212]: Defaulting to hostname 'linux'. May 8 00:39:39.075657 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:39:39.076493 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:39:39.116053 kernel: SCSI subsystem initialized May 8 00:39:39.124049 kernel: Loading iSCSI transport class v2.0-870. May 8 00:39:39.135057 kernel: iscsi: registered transport (tcp) May 8 00:39:39.154786 kernel: iscsi: registered transport (qla4xxx) May 8 00:39:39.154823 kernel: QLogic iSCSI HBA Driver May 8 00:39:39.199544 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 8 00:39:39.206165 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 8 00:39:39.228062 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 8 00:39:39.228102 kernel: device-mapper: uevent: version 1.0.3 May 8 00:39:39.231373 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 8 00:39:39.272056 kernel: raid6: avx2x4 gen() 38317 MB/s May 8 00:39:39.290052 kernel: raid6: avx2x2 gen() 33130 MB/s May 8 00:39:39.308556 kernel: raid6: avx2x1 gen() 22151 MB/s May 8 00:39:39.308571 kernel: raid6: using algorithm avx2x4 gen() 38317 MB/s May 8 00:39:39.327351 kernel: raid6: .... xor() 4583 MB/s, rmw enabled May 8 00:39:39.327379 kernel: raid6: using avx2x2 recovery algorithm May 8 00:39:39.347084 kernel: xor: automatically using best checksumming function avx May 8 00:39:39.469062 kernel: Btrfs loaded, zoned=no, fsverity=no May 8 00:39:39.480540 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 8 00:39:39.486212 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:39:39.500436 systemd-udevd[398]: Using default interface naming scheme 'v255'. May 8 00:39:39.505250 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:39:39.513267 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 8 00:39:39.526775 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation May 8 00:39:39.555363 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:39:39.562163 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:39:39.616390 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:39:39.625181 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 8 00:39:39.638421 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 8 00:39:39.639498 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
May 8 00:39:39.640043 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:39:39.641944 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:39:39.649188 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 8 00:39:39.661140 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 8 00:39:39.751047 kernel: cryptd: max_cpu_qlen set to 1000 May 8 00:39:39.788050 kernel: scsi host0: Virtio SCSI HBA May 8 00:39:39.788093 kernel: AVX2 version of gcm_enc/dec engaged. May 8 00:39:39.790529 kernel: AES CTR mode by8 optimization enabled May 8 00:39:39.812111 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 May 8 00:39:39.817097 kernel: libata version 3.00 loaded. May 8 00:39:39.823246 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:39:39.823382 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:39:39.824161 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:39:39.824863 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:39:39.825624 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:39:39.827179 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:39:39.833953 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:39:39.854132 kernel: ahci 0000:00:1f.2: version 3.0 May 8 00:39:39.861500 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 8 00:39:39.861516 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 8 00:39:39.861671 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 8 00:39:39.861811 kernel: scsi host1: ahci May 8 00:39:39.861969 kernel: scsi host2: ahci May 8 00:39:39.862152 kernel: scsi host3: ahci May 8 00:39:39.862312 kernel: scsi host4: ahci May 8 00:39:39.862459 kernel: scsi host5: ahci May 8 00:39:39.862603 kernel: scsi host6: ahci May 8 00:39:39.862741 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 May 8 00:39:39.862752 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 May 8 00:39:39.862765 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 May 8 00:39:39.862774 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 May 8 00:39:39.862783 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 May 8 00:39:39.862793 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 May 8 00:39:39.925741 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:39:39.933181 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:39:39.952440 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 8 00:39:40.180841 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 8 00:39:40.180874 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 8 00:39:40.180886 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 8 00:39:40.180895 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 8 00:39:40.180905 kernel: ata3: SATA link down (SStatus 0 SControl 300) May 8 00:39:40.181050 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 8 00:39:40.195127 kernel: sd 0:0:0:0: Power-on or device reset occurred May 8 00:39:40.217047 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) May 8 00:39:40.217192 kernel: sd 0:0:0:0: [sda] Write Protect is off May 8 00:39:40.217313 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 May 8 00:39:40.217442 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 8 00:39:40.217601 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 8 00:39:40.217618 kernel: GPT:9289727 != 167739391 May 8 00:39:40.217627 kernel: GPT:Alternate GPT header not at the end of the disk. May 8 00:39:40.217637 kernel: GPT:9289727 != 167739391 May 8 00:39:40.217646 kernel: GPT: Use GNU Parted to correct GPT errors. May 8 00:39:40.217656 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:39:40.217665 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 8 00:39:40.253072 kernel: BTRFS: device fsid 1c9931ea-0995-4065-8a57-32743027822a devid 1 transid 42 /dev/sda3 scanned by (udev-worker) (467) May 8 00:39:40.260049 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (444) May 8 00:39:40.266647 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. May 8 00:39:40.281009 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. May 8 00:39:40.289855 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 8 00:39:40.296737 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. May 8 00:39:40.297356 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. May 8 00:39:40.308151 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 8 00:39:40.312728 disk-uuid[570]: Primary Header is updated. May 8 00:39:40.312728 disk-uuid[570]: Secondary Entries is updated. May 8 00:39:40.312728 disk-uuid[570]: Secondary Header is updated. May 8 00:39:40.317054 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:39:40.322051 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:39:41.326295 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:39:41.328369 disk-uuid[571]: The operation has completed successfully. May 8 00:39:41.385817 systemd[1]: disk-uuid.service: Deactivated successfully. May 8 00:39:41.385934 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 8 00:39:41.413272 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 8 00:39:41.416825 sh[585]: Success May 8 00:39:41.432210 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 8 00:39:41.489007 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 8 00:39:41.490424 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 8 00:39:41.492845 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 8 00:39:41.515460 kernel: BTRFS info (device dm-0): first mount of filesystem 1c9931ea-0995-4065-8a57-32743027822a May 8 00:39:41.515494 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 8 00:39:41.515506 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 8 00:39:41.517631 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 8 00:39:41.519408 kernel: BTRFS info (device dm-0): using free space tree May 8 00:39:41.529160 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 8 00:39:41.530775 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 8 00:39:41.531793 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 8 00:39:41.538144 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 8 00:39:41.542154 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 8 00:39:41.560343 kernel: BTRFS info (device sda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:39:41.560375 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:39:41.560386 kernel: BTRFS info (device sda6): using free space tree May 8 00:39:41.566512 kernel: BTRFS info (device sda6): enabling ssd optimizations May 8 00:39:41.566539 kernel: BTRFS info (device sda6): auto enabling async discard May 8 00:39:41.573221 kernel: BTRFS info (device sda6): last unmount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:39:41.574285 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 8 00:39:41.584167 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 8 00:39:41.646461 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:39:41.657202 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 00:39:41.666269 ignition[684]: Ignition 2.20.0 May 8 00:39:41.666284 ignition[684]: Stage: fetch-offline May 8 00:39:41.666315 ignition[684]: no configs at "/usr/lib/ignition/base.d" May 8 00:39:41.666325 ignition[684]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 8 00:39:41.666422 ignition[684]: parsed url from cmdline: "" May 8 00:39:41.666426 ignition[684]: no config URL provided May 8 00:39:41.666431 ignition[684]: reading system config file "/usr/lib/ignition/user.ign" May 8 00:39:41.670287 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:39:41.666440 ignition[684]: no config at "/usr/lib/ignition/user.ign" May 8 00:39:41.666445 ignition[684]: failed to fetch config: resource requires networking May 8 00:39:41.666585 ignition[684]: Ignition finished successfully May 8 00:39:41.686653 systemd-networkd[768]: lo: Link UP May 8 00:39:41.686664 systemd-networkd[768]: lo: Gained carrier May 8 00:39:41.688796 systemd-networkd[768]: Enumeration completed May 8 00:39:41.688884 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:39:41.689850 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:39:41.689854 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:39:41.691440 systemd[1]: Reached target network.target - Network. 
May 8 00:39:41.692042 systemd-networkd[768]: eth0: Link UP May 8 00:39:41.692046 systemd-networkd[768]: eth0: Gained carrier May 8 00:39:41.692053 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:39:41.698187 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 8 00:39:41.710166 ignition[773]: Ignition 2.20.0 May 8 00:39:41.710183 ignition[773]: Stage: fetch May 8 00:39:41.710314 ignition[773]: no configs at "/usr/lib/ignition/base.d" May 8 00:39:41.710325 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 8 00:39:41.710396 ignition[773]: parsed url from cmdline: "" May 8 00:39:41.710400 ignition[773]: no config URL provided May 8 00:39:41.710405 ignition[773]: reading system config file "/usr/lib/ignition/user.ign" May 8 00:39:41.710414 ignition[773]: no config at "/usr/lib/ignition/user.ign" May 8 00:39:41.710434 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #1 May 8 00:39:41.710596 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable May 8 00:39:41.911555 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #2 May 8 00:39:41.911774 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable May 8 00:39:42.172127 systemd-networkd[768]: eth0: DHCPv4 address 172.237.145.97/24, gateway 172.237.145.1 acquired from 23.213.14.74 May 8 00:39:42.311950 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #3 May 8 00:39:42.404313 ignition[773]: PUT result: OK May 8 00:39:42.404384 ignition[773]: GET http://169.254.169.254/v1/user-data: attempt #1 May 8 00:39:42.513573 ignition[773]: GET result: OK May 8 00:39:42.513692 ignition[773]: parsing config with SHA512: 8b107e50b5257a5589187cec2de9fe63157338fed5e66de18d980c51867ef8e18230fe2ad083b00d9eb88e494a99520c7d2419407c3f07da0a734d2b19ad6fdc May 8 00:39:42.517772 unknown[773]: fetched base config from "system" May 8 00:39:42.517784 unknown[773]: fetched base config from "system" May 8 00:39:42.518321 ignition[773]: fetch: fetch complete May 8 00:39:42.517791 unknown[773]: fetched user config from "akamai" May 8 00:39:42.518326 ignition[773]: fetch: fetch passed May 8 00:39:42.518381 ignition[773]: Ignition finished successfully May 8 00:39:42.521667 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 8 00:39:42.528182 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 8 00:39:42.541355 ignition[780]: Ignition 2.20.0 May 8 00:39:42.541366 ignition[780]: Stage: kargs May 8 00:39:42.541514 ignition[780]: no configs at "/usr/lib/ignition/base.d" May 8 00:39:42.541525 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 8 00:39:42.543639 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 8 00:39:42.542294 ignition[780]: kargs: kargs passed May 8 00:39:42.542334 ignition[780]: Ignition finished successfully May 8 00:39:42.551172 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 8 00:39:42.561180 ignition[787]: Ignition 2.20.0 May 8 00:39:42.561713 ignition[787]: Stage: disks May 8 00:39:42.561859 ignition[787]: no configs at "/usr/lib/ignition/base.d" May 8 00:39:42.561871 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 8 00:39:42.562728 ignition[787]: disks: disks passed May 8 00:39:42.564124 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 8 00:39:42.562777 ignition[787]: Ignition finished successfully May 8 00:39:42.569817 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 8 00:39:42.570852 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 8 00:39:42.571995 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:39:42.573222 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:39:42.574208 systemd[1]: Reached target basic.target - Basic System. May 8 00:39:42.582178 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 8 00:39:42.598921 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 8 00:39:42.601565 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 8 00:39:42.606257 systemd[1]: Mounting sysroot.mount - /sysroot... May 8 00:39:42.693071 kernel: EXT4-fs (sda9): mounted filesystem 369e2962-701e-4244-8c1c-27f8fa83bc64 r/w with ordered data mode. Quota mode: none. May 8 00:39:42.693350 systemd[1]: Mounted sysroot.mount - /sysroot. May 8 00:39:42.694515 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 8 00:39:42.702099 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:39:42.704548 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 8 00:39:42.706803 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 8 00:39:42.706850 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 8 00:39:42.706924 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:39:42.710644 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 8 00:39:42.713297 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 8 00:39:42.722052 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (803) May 8 00:39:42.722085 kernel: BTRFS info (device sda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:39:42.725538 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:39:42.725560 kernel: BTRFS info (device sda6): using free space tree May 8 00:39:42.730525 kernel: BTRFS info (device sda6): enabling ssd optimizations May 8 00:39:42.730549 kernel: BTRFS info (device sda6): auto enabling async discard May 8 00:39:42.732912 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 8 00:39:42.768908 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory May 8 00:39:42.774994 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory May 8 00:39:42.778102 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory May 8 00:39:42.782997 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory May 8 00:39:42.865792 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 8 00:39:42.878153 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 8 00:39:42.881999 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 8 00:39:42.887261 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 8 00:39:42.890075 kernel: BTRFS info (device sda6): last unmount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:39:42.908053 ignition[915]: INFO : Ignition 2.20.0 May 8 00:39:42.908053 ignition[915]: INFO : Stage: mount May 8 00:39:42.908053 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:39:42.908053 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 8 00:39:42.913208 ignition[915]: INFO : mount: mount passed May 8 00:39:42.913208 ignition[915]: INFO : Ignition finished successfully May 8 00:39:42.910335 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 8 00:39:42.917155 systemd[1]: Starting ignition-files.service - Ignition (files)... May 8 00:39:42.919406 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 8 00:39:43.239292 systemd-networkd[768]: eth0: Gained IPv6LL May 8 00:39:43.699490 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:39:43.714444 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (928) May 8 00:39:43.714483 kernel: BTRFS info (device sda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:39:43.717922 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:39:43.717938 kernel: BTRFS info (device sda6): using free space tree May 8 00:39:43.724600 kernel: BTRFS info (device sda6): enabling ssd optimizations May 8 00:39:43.724625 kernel: BTRFS info (device sda6): auto enabling async discard May 8 00:39:43.727097 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 8 00:39:43.743951 ignition[944]: INFO : Ignition 2.20.0 May 8 00:39:43.744687 ignition[944]: INFO : Stage: files May 8 00:39:43.744687 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:39:43.744687 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 8 00:39:43.746648 ignition[944]: DEBUG : files: compiled without relabeling support, skipping May 8 00:39:43.746648 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 8 00:39:43.746648 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 8 00:39:43.748953 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 8 00:39:43.748953 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 8 00:39:43.750624 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 8 00:39:43.749398 unknown[944]: wrote ssh authorized keys file for user: core May 8 00:39:43.752024 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 8 00:39:43.752024 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 8 00:39:43.778540 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 8 00:39:44.163506 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 8 00:39:44.163506 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 8 00:39:44.165762 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 8 00:39:44.165762 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 8 00:39:44.165762 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 8 00:39:44.165762 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:39:44.165762 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:39:44.165762 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:39:44.165762 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:39:44.165762 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:39:44.165762 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:39:44.165762 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 8 00:39:44.165762 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 8 00:39:44.165762 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 8 00:39:44.165762 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 May 8 00:39:44.398976 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 8 00:39:44.693469 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 8 00:39:44.693469 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 8 00:39:44.695630 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:39:44.695630 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:39:44.695630 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 8 00:39:44.695630 ignition[944]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 8 00:39:44.695630 ignition[944]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 8 00:39:44.695630 ignition[944]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 8 00:39:44.695630 ignition[944]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 8 00:39:44.695630 ignition[944]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" May 8 00:39:44.695630 ignition[944]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" May 8 00:39:44.695630 ignition[944]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" May 8 00:39:44.695630 ignition[944]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" May 8 00:39:44.695630 ignition[944]: INFO : files: files passed May 8 00:39:44.695630 ignition[944]: INFO : Ignition finished successfully May 8 00:39:44.698340 systemd[1]: Finished ignition-files.service - Ignition (files). May 8 00:39:44.711179 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 8 00:39:44.715209 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 8 00:39:44.717273 systemd[1]: ignition-quench.service: Deactivated successfully. May 8 00:39:44.718153 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
May 8 00:39:44.730192 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:39:44.730192 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 8 00:39:44.733238 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:39:44.734595 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:39:44.736272 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 8 00:39:44.742188 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 8 00:39:44.783623 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 8 00:39:44.784407 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 8 00:39:44.785184 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 8 00:39:44.786725 systemd[1]: Reached target initrd.target - Initrd Default Target. May 8 00:39:44.787853 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 8 00:39:44.793360 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 8 00:39:44.805174 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:39:44.811178 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 8 00:39:44.821179 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 8 00:39:44.821810 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:39:44.823005 systemd[1]: Stopped target timers.target - Timer Units. May 8 00:39:44.824164 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 00:39:44.824267 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:39:44.825486 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 8 00:39:44.826198 systemd[1]: Stopped target basic.target - Basic System. May 8 00:39:44.827317 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 8 00:39:44.828329 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:39:44.829342 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 8 00:39:44.830534 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 8 00:39:44.831694 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 8 00:39:44.832899 systemd[1]: Stopped target sysinit.target - System Initialization. May 8 00:39:44.834082 systemd[1]: Stopped target local-fs.target - Local File Systems. May 8 00:39:44.835222 systemd[1]: Stopped target swap.target - Swaps. May 8 00:39:44.836276 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 8 00:39:44.836380 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 8 00:39:44.837598 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 8 00:39:44.838322 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:39:44.839316 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 8 00:39:44.839679 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
May 8 00:39:44.840707 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 00:39:44.840820 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 8 00:39:44.842307 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 00:39:44.842429 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:39:44.843219 systemd[1]: ignition-files.service: Deactivated successfully. May 8 00:39:44.843361 systemd[1]: Stopped ignition-files.service - Ignition (files). May 8 00:39:44.853494 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 8 00:39:44.856413 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 8 00:39:44.856955 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 8 00:39:44.857126 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:39:44.859404 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 00:39:44.859546 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:39:44.869239 ignition[998]: INFO : Ignition 2.20.0 May 8 00:39:44.869239 ignition[998]: INFO : Stage: umount May 8 00:39:44.871310 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:39:44.871310 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 8 00:39:44.871310 ignition[998]: INFO : umount: umount passed May 8 00:39:44.871310 ignition[998]: INFO : Ignition finished successfully May 8 00:39:44.873712 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 8 00:39:44.873859 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 8 00:39:44.875587 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 00:39:44.875686 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 8 00:39:44.879259 systemd[1]: ignition-disks.service: Deactivated successfully. May 8 00:39:44.879313 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 8 00:39:44.880686 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 00:39:44.880736 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 8 00:39:44.882947 systemd[1]: ignition-fetch.service: Deactivated successfully. May 8 00:39:44.883203 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 8 00:39:44.884431 systemd[1]: Stopped target network.target - Network. May 8 00:39:44.886153 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 00:39:44.886250 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:39:44.888795 systemd[1]: Stopped target paths.target - Path Units. May 8 00:39:44.889289 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 00:39:44.889360 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:39:44.890692 systemd[1]: Stopped target slices.target - Slice Units. May 8 00:39:44.891193 systemd[1]: Stopped target sockets.target - Socket Units. May 8 00:39:44.896379 systemd[1]: iscsid.socket: Deactivated successfully. May 8 00:39:44.896424 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:39:44.897370 systemd[1]: iscsiuio.socket: Deactivated successfully. May 8 00:39:44.897411 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
May 8 00:39:44.898364 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 00:39:44.898413 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 8 00:39:44.899100 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 8 00:39:44.899148 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 8 00:39:44.899798 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 8 00:39:44.900903 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 8 00:39:44.907116 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 00:39:44.907684 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 00:39:44.907794 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 8 00:39:44.913559 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 8 00:39:44.916796 systemd[1]: systemd-networkd.service: Deactivated successfully. May 8 00:39:44.917924 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 8 00:39:44.920549 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 8 00:39:44.923297 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 00:39:44.923349 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 8 00:39:44.932745 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 8 00:39:44.934582 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 8 00:39:44.934686 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:39:44.937929 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:39:44.937994 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 00:39:44.939998 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 8 00:39:44.940135 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 8 00:39:44.940756 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 8 00:39:44.940805 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:39:44.942636 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:39:44.945448 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 8 00:39:44.945518 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 8 00:39:44.945938 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 00:39:44.948451 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 8 00:39:44.956022 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 00:39:44.956155 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 8 00:39:44.959785 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 00:39:44.959910 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 8 00:39:44.969792 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 00:39:44.969967 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:39:44.971219 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 8 00:39:44.971268 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
May 8 00:39:44.972280 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 8 00:39:44.972320 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:39:44.973398 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 8 00:39:44.973448 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 8 00:39:44.975044 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 8 00:39:44.975096 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 8 00:39:44.976164 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:39:44.976212 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:39:44.984151 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 8 00:39:44.985356 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 8 00:39:44.985413 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:39:44.988255 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 8 00:39:44.988307 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 00:39:44.988938 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 8 00:39:44.988986 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:39:44.989551 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:39:44.989600 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:39:44.991890 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 8 00:39:44.991951 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 8 00:39:44.992309 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 8 00:39:44.992406 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 8 00:39:44.993956 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 8 00:39:45.002213 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 8 00:39:45.009341 systemd[1]: Switching root. May 8 00:39:45.041650 systemd-journald[178]: Journal stopped May 8 00:39:46.053871 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). May 8 00:39:46.053894 kernel: SELinux: policy capability network_peer_controls=1 May 8 00:39:46.053906 kernel: SELinux: policy capability open_perms=1 May 8 00:39:46.053915 kernel: SELinux: policy capability extended_socket_class=1 May 8 00:39:46.053924 kernel: SELinux: policy capability always_check_network=0 May 8 00:39:46.053935 kernel: SELinux: policy capability cgroup_seclabel=1 May 8 00:39:46.053945 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 8 00:39:46.053954 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 8 00:39:46.053962 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 8 00:39:46.053971 kernel: audit: type=1403 audit(1746664785.147:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 8 00:39:46.053981 systemd[1]: Successfully loaded SELinux policy in 44.630ms. May 8 00:39:46.053993 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.075ms. 
May 8 00:39:46.054004 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 8 00:39:46.054015 systemd[1]: Detected virtualization kvm. May 8 00:39:46.054047 systemd[1]: Detected architecture x86-64. May 8 00:39:46.054067 systemd[1]: Detected first boot. May 8 00:39:46.054086 systemd[1]: Initializing machine ID from random generator. May 8 00:39:46.054096 kernel: Guest personality initialized and is inactive May 8 00:39:46.054106 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 8 00:39:46.054115 kernel: Initialized host personality May 8 00:39:46.054124 zram_generator::config[1044]: No configuration found. May 8 00:39:46.054135 kernel: NET: Registered PF_VSOCK protocol family May 8 00:39:46.054144 systemd[1]: Populated /etc with preset unit settings. May 8 00:39:46.054157 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 8 00:39:46.054167 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 8 00:39:46.054176 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 8 00:39:46.054186 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 8 00:39:46.054196 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 8 00:39:46.054205 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 8 00:39:46.054217 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 8 00:39:46.054229 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 8 00:39:46.054239 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 8 00:39:46.054249 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 8 00:39:46.054259 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 8 00:39:46.054268 systemd[1]: Created slice user.slice - User and Session Slice. May 8 00:39:46.054278 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:39:46.054288 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:39:46.054297 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 8 00:39:46.054307 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 8 00:39:46.054319 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 8 00:39:46.054333 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 00:39:46.054343 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 8 00:39:46.054353 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:39:46.054363 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 8 00:39:46.054373 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 8 00:39:46.054383 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. 
May 8 00:39:46.054395 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 8 00:39:46.054405 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:39:46.054415 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:39:46.054425 systemd[1]: Reached target slices.target - Slice Units. May 8 00:39:46.054435 systemd[1]: Reached target swap.target - Swaps. May 8 00:39:46.054445 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 8 00:39:46.054455 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 8 00:39:46.054465 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 8 00:39:46.054475 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:39:46.054487 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:39:46.054497 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:39:46.054507 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 8 00:39:46.054517 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 8 00:39:46.054529 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 8 00:39:46.054539 systemd[1]: Mounting media.mount - External Media Directory... May 8 00:39:46.054549 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:39:46.054559 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 8 00:39:46.054569 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 8 00:39:46.054578 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 8 00:39:46.054589 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 8 00:39:46.054599 systemd[1]: Reached target machines.target - Containers. May 8 00:39:46.054611 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 8 00:39:46.054621 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:39:46.054631 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:39:46.054641 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 8 00:39:46.054652 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:39:46.054661 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:39:46.054671 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:39:46.054681 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 8 00:39:46.054691 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:39:46.054703 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 8 00:39:46.054713 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 8 00:39:46.054723 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 8 00:39:46.054733 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
May 8 00:39:46.054743 systemd[1]: Stopped systemd-fsck-usr.service. May 8 00:39:46.054753 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 8 00:39:46.054763 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:39:46.054773 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:39:46.054785 kernel: loop: module loaded May 8 00:39:46.054795 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 8 00:39:46.054805 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 8 00:39:46.054815 kernel: ACPI: bus type drm_connector registered May 8 00:39:46.054825 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 8 00:39:46.054835 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:39:46.054845 systemd[1]: verity-setup.service: Deactivated successfully. May 8 00:39:46.054854 systemd[1]: Stopped verity-setup.service. May 8 00:39:46.054868 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:39:46.054877 kernel: fuse: init (API version 7.39) May 8 00:39:46.054887 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 8 00:39:46.054897 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 8 00:39:46.054907 systemd[1]: Mounted media.mount - External Media Directory. May 8 00:39:46.054935 systemd-journald[1132]: Collecting audit messages is disabled. May 8 00:39:46.054961 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 8 00:39:46.054972 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 8 00:39:46.054982 systemd-journald[1132]: Journal started May 8 00:39:46.055002 systemd-journald[1132]: Runtime Journal (/run/log/journal/4d25ada2096b4fe3b406874cb9ff9a2e) is 8M, max 78.3M, 70.3M free. May 8 00:39:46.057062 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 8 00:39:45.743331 systemd[1]: Queued start job for default target multi-user.target. May 8 00:39:45.751700 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 8 00:39:45.752196 systemd[1]: systemd-journald.service: Deactivated successfully. May 8 00:39:46.058368 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:39:46.060772 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 8 00:39:46.061735 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:39:46.062659 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 8 00:39:46.062908 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 8 00:39:46.063937 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:39:46.064223 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:39:46.065197 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:39:46.065445 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:39:46.066352 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
May 8 00:39:46.066597 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:39:46.067609 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 8 00:39:46.067797 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 8 00:39:46.068737 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:39:46.068985 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:39:46.070021 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:39:46.070936 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 8 00:39:46.071978 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 8 00:39:46.072918 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 8 00:39:46.087817 systemd[1]: Reached target network-pre.target - Preparation for Network. May 8 00:39:46.095113 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 8 00:39:46.099080 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 8 00:39:46.100645 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 8 00:39:46.100674 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:39:46.102448 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 8 00:39:46.113535 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 8 00:39:46.116621 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 8 00:39:46.117357 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:39:46.121185 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 8 00:39:46.124175 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 8 00:39:46.126304 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:39:46.127522 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 8 00:39:46.128144 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:39:46.131342 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:39:46.133213 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 8 00:39:46.139376 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 8 00:39:46.141945 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 8 00:39:46.152198 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 8 00:39:46.153828 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 8 00:39:46.163483 systemd-journald[1132]: Time spent on flushing to /var/log/journal/4d25ada2096b4fe3b406874cb9ff9a2e is 44.680ms for 990 entries. May 8 00:39:46.163483 systemd-journald[1132]: System Journal (/var/log/journal/4d25ada2096b4fe3b406874cb9ff9a2e) is 8M, max 195.6M, 187.6M free. 
May 8 00:39:46.228543 systemd-journald[1132]: Received client request to flush runtime journal. May 8 00:39:46.228954 kernel: loop0: detected capacity change from 0 to 147912 May 8 00:39:46.171683 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 8 00:39:46.173776 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:39:46.175827 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 8 00:39:46.186195 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 8 00:39:46.197001 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 8 00:39:46.226051 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 8 00:39:46.239051 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 8 00:39:46.241379 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 8 00:39:46.248354 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 8 00:39:46.250900 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. May 8 00:39:46.250919 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. May 8 00:39:46.253826 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:39:46.260402 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 00:39:46.267513 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 8 00:39:46.272049 kernel: loop1: detected capacity change from 0 to 210664 May 8 00:39:46.310260 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 8 00:39:46.315470 kernel: loop2: detected capacity change from 0 to 138176 May 8 00:39:46.318970 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:39:46.334983 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. May 8 00:39:46.335069 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. May 8 00:39:46.345025 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:39:46.372784 kernel: loop3: detected capacity change from 0 to 8 May 8 00:39:46.394512 kernel: loop4: detected capacity change from 0 to 147912 May 8 00:39:46.420121 kernel: loop5: detected capacity change from 0 to 210664 May 8 00:39:46.442071 kernel: loop6: detected capacity change from 0 to 138176 May 8 00:39:46.471247 kernel: loop7: detected capacity change from 0 to 8 May 8 00:39:46.472904 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. May 8 00:39:46.473752 (sd-merge)[1198]: Merged extensions into '/usr'. May 8 00:39:46.481470 systemd[1]: Reload requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)... May 8 00:39:46.481563 systemd[1]: Reloading... May 8 00:39:46.581115 zram_generator::config[1228]: No configuration found. May 8 00:39:46.688092 ldconfig[1165]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 8 00:39:46.726002 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
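Editor's note: the (sd-merge) entries above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-akamai' extension images onto /usr. This is why the earlier Ignition link "/etc/extensions/kubernetes.raw" matters: sysext picks up *.raw images (or symlinks to them) from its extension directories. A rough sketch of that discovery step follows; the exact directory list is an assumption based on the sysext documentation, and only /etc/extensions is directly evidenced by this log.

```python
# Sketch: how a *.raw image or symlink under /etc/extensions becomes a
# systemd-sysext merge candidate. Directory list below is assumed; only
# /etc/extensions is shown in this log.
from pathlib import Path

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]  # assumed set

def candidate_sysexts():
    for d in SEARCH_DIRS:
        p = Path(d)
        if p.is_dir():
            for img in sorted(p.glob("*.raw")):
                # A symlink such as kubernetes.raw -> /opt/extensions/... counts too;
                # resolve() follows it to the image Ignition actually downloaded.
                yield img.name, img.resolve()

if __name__ == "__main__":
    for name, target in candidate_sysexts():
        print(f"{name} -> {target}")
```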
May 8 00:39:46.784580 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 8 00:39:46.785057 systemd[1]: Reloading finished in 302 ms. May 8 00:39:46.813373 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 8 00:39:46.814782 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 8 00:39:46.815819 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 8 00:39:46.831369 systemd[1]: Starting ensure-sysext.service... May 8 00:39:46.835162 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 00:39:46.838621 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:39:46.853304 systemd[1]: Reload requested from client PID 1270 ('systemctl') (unit ensure-sysext.service)... May 8 00:39:46.853320 systemd[1]: Reloading... May 8 00:39:46.856918 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 8 00:39:46.857182 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 8 00:39:46.857985 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 8 00:39:46.858319 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. May 8 00:39:46.858443 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. May 8 00:39:46.862609 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:39:46.862678 systemd-tmpfiles[1271]: Skipping /boot May 8 00:39:46.875619 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:39:46.875678 systemd-tmpfiles[1271]: Skipping /boot May 8 00:39:46.906007 systemd-udevd[1272]: Using default interface naming scheme 'v255'. May 8 00:39:46.941052 zram_generator::config[1303]: No configuration found. May 8 00:39:47.068449 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:39:47.098086 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 8 00:39:47.126093 kernel: ACPI: button: Power Button [PWRF] May 8 00:39:47.146127 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 8 00:39:47.146518 systemd[1]: Reloading finished in 292 ms. May 8 00:39:47.155344 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:39:47.156719 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:39:47.172077 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 8 00:39:47.172336 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 8 00:39:47.172518 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 8 00:39:47.183156 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 8 00:39:47.197099 kernel: EDAC MC: Ver: 3.0.0 May 8 00:39:47.198153 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 8 00:39:47.210300 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 8 00:39:47.214515 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
May 8 00:39:47.225153 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 00:39:47.235294 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:39:47.244834 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 8 00:39:47.255078 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (1327) May 8 00:39:47.274813 kernel: mousedev: PS/2 mouse device common for all mice May 8 00:39:47.308461 systemd[1]: Finished ensure-sysext.service. May 8 00:39:47.309678 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:39:47.309947 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:39:47.315452 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:39:47.318439 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:39:47.322228 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:39:47.326181 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:39:47.326934 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:39:47.327023 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 8 00:39:47.334235 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 8 00:39:47.342227 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 8 00:39:47.346222 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:39:47.346772 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:39:47.348377 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 8 00:39:47.350632 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 8 00:39:47.352493 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:39:47.352693 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:39:47.360189 augenrules[1410]: No rules May 8 00:39:47.370701 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 8 00:39:47.371658 systemd[1]: audit-rules.service: Deactivated successfully. May 8 00:39:47.372001 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 8 00:39:47.378799 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 8 00:39:47.381722 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:39:47.381943 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:39:47.383126 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:39:47.383346 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:39:47.384305 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:39:47.384514 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
May 8 00:39:47.389654 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 8 00:39:47.420220 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 8 00:39:47.425152 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 8 00:39:47.425852 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:39:47.425914 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:39:47.428220 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 8 00:39:47.428739 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:39:47.428951 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 8 00:39:47.446138 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:39:47.469361 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 8 00:39:47.521046 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 8 00:39:47.521983 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:39:47.524337 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:39:47.534240 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 8 00:39:47.539313 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 8 00:39:47.544084 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:39:47.586542 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 8 00:39:47.587105 systemd-networkd[1384]: lo: Link UP May 8 00:39:47.587109 systemd-networkd[1384]: lo: Gained carrier May 8 00:39:47.587654 systemd-resolved[1385]: Positive Trust Anchors: May 8 00:39:47.587662 systemd-resolved[1385]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:39:47.587689 systemd-resolved[1385]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:39:47.590388 systemd-networkd[1384]: Enumeration completed May 8 00:39:47.590623 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:39:47.591624 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:39:47.591639 systemd-networkd[1384]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:39:47.593896 systemd-resolved[1385]: Defaulting to hostname 'linux'. 
May 8 00:39:47.594364 systemd-networkd[1384]: eth0: Link UP May 8 00:39:47.594378 systemd-networkd[1384]: eth0: Gained carrier May 8 00:39:47.594391 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:39:47.597199 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 8 00:39:47.605220 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 8 00:39:47.605982 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:39:47.606621 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 8 00:39:47.607401 systemd[1]: Reached target network.target - Network. May 8 00:39:47.607896 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:39:47.610096 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:39:47.610722 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 8 00:39:47.611340 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 8 00:39:47.611908 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 8 00:39:47.612473 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 8 00:39:47.612504 systemd[1]: Reached target paths.target - Path Units. May 8 00:39:47.612981 systemd[1]: Reached target time-set.target - System Time Set. May 8 00:39:47.613874 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 8 00:39:47.614717 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 8 00:39:47.615292 systemd[1]: Reached target timers.target - Timer Units. May 8 00:39:47.617093 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 8 00:39:47.619195 systemd[1]: Starting docker.socket - Docker Socket for the API... May 8 00:39:47.622506 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 8 00:39:47.623243 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 8 00:39:47.623806 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 8 00:39:47.626681 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 8 00:39:47.627582 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 8 00:39:47.628996 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 8 00:39:47.629805 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 8 00:39:47.631091 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:39:47.631627 systemd[1]: Reached target basic.target - Basic System. May 8 00:39:47.632212 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 8 00:39:47.632252 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 8 00:39:47.637123 systemd[1]: Starting containerd.service - containerd container runtime... 
May 8 00:39:47.639190 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 8 00:39:47.642189 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 8 00:39:47.644267 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 8 00:39:47.648266 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 8 00:39:47.649507 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 8 00:39:47.652209 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 8 00:39:47.658598 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 8 00:39:47.666734 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 8 00:39:47.672508 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 8 00:39:47.676095 jq[1454]: false May 8 00:39:47.677208 systemd[1]: Starting systemd-logind.service - User Login Management... May 8 00:39:47.680952 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 8 00:39:47.681525 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 8 00:39:47.684212 systemd[1]: Starting update-engine.service - Update Engine... May 8 00:39:47.692145 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 8 00:39:47.698590 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 8 00:39:47.699143 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 8 00:39:47.723654 jq[1463]: true May 8 00:39:47.733410 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 00:39:47.733707 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 8 00:39:47.763439 tar[1467]: linux-amd64/helm May 8 00:39:47.763656 jq[1476]: true May 8 00:39:47.771374 (ntainerd)[1484]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 8 00:39:47.778477 extend-filesystems[1455]: Found loop4 May 8 00:39:47.779618 extend-filesystems[1455]: Found loop5 May 8 00:39:47.779618 extend-filesystems[1455]: Found loop6 May 8 00:39:47.779618 extend-filesystems[1455]: Found loop7 May 8 00:39:47.779618 extend-filesystems[1455]: Found sda May 8 00:39:47.779618 extend-filesystems[1455]: Found sda1 May 8 00:39:47.779618 extend-filesystems[1455]: Found sda2 May 8 00:39:47.779618 extend-filesystems[1455]: Found sda3 May 8 00:39:47.779618 extend-filesystems[1455]: Found usr May 8 00:39:47.779618 extend-filesystems[1455]: Found sda4 May 8 00:39:47.779618 extend-filesystems[1455]: Found sda6 May 8 00:39:47.779618 extend-filesystems[1455]: Found sda7 May 8 00:39:47.779618 extend-filesystems[1455]: Found sda9 May 8 00:39:47.779618 extend-filesystems[1455]: Checking size of /dev/sda9 May 8 00:39:47.837617 extend-filesystems[1455]: Resized partition /dev/sda9 May 8 00:39:47.799541 systemd[1]: motdgen.service: Deactivated successfully. 
May 8 00:39:47.840622 update_engine[1462]: I20250508 00:39:47.805639 1462 main.cc:92] Flatcar Update Engine starting May 8 00:39:47.812470 dbus-daemon[1453]: [system] SELinux support is enabled May 8 00:39:47.841420 coreos-metadata[1452]: May 08 00:39:47.838 INFO Putting http://169.254.169.254/v1/token: Attempt #1 May 8 00:39:47.800166 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 8 00:39:47.812923 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 8 00:39:47.831711 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 00:39:47.831734 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 8 00:39:47.834008 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 00:39:47.834025 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 8 00:39:47.846741 systemd-logind[1461]: Watching system buttons on /dev/input/event1 (Power Button) May 8 00:39:47.848090 extend-filesystems[1499]: resize2fs 1.47.1 (20-May-2024) May 8 00:39:47.860612 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks May 8 00:39:47.860644 update_engine[1462]: I20250508 00:39:47.847446 1462 update_check_scheduler.cc:74] Next update check in 2m10s May 8 00:39:47.846773 systemd-logind[1461]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 8 00:39:47.847270 systemd[1]: Started update-engine.service - Update Engine. May 8 00:39:47.859041 systemd-logind[1461]: New seat seat0. May 8 00:39:47.868960 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 8 00:39:47.870503 systemd[1]: Started systemd-logind.service - User Login Management. May 8 00:39:47.902516 sshd_keygen[1482]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 00:39:47.946584 bash[1510]: Updated "/home/core/.ssh/authorized_keys" May 8 00:39:47.957732 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 8 00:39:47.972563 systemd[1]: Starting sshkeys.service... May 8 00:39:47.974080 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (1327) May 8 00:39:48.053848 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 8 00:39:48.062837 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 8 00:39:48.069524 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 8 00:39:48.078929 systemd[1]: Starting issuegen.service - Generate /run/issue... May 8 00:39:48.088735 systemd-networkd[1384]: eth0: DHCPv4 address 172.237.145.97/24, gateway 172.237.145.1 acquired from 23.213.14.74 May 8 00:39:48.089363 dbus-daemon[1453]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1384 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 8 00:39:48.093857 systemd-timesyncd[1404]: Network configuration changed, trying to establish connection. May 8 00:39:48.099264 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
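Editor's note: the coreos-metadata entry above ("Putting http://169.254.169.254/v1/token: Attempt #1") records the agent requesting a token from the link-local metadata service before fetching instance data. The URL and the PUT verb come from the log line; everything else in the sketch below (header names, the follow-up instance endpoint) is an assumption based on the Linode Metadata API, not on this log.

```python
# Sketch of a metadata-token exchange like the one the log shows
# coreos-metadata attempting. Header names and the /v1/instance endpoint
# are assumptions; only the token URL and the PUT method appear in the log.
import urllib.request

METADATA_BASE = "http://169.254.169.254"

def get_metadata_token(ttl_seconds: int = 3600) -> str:
    req = urllib.request.Request(
        f"{METADATA_BASE}/v1/token",
        method="PUT",
        headers={"Metadata-Token-Expiry-Seconds": str(ttl_seconds)},  # assumed header
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

def get_instance_json(token: str) -> str:
    req = urllib.request.Request(
        f"{METADATA_BASE}/v1/instance",  # assumed endpoint
        headers={"Metadata-Token": token, "Accept": "application/json"},  # assumed headers
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    token = get_metadata_token()
    print(get_instance_json(token))
```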
May 8 00:39:48.103700 locksmithd[1503]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 00:39:48.120952 systemd[1]: issuegen.service: Deactivated successfully. May 8 00:39:48.121692 systemd[1]: Finished issuegen.service - Generate /run/issue. May 8 00:39:48.132699 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 8 00:39:48.824297 systemd-timesyncd[1404]: Contacted time server 149.248.12.167:123 (0.flatcar.pool.ntp.org). May 8 00:39:48.824357 systemd-timesyncd[1404]: Initial clock synchronization to Thu 2025-05-08 00:39:48.820870 UTC. May 8 00:39:48.824774 systemd-resolved[1385]: Clock change detected. Flushing caches. May 8 00:39:48.833631 containerd[1484]: time="2025-05-08T00:39:48.833483201Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 8 00:39:48.840828 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 8 00:39:48.852525 systemd[1]: Started getty@tty1.service - Getty on tty1. May 8 00:39:48.861495 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 8 00:39:48.863180 coreos-metadata[1531]: May 08 00:39:48.861 INFO Putting http://169.254.169.254/v1/token: Attempt #1 May 8 00:39:48.862298 systemd[1]: Reached target getty.target - Login Prompts. May 8 00:39:48.870664 containerd[1484]: time="2025-05-08T00:39:48.870620663Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 8 00:39:48.874412 containerd[1484]: time="2025-05-08T00:39:48.872490241Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 00:39:48.874412 containerd[1484]: time="2025-05-08T00:39:48.872515601Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 8 00:39:48.874412 containerd[1484]: time="2025-05-08T00:39:48.872529951Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 00:39:48.874412 containerd[1484]: time="2025-05-08T00:39:48.872790491Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 8 00:39:48.874412 containerd[1484]: time="2025-05-08T00:39:48.872806111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 8 00:39:48.874412 containerd[1484]: time="2025-05-08T00:39:48.872866551Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:39:48.874412 containerd[1484]: time="2025-05-08T00:39:48.872877891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 00:39:48.874412 containerd[1484]: time="2025-05-08T00:39:48.873085941Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:39:48.874412 containerd[1484]: time="2025-05-08T00:39:48.873098861Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 May 8 00:39:48.874412 containerd[1484]: time="2025-05-08T00:39:48.873110441Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:39:48.874412 containerd[1484]: time="2025-05-08T00:39:48.873118111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 8 00:39:48.874620 containerd[1484]: time="2025-05-08T00:39:48.873225581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 00:39:48.874620 containerd[1484]: time="2025-05-08T00:39:48.873635390Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 8 00:39:48.874620 containerd[1484]: time="2025-05-08T00:39:48.873782320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:39:48.874620 containerd[1484]: time="2025-05-08T00:39:48.873793980Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 8 00:39:48.874620 containerd[1484]: time="2025-05-08T00:39:48.873894020Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 8 00:39:48.874620 containerd[1484]: time="2025-05-08T00:39:48.873944800Z" level=info msg="metadata content store policy set" policy=shared May 8 00:39:48.887329 kernel: EXT4-fs (sda9): resized filesystem to 20360187 May 8 00:39:48.896235 containerd[1484]: time="2025-05-08T00:39:48.895951768Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 00:39:48.896235 containerd[1484]: time="2025-05-08T00:39:48.895999248Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 8 00:39:48.896235 containerd[1484]: time="2025-05-08T00:39:48.896014718Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 8 00:39:48.896235 containerd[1484]: time="2025-05-08T00:39:48.896027528Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 8 00:39:48.896235 containerd[1484]: time="2025-05-08T00:39:48.896044708Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 8 00:39:48.896235 containerd[1484]: time="2025-05-08T00:39:48.896165538Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 8 00:39:48.896400 containerd[1484]: time="2025-05-08T00:39:48.896371338Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 8 00:39:48.896515 containerd[1484]: time="2025-05-08T00:39:48.896486787Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 8 00:39:48.896569 containerd[1484]: time="2025-05-08T00:39:48.896520367Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 8 00:39:48.896569 containerd[1484]: time="2025-05-08T00:39:48.896533877Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 May 8 00:39:48.896569 containerd[1484]: time="2025-05-08T00:39:48.896545257Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 8 00:39:48.896569 containerd[1484]: time="2025-05-08T00:39:48.896556247Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 8 00:39:48.896569 containerd[1484]: time="2025-05-08T00:39:48.896566067Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 00:39:48.896803 containerd[1484]: time="2025-05-08T00:39:48.896576737Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 8 00:39:48.896803 containerd[1484]: time="2025-05-08T00:39:48.896588437Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 8 00:39:48.896803 containerd[1484]: time="2025-05-08T00:39:48.896598627Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 00:39:48.896803 containerd[1484]: time="2025-05-08T00:39:48.896609067Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 8 00:39:48.896803 containerd[1484]: time="2025-05-08T00:39:48.896617667Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 8 00:39:48.896803 containerd[1484]: time="2025-05-08T00:39:48.896633897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 8 00:39:48.896803 containerd[1484]: time="2025-05-08T00:39:48.896645677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 00:39:48.896803 containerd[1484]: time="2025-05-08T00:39:48.896660677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 00:39:48.896803 containerd[1484]: time="2025-05-08T00:39:48.896670897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 00:39:48.896803 containerd[1484]: time="2025-05-08T00:39:48.896685847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 00:39:48.896803 containerd[1484]: time="2025-05-08T00:39:48.896696537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 00:39:48.896803 containerd[1484]: time="2025-05-08T00:39:48.896705347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 8 00:39:48.896803 containerd[1484]: time="2025-05-08T00:39:48.896715107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 00:39:48.896803 containerd[1484]: time="2025-05-08T00:39:48.896727197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 8 00:39:48.897512 containerd[1484]: time="2025-05-08T00:39:48.896739627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 8 00:39:48.897512 containerd[1484]: time="2025-05-08T00:39:48.896748817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 May 8 00:39:48.897512 containerd[1484]: time="2025-05-08T00:39:48.896758187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 8 00:39:48.897512 containerd[1484]: time="2025-05-08T00:39:48.896767817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 00:39:48.897512 containerd[1484]: time="2025-05-08T00:39:48.896778597Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 8 00:39:48.897512 containerd[1484]: time="2025-05-08T00:39:48.896794817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 8 00:39:48.897512 containerd[1484]: time="2025-05-08T00:39:48.896809107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 8 00:39:48.897512 containerd[1484]: time="2025-05-08T00:39:48.896818597Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 00:39:48.897512 containerd[1484]: time="2025-05-08T00:39:48.896851877Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 00:39:48.897512 containerd[1484]: time="2025-05-08T00:39:48.896864157Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 8 00:39:48.897512 containerd[1484]: time="2025-05-08T00:39:48.896872707Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 8 00:39:48.897512 containerd[1484]: time="2025-05-08T00:39:48.896882697Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 8 00:39:48.897512 containerd[1484]: time="2025-05-08T00:39:48.896890627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 00:39:48.897715 extend-filesystems[1499]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 8 00:39:48.897715 extend-filesystems[1499]: old_desc_blocks = 1, new_desc_blocks = 10 May 8 00:39:48.897715 extend-filesystems[1499]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. May 8 00:39:48.905586 containerd[1484]: time="2025-05-08T00:39:48.896900087Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 8 00:39:48.905586 containerd[1484]: time="2025-05-08T00:39:48.896908467Z" level=info msg="NRI interface is disabled by configuration." May 8 00:39:48.905586 containerd[1484]: time="2025-05-08T00:39:48.896916617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 8 00:39:48.900564 systemd[1]: extend-filesystems.service: Deactivated successfully. 
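
The messages above describe an on-line ext4 grow: resize2fs extends /dev/sda9 from 553472 to 20360187 4 KiB blocks (roughly 78 GiB) while it is mounted on /. A minimal sketch of driving resize2fs from a script follows; the resize2fs invocation is standard usage, but the wrapper itself is illustrative and is not Flatcar's extend-filesystems unit:

    # Sketch: grow a mounted ext4 filesystem to fill its partition by calling resize2fs,
    # as the extend-filesystems service does above. Illustrative wrapper only.
    import subprocess
    import sys

    def grow_ext4(device: str) -> None:
        # With no explicit size argument, resize2fs grows the filesystem to the size
        # of the underlying partition; for a mounted ext4 this is an on-line resize.
        result = subprocess.run(["resize2fs", device], capture_output=True, text=True)
        if result.returncode != 0:
            sys.exit(f"resize2fs {device} failed: {result.stderr.strip()}")
        print(result.stdout.strip())

    if __name__ == "__main__":
        grow_ext4("/dev/sda9")  # device taken from the log above
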
May 8 00:39:48.905734 extend-filesystems[1455]: Resized filesystem in /dev/sda9 May 8 00:39:48.907576 containerd[1484]: time="2025-05-08T00:39:48.897119277Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 00:39:48.907576 containerd[1484]: time="2025-05-08T00:39:48.897161647Z" level=info msg="Connect containerd service" May 8 00:39:48.907576 containerd[1484]: time="2025-05-08T00:39:48.897194167Z" level=info msg="using legacy CRI server" May 8 00:39:48.907576 containerd[1484]: time="2025-05-08T00:39:48.897200657Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 8 00:39:48.907576 containerd[1484]: time="2025-05-08T00:39:48.897321587Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 00:39:48.907576 containerd[1484]: time="2025-05-08T00:39:48.897810046Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" May 8 00:39:48.907576 containerd[1484]: time="2025-05-08T00:39:48.901470493Z" level=info msg="Start subscribing containerd event" May 8 00:39:48.907576 containerd[1484]: time="2025-05-08T00:39:48.901505042Z" level=info msg="Start recovering state" May 8 00:39:48.907576 containerd[1484]: time="2025-05-08T00:39:48.901555932Z" level=info msg="Start event monitor" May 8 00:39:48.907576 containerd[1484]: time="2025-05-08T00:39:48.901565842Z" level=info msg="Start snapshots syncer" May 8 00:39:48.907576 containerd[1484]: time="2025-05-08T00:39:48.901573742Z" level=info msg="Start cni network conf syncer for default" May 8 00:39:48.907576 containerd[1484]: time="2025-05-08T00:39:48.901580562Z" level=info msg="Start streaming server" May 8 00:39:48.907576 containerd[1484]: time="2025-05-08T00:39:48.901785412Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 00:39:48.907576 containerd[1484]: time="2025-05-08T00:39:48.904318880Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 00:39:48.900863 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 8 00:39:48.909746 systemd[1]: Started containerd.service - containerd container runtime. May 8 00:39:48.911318 containerd[1484]: time="2025-05-08T00:39:48.911138683Z" level=info msg="containerd successfully booted in 0.081692s" May 8 00:39:48.922567 systemd[1]: Started systemd-hostnamed.service - Hostname Service. May 8 00:39:48.923419 dbus-daemon[1453]: [system] Successfully activated service 'org.freedesktop.hostname1' May 8 00:39:48.924686 dbus-daemon[1453]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1533 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 8 00:39:48.938748 systemd[1]: Starting polkit.service - Authorization Manager... May 8 00:39:48.947728 polkitd[1551]: Started polkitd version 121 May 8 00:39:48.956160 polkitd[1551]: Loading rules from directory /etc/polkit-1/rules.d May 8 00:39:48.958150 polkitd[1551]: Loading rules from directory /usr/share/polkit-1/rules.d May 8 00:39:48.960039 polkitd[1551]: Finished loading, compiling and executing 2 rules May 8 00:39:48.960230 coreos-metadata[1531]: May 08 00:39:48.960 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 May 8 00:39:48.961348 dbus-daemon[1453]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 8 00:39:48.961467 systemd[1]: Started polkit.service - Authorization Manager. May 8 00:39:48.962150 polkitd[1551]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 8 00:39:48.971636 systemd-hostnamed[1533]: Hostname set to <172-237-145-97> (transient) May 8 00:39:48.971668 systemd-resolved[1385]: System hostname changed to '172-237-145-97'. May 8 00:39:49.096511 coreos-metadata[1531]: May 08 00:39:49.096 INFO Fetch successful May 8 00:39:49.114302 update-ssh-keys[1562]: Updated "/home/core/.ssh/authorized_keys" May 8 00:39:49.115863 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 8 00:39:49.118854 systemd[1]: Finished sshkeys.service. May 8 00:39:49.208428 tar[1467]: linux-amd64/LICENSE May 8 00:39:49.208428 tar[1467]: linux-amd64/README.md May 8 00:39:49.229525 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
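
Above, coreos-metadata-sshkeys fetches keys from the metadata service and update-ssh-keys rewrites /home/core/.ssh/authorized_keys. The sketch below shows an idempotent merge into an authorized_keys file; it is illustrative only and is not the Flatcar update-ssh-keys tool:

    # Sketch: idempotently merge public keys into an authorized_keys file, in the spirit
    # of the update-ssh-keys messages above. Not the Flatcar update-ssh-keys tool.
    import os

    def merge_authorized_keys(path: str, new_keys: list[str]) -> int:
        existing = set()
        if os.path.exists(path):
            with open(path) as f:
                existing = {line.strip() for line in f if line.strip()}
        to_add = [k.strip() for k in new_keys if k.strip() and k.strip() not in existing]
        with open(path, "a") as f:          # creates the file if it does not exist
            for key in to_add:
                f.write(key + "\n")
        os.chmod(path, 0o600)               # sshd rejects overly permissive key files
        return len(to_add)

    if __name__ == "__main__":
        added = merge_authorized_keys(
            "/home/core/.ssh/authorized_keys",        # path from the log above
            ["ssh-ed25519 AAAA... core@placeholder"], # placeholder key, not from this log
        )
        print(f'Updated "authorized_keys" ({added} new entries)')
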
May 8 00:39:49.512765 coreos-metadata[1452]: May 08 00:39:49.512 INFO Putting http://169.254.169.254/v1/token: Attempt #2 May 8 00:39:49.600395 systemd-networkd[1384]: eth0: Gained IPv6LL May 8 00:39:49.603699 coreos-metadata[1452]: May 08 00:39:49.603 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 May 8 00:39:49.604435 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 8 00:39:49.605501 systemd[1]: Reached target network-online.target - Network is Online. May 8 00:39:49.611450 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:39:49.615501 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 8 00:39:49.640470 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 8 00:39:49.790933 coreos-metadata[1452]: May 08 00:39:49.790 INFO Fetch successful May 8 00:39:49.790933 coreos-metadata[1452]: May 08 00:39:49.790 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 May 8 00:39:50.141966 coreos-metadata[1452]: May 08 00:39:50.141 INFO Fetch successful May 8 00:39:50.215115 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 8 00:39:50.216888 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 8 00:39:50.345122 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:39:50.346828 systemd[1]: Reached target multi-user.target - Multi-User System. May 8 00:39:50.347936 systemd[1]: Startup finished in 798ms (kernel) + 6.453s (initrd) + 4.579s (userspace) = 11.831s. May 8 00:39:50.383628 (kubelet)[1606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:39:50.910084 kubelet[1606]: E0508 00:39:50.910010 1606 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:39:50.913894 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:39:50.914128 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:39:50.914537 systemd[1]: kubelet.service: Consumed 808ms CPU time, 245.5M memory peak. May 8 00:39:53.129911 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 8 00:39:53.136659 systemd[1]: Started sshd@0-172.237.145.97:22-139.178.89.65:44752.service - OpenSSH per-connection server daemon (139.178.89.65:44752). May 8 00:39:53.484969 sshd[1619]: Accepted publickey for core from 139.178.89.65 port 44752 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA May 8 00:39:53.487319 sshd-session[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:39:53.497349 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 8 00:39:53.503449 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 8 00:39:53.512173 systemd-logind[1461]: New session 1 of user core. May 8 00:39:53.523781 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 8 00:39:53.536470 systemd[1]: Starting user@500.service - User Manager for UID 500... 
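
The coreos-metadata agent above first PUTs to http://169.254.169.254/v1/token and then fetches /v1/instance and /v1/network using the obtained token. A hedged sketch of that token-then-fetch pattern follows; the header names used here are assumptions made for illustration, not confirmed details of the Linode metadata API:

    # Sketch: token-then-fetch flow against the link-local metadata service seen above
    # (PUT /v1/token, then GET /v1/instance and /v1/network). The header names below
    # are assumptions for illustration, not confirmed Linode metadata API details.
    import urllib.request

    BASE = "http://169.254.169.254/v1"

    def get_token() -> str:
        req = urllib.request.Request(f"{BASE}/token", method="PUT")
        req.add_header("Metadata-Token-Expiry-Seconds", "3600")   # assumed header name
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read().decode().strip()

    def fetch(path: str, token: str) -> str:
        req = urllib.request.Request(f"{BASE}/{path}")
        req.add_header("Metadata-Token", token)                   # assumed header name
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read().decode()

    if __name__ == "__main__":
        token = get_token()
        for endpoint in ("instance", "network"):
            print(f"Fetching {BASE}/{endpoint}")
            print(fetch(endpoint, token)[:200])
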
May 8 00:39:53.539609 (systemd)[1623]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 00:39:53.543152 systemd-logind[1461]: New session c1 of user core. May 8 00:39:53.684673 systemd[1623]: Queued start job for default target default.target. May 8 00:39:53.692571 systemd[1623]: Created slice app.slice - User Application Slice. May 8 00:39:53.692601 systemd[1623]: Reached target paths.target - Paths. May 8 00:39:53.692649 systemd[1623]: Reached target timers.target - Timers. May 8 00:39:53.696669 systemd[1623]: Starting dbus.socket - D-Bus User Message Bus Socket... May 8 00:39:53.710140 systemd[1623]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 8 00:39:53.710439 systemd[1623]: Reached target sockets.target - Sockets. May 8 00:39:53.710763 systemd[1623]: Reached target basic.target - Basic System. May 8 00:39:53.710883 systemd[1623]: Reached target default.target - Main User Target. May 8 00:39:53.710972 systemd[1623]: Startup finished in 160ms. May 8 00:39:53.711103 systemd[1]: Started user@500.service - User Manager for UID 500. May 8 00:39:53.717383 systemd[1]: Started session-1.scope - Session 1 of User core. May 8 00:39:53.981445 systemd[1]: Started sshd@1-172.237.145.97:22-139.178.89.65:44762.service - OpenSSH per-connection server daemon (139.178.89.65:44762). May 8 00:39:54.301610 sshd[1634]: Accepted publickey for core from 139.178.89.65 port 44762 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA May 8 00:39:54.303191 sshd-session[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:39:54.309010 systemd-logind[1461]: New session 2 of user core. May 8 00:39:54.315341 systemd[1]: Started session-2.scope - Session 2 of User core. May 8 00:39:54.544406 sshd[1636]: Connection closed by 139.178.89.65 port 44762 May 8 00:39:54.545227 sshd-session[1634]: pam_unix(sshd:session): session closed for user core May 8 00:39:54.549142 systemd[1]: sshd@1-172.237.145.97:22-139.178.89.65:44762.service: Deactivated successfully. May 8 00:39:54.551618 systemd[1]: session-2.scope: Deactivated successfully. May 8 00:39:54.554025 systemd-logind[1461]: Session 2 logged out. Waiting for processes to exit. May 8 00:39:54.555429 systemd-logind[1461]: Removed session 2. May 8 00:39:54.611592 systemd[1]: Started sshd@2-172.237.145.97:22-139.178.89.65:44774.service - OpenSSH per-connection server daemon (139.178.89.65:44774). May 8 00:39:54.932439 sshd[1642]: Accepted publickey for core from 139.178.89.65 port 44774 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA May 8 00:39:54.934613 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:39:54.939615 systemd-logind[1461]: New session 3 of user core. May 8 00:39:54.945592 systemd[1]: Started session-3.scope - Session 3 of User core. May 8 00:39:55.171978 sshd[1644]: Connection closed by 139.178.89.65 port 44774 May 8 00:39:55.172881 sshd-session[1642]: pam_unix(sshd:session): session closed for user core May 8 00:39:55.181057 systemd[1]: sshd@2-172.237.145.97:22-139.178.89.65:44774.service: Deactivated successfully. May 8 00:39:55.184821 systemd[1]: session-3.scope: Deactivated successfully. May 8 00:39:55.186149 systemd-logind[1461]: Session 3 logged out. Waiting for processes to exit. May 8 00:39:55.187460 systemd-logind[1461]: Removed session 3. 
May 8 00:39:55.236045 systemd[1]: Started sshd@3-172.237.145.97:22-139.178.89.65:44784.service - OpenSSH per-connection server daemon (139.178.89.65:44784). May 8 00:39:55.573376 sshd[1650]: Accepted publickey for core from 139.178.89.65 port 44784 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA May 8 00:39:55.574825 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:39:55.578980 systemd-logind[1461]: New session 4 of user core. May 8 00:39:55.589328 systemd[1]: Started session-4.scope - Session 4 of User core. May 8 00:39:55.816976 sshd[1652]: Connection closed by 139.178.89.65 port 44784 May 8 00:39:55.817684 sshd-session[1650]: pam_unix(sshd:session): session closed for user core May 8 00:39:55.820538 systemd[1]: sshd@3-172.237.145.97:22-139.178.89.65:44784.service: Deactivated successfully. May 8 00:39:55.822576 systemd[1]: session-4.scope: Deactivated successfully. May 8 00:39:55.823730 systemd-logind[1461]: Session 4 logged out. Waiting for processes to exit. May 8 00:39:55.824953 systemd-logind[1461]: Removed session 4. May 8 00:39:55.886442 systemd[1]: Started sshd@4-172.237.145.97:22-139.178.89.65:44796.service - OpenSSH per-connection server daemon (139.178.89.65:44796). May 8 00:39:56.207942 sshd[1658]: Accepted publickey for core from 139.178.89.65 port 44796 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA May 8 00:39:56.209363 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:39:56.213835 systemd-logind[1461]: New session 5 of user core. May 8 00:39:56.223309 systemd[1]: Started session-5.scope - Session 5 of User core. May 8 00:39:56.412916 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 8 00:39:56.413373 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:39:56.430920 sudo[1661]: pam_unix(sudo:session): session closed for user root May 8 00:39:56.481167 sshd[1660]: Connection closed by 139.178.89.65 port 44796 May 8 00:39:56.482499 sshd-session[1658]: pam_unix(sshd:session): session closed for user core May 8 00:39:56.485869 systemd[1]: sshd@4-172.237.145.97:22-139.178.89.65:44796.service: Deactivated successfully. May 8 00:39:56.488427 systemd[1]: session-5.scope: Deactivated successfully. May 8 00:39:56.490643 systemd-logind[1461]: Session 5 logged out. Waiting for processes to exit. May 8 00:39:56.492088 systemd-logind[1461]: Removed session 5. May 8 00:39:56.547463 systemd[1]: Started sshd@5-172.237.145.97:22-139.178.89.65:44802.service - OpenSSH per-connection server daemon (139.178.89.65:44802). May 8 00:39:56.868031 sshd[1667]: Accepted publickey for core from 139.178.89.65 port 44802 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA May 8 00:39:56.869912 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:39:56.875622 systemd-logind[1461]: New session 6 of user core. May 8 00:39:56.886349 systemd[1]: Started session-6.scope - Session 6 of User core. 
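
The "Accepted publickey" lines above identify the client key by its SHA-256 fingerprint, which by OpenSSH convention is the unpadded base64 of the SHA-256 digest of the raw key blob. A small sketch of computing that fingerprint from an authorized_keys-style line, using a placeholder key rather than the one in this log:

    # Sketch: compute an OpenSSH-style "SHA256:..." fingerprint from a public key line,
    # matching the format sshd logs in the "Accepted publickey" entries above.
    import base64
    import hashlib

    def ssh_fingerprint(pubkey_line: str) -> str:
        # An authorized_keys line is "<type> <base64-blob> [comment]"; the fingerprint
        # is the SHA-256 of the decoded blob, base64-encoded with trailing '=' stripped.
        blob = base64.b64decode(pubkey_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    if __name__ == "__main__":
        # Placeholder key material for illustration; not the key from this log.
        example = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB8Qe1tRyfFUnInX1o7nW0L5x7mUjH9n0m9s1Yx8a2Qx demo"
        print(ssh_fingerprint(example))
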
May 8 00:39:57.066805 sudo[1671]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 8 00:39:57.067162 sudo[1671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:39:57.072796 sudo[1671]: pam_unix(sudo:session): session closed for user root May 8 00:39:57.079762 sudo[1670]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 8 00:39:57.080100 sudo[1670]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:39:57.101764 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 8 00:39:57.136620 augenrules[1693]: No rules May 8 00:39:57.138660 systemd[1]: audit-rules.service: Deactivated successfully. May 8 00:39:57.138944 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 8 00:39:57.140642 sudo[1670]: pam_unix(sudo:session): session closed for user root May 8 00:39:57.190484 sshd[1669]: Connection closed by 139.178.89.65 port 44802 May 8 00:39:57.191050 sshd-session[1667]: pam_unix(sshd:session): session closed for user core May 8 00:39:57.194873 systemd[1]: sshd@5-172.237.145.97:22-139.178.89.65:44802.service: Deactivated successfully. May 8 00:39:57.197551 systemd[1]: session-6.scope: Deactivated successfully. May 8 00:39:57.199291 systemd-logind[1461]: Session 6 logged out. Waiting for processes to exit. May 8 00:39:57.200460 systemd-logind[1461]: Removed session 6. May 8 00:39:57.255492 systemd[1]: Started sshd@6-172.237.145.97:22-139.178.89.65:51584.service - OpenSSH per-connection server daemon (139.178.89.65:51584). May 8 00:39:57.576539 sshd[1702]: Accepted publickey for core from 139.178.89.65 port 51584 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA May 8 00:39:57.578396 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:39:57.583477 systemd-logind[1461]: New session 7 of user core. May 8 00:39:57.597346 systemd[1]: Started session-7.scope - Session 7 of User core. May 8 00:39:57.771629 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 00:39:57.771997 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:39:58.043430 systemd[1]: Starting docker.service - Docker Application Container Engine... May 8 00:39:58.043569 (dockerd)[1723]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 8 00:39:58.285996 dockerd[1723]: time="2025-05-08T00:39:58.285362439Z" level=info msg="Starting up" May 8 00:39:58.370495 dockerd[1723]: time="2025-05-08T00:39:58.369947454Z" level=info msg="Loading containers: start." May 8 00:39:58.503261 kernel: Initializing XFRM netlink socket May 8 00:39:58.577126 systemd-networkd[1384]: docker0: Link UP May 8 00:39:58.601324 dockerd[1723]: time="2025-05-08T00:39:58.601288933Z" level=info msg="Loading containers: done." 
May 8 00:39:58.614590 dockerd[1723]: time="2025-05-08T00:39:58.614556459Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 8 00:39:58.614717 dockerd[1723]: time="2025-05-08T00:39:58.614620589Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 May 8 00:39:58.614751 dockerd[1723]: time="2025-05-08T00:39:58.614720289Z" level=info msg="Daemon has completed initialization" May 8 00:39:58.641251 dockerd[1723]: time="2025-05-08T00:39:58.641038083Z" level=info msg="API listen on /run/docker.sock" May 8 00:39:58.641266 systemd[1]: Started docker.service - Docker Application Container Engine. May 8 00:39:59.240260 containerd[1484]: time="2025-05-08T00:39:59.240101944Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 8 00:40:00.008035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2283177969.mount: Deactivated successfully. May 8 00:40:01.159395 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 8 00:40:01.169473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:40:01.333514 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:40:01.333658 (kubelet)[1976]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:40:01.380786 kubelet[1976]: E0508 00:40:01.380123 1976 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:40:01.388529 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:40:01.388730 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:40:01.389500 systemd[1]: kubelet.service: Consumed 183ms CPU time, 94.9M memory peak. 
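
Once dockerd reports "API listen on /run/docker.sock" above, the Engine API can be probed with a GET /_ping over the Unix socket. A minimal standard-library sketch of such a liveness check, assuming the default socket path from the log:

    # Sketch: probe the Docker Engine API over the Unix socket logged above
    # ("API listen on /run/docker.sock") with a plain GET /_ping request.
    import socket

    def docker_ping(sock_path: str = "/run/docker.sock") -> bool:
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.settimeout(5)
            s.connect(sock_path)
            s.sendall(b"GET /_ping HTTP/1.1\r\nHost: docker\r\nConnection: close\r\n\r\n")
            data = b""
            while True:
                chunk = s.recv(4096)
                if not chunk:
                    break
                data += chunk
        # A healthy daemon answers 200 with the body "OK".
        return data.startswith(b"HTTP/1.1 200")

    if __name__ == "__main__":
        print("docker responding:", docker_ping())
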
May 8 00:40:02.203148 containerd[1484]: time="2025-05-08T00:40:02.203098561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:02.205250 containerd[1484]: time="2025-05-08T00:40:02.204995839Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" May 8 00:40:02.206102 containerd[1484]: time="2025-05-08T00:40:02.206061998Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:02.209125 containerd[1484]: time="2025-05-08T00:40:02.208721475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:02.209919 containerd[1484]: time="2025-05-08T00:40:02.209886904Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.96975063s" May 8 00:40:02.209966 containerd[1484]: time="2025-05-08T00:40:02.209919624Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 8 00:40:02.236719 containerd[1484]: time="2025-05-08T00:40:02.236682367Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 8 00:40:04.611917 containerd[1484]: time="2025-05-08T00:40:04.611857342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:04.612863 containerd[1484]: time="2025-05-08T00:40:04.612825681Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" May 8 00:40:04.613444 containerd[1484]: time="2025-05-08T00:40:04.613390500Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:04.615742 containerd[1484]: time="2025-05-08T00:40:04.615719808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:04.616838 containerd[1484]: time="2025-05-08T00:40:04.616619907Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 2.37977883s" May 8 00:40:04.616838 containerd[1484]: time="2025-05-08T00:40:04.616650297Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 8 00:40:04.639100 
containerd[1484]: time="2025-05-08T00:40:04.639080465Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 8 00:40:06.244032 containerd[1484]: time="2025-05-08T00:40:06.242985571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:06.244032 containerd[1484]: time="2025-05-08T00:40:06.243992700Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" May 8 00:40:06.244609 containerd[1484]: time="2025-05-08T00:40:06.244586499Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:06.247607 containerd[1484]: time="2025-05-08T00:40:06.247578836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:06.248964 containerd[1484]: time="2025-05-08T00:40:06.248904115Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.60971011s" May 8 00:40:06.249009 containerd[1484]: time="2025-05-08T00:40:06.248965745Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 8 00:40:06.284269 containerd[1484]: time="2025-05-08T00:40:06.284174410Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 8 00:40:07.555395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1886968650.mount: Deactivated successfully. 
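
The pull messages above report both the byte count and the elapsed time for each image, so the effective download rate can be estimated directly from the logged numbers; the snippet below is just that arithmetic:

    # Rough arithmetic only: effective pull rate for the images above, computed from
    # the byte counts and durations that containerd reported in this log.
    pulls = {
        "kube-apiserver:v1.30.12": (32671673, 2.96975063),
        "kube-controller-manager:v1.30.12": (31105907, 2.37977883),
        "kube-scheduler:v1.30.12": (19392073, 1.60971011),
    }

    for image, (size_bytes, seconds) in pulls.items():
        print(f"{image}: {size_bytes / seconds / (1024 * 1024):.1f} MiB/s")
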
May 8 00:40:07.839084 containerd[1484]: time="2025-05-08T00:40:07.838083706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:07.839084 containerd[1484]: time="2025-05-08T00:40:07.838967805Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" May 8 00:40:07.839679 containerd[1484]: time="2025-05-08T00:40:07.839630004Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:07.841189 containerd[1484]: time="2025-05-08T00:40:07.841157663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:07.842254 containerd[1484]: time="2025-05-08T00:40:07.841855052Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.556075404s" May 8 00:40:07.842254 containerd[1484]: time="2025-05-08T00:40:07.841885792Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 8 00:40:07.866963 containerd[1484]: time="2025-05-08T00:40:07.866789567Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 8 00:40:08.573371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2663180813.mount: Deactivated successfully. 
May 8 00:40:09.559256 containerd[1484]: time="2025-05-08T00:40:09.557826666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:09.559743 containerd[1484]: time="2025-05-08T00:40:09.559406094Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 8 00:40:09.562533 containerd[1484]: time="2025-05-08T00:40:09.560775123Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:09.563527 containerd[1484]: time="2025-05-08T00:40:09.563481000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:09.564200 containerd[1484]: time="2025-05-08T00:40:09.564166000Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.697350053s" May 8 00:40:09.564263 containerd[1484]: time="2025-05-08T00:40:09.564199630Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 8 00:40:09.587880 containerd[1484]: time="2025-05-08T00:40:09.587842876Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 8 00:40:10.288250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2708337254.mount: Deactivated successfully. 
May 8 00:40:10.293030 containerd[1484]: time="2025-05-08T00:40:10.292981661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:10.293711 containerd[1484]: time="2025-05-08T00:40:10.293664750Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" May 8 00:40:10.295265 containerd[1484]: time="2025-05-08T00:40:10.294114870Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:10.298220 containerd[1484]: time="2025-05-08T00:40:10.295840318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:10.298220 containerd[1484]: time="2025-05-08T00:40:10.296547287Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 708.675481ms" May 8 00:40:10.298220 containerd[1484]: time="2025-05-08T00:40:10.296570137Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 8 00:40:10.317085 containerd[1484]: time="2025-05-08T00:40:10.317063127Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 8 00:40:11.073996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3606406050.mount: Deactivated successfully. May 8 00:40:11.409421 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 8 00:40:11.414362 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:40:11.561526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:40:11.561561 (kubelet)[2129]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:40:11.606466 kubelet[2129]: E0508 00:40:11.606402 2129 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:40:11.610736 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:40:11.610936 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:40:11.611493 systemd[1]: kubelet.service: Consumed 169ms CPU time, 95.9M memory peak. 
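
The kubelet messages here (and the earlier run.go:74 failures) use the standard klog header: a severity letter, MMDD date, timestamp, PID, and source file:line before the message. A small sketch of pulling those fields apart; the regex is illustrative and is not taken from klog itself:

    # Sketch: split the klog-style header used by the kubelet lines above, e.g.
    # 'E0508 00:40:11.606402 2129 run.go:74] "command failed" ...'.
    import re

    KLOG_RE = re.compile(
        r"^(?P<severity>[IWEF])"                 # Info / Warning / Error / Fatal
        r"(?P<month>\d{2})(?P<day>\d{2}) "
        r"(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+"
        r"(?P<pid>\d+) "
        r"(?P<source>[^ \]]+:\d+)\] "
        r"(?P<message>.*)$"
    )

    def parse_klog(line: str):
        match = KLOG_RE.match(line)
        return match.groupdict() if match else None

    if __name__ == "__main__":
        sample = 'E0508 00:40:11.606402 2129 run.go:74] "command failed" err="failed to load kubelet config file"'
        print(parse_klog(sample))
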
May 8 00:40:13.664452 containerd[1484]: time="2025-05-08T00:40:13.664384079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:13.665358 containerd[1484]: time="2025-05-08T00:40:13.665319928Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" May 8 00:40:13.666196 containerd[1484]: time="2025-05-08T00:40:13.665817958Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:13.668362 containerd[1484]: time="2025-05-08T00:40:13.668330075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:13.669513 containerd[1484]: time="2025-05-08T00:40:13.669476824Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.352388167s" May 8 00:40:13.669513 containerd[1484]: time="2025-05-08T00:40:13.669511954Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 8 00:40:15.543840 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:40:15.544548 systemd[1]: kubelet.service: Consumed 169ms CPU time, 95.9M memory peak. May 8 00:40:15.552385 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:40:15.572836 systemd[1]: Reload requested from client PID 2207 ('systemctl') (unit session-7.scope)... May 8 00:40:15.572850 systemd[1]: Reloading... May 8 00:40:15.707228 zram_generator::config[2261]: No configuration found. May 8 00:40:15.799687 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:40:15.887185 systemd[1]: Reloading finished in 313 ms. May 8 00:40:15.928654 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:40:15.933485 (kubelet)[2297]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:40:15.934128 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:40:15.934834 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:40:15.935081 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:40:15.935112 systemd[1]: kubelet.service: Consumed 112ms CPU time, 83.6M memory peak. May 8 00:40:15.941763 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:40:16.070121 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:40:16.073694 (kubelet)[2309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:40:16.108340 kubelet[2309]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:40:16.108340 kubelet[2309]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:40:16.108340 kubelet[2309]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:40:16.108641 kubelet[2309]: I0508 00:40:16.108392 2309 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:40:16.467521 kubelet[2309]: I0508 00:40:16.467490 2309 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 00:40:16.467521 kubelet[2309]: I0508 00:40:16.467515 2309 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:40:16.467675 kubelet[2309]: I0508 00:40:16.467656 2309 server.go:927] "Client rotation is on, will bootstrap in background" May 8 00:40:16.485782 kubelet[2309]: I0508 00:40:16.485756 2309 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:40:16.487232 kubelet[2309]: E0508 00:40:16.487190 2309 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.237.145.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.237.145.97:6443: connect: connection refused May 8 00:40:16.498223 kubelet[2309]: I0508 00:40:16.496270 2309 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:40:16.498477 kubelet[2309]: I0508 00:40:16.498448 2309 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:40:16.498685 kubelet[2309]: I0508 00:40:16.498477 2309 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-237-145-97","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 00:40:16.499170 kubelet[2309]: I0508 00:40:16.499152 2309 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:40:16.499170 kubelet[2309]: I0508 00:40:16.499171 2309 container_manager_linux.go:301] "Creating device plugin manager" May 8 00:40:16.499320 kubelet[2309]: I0508 00:40:16.499304 2309 state_mem.go:36] "Initialized new in-memory state store" May 8 00:40:16.500218 kubelet[2309]: I0508 00:40:16.500187 2309 kubelet.go:400] "Attempting to sync node with API server" May 8 00:40:16.500319 kubelet[2309]: I0508 00:40:16.500302 2309 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:40:16.500363 kubelet[2309]: I0508 00:40:16.500333 2309 kubelet.go:312] "Adding apiserver pod source" May 8 00:40:16.500363 kubelet[2309]: I0508 00:40:16.500347 2309 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:40:16.501244 kubelet[2309]: W0508 00:40:16.501093 2309 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.237.145.97:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-145-97&limit=500&resourceVersion=0": dial tcp 172.237.145.97:6443: connect: connection refused May 8 00:40:16.501244 kubelet[2309]: E0508 00:40:16.501138 2309 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.237.145.97:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-145-97&limit=500&resourceVersion=0": dial tcp 172.237.145.97:6443: connect: connection refused May 8 00:40:16.504027 kubelet[2309]: I0508 00:40:16.504006 2309 kuberuntime_manager.go:261] "Container runtime initialized" 
containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 00:40:16.506054 kubelet[2309]: I0508 00:40:16.505647 2309 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:40:16.506054 kubelet[2309]: W0508 00:40:16.505700 2309 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 8 00:40:16.507171 kubelet[2309]: I0508 00:40:16.506189 2309 server.go:1264] "Started kubelet" May 8 00:40:16.507171 kubelet[2309]: W0508 00:40:16.506289 2309 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.237.145.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.237.145.97:6443: connect: connection refused May 8 00:40:16.507171 kubelet[2309]: E0508 00:40:16.506319 2309 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.237.145.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.237.145.97:6443: connect: connection refused May 8 00:40:16.507490 kubelet[2309]: I0508 00:40:16.507449 2309 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:40:16.508196 kubelet[2309]: I0508 00:40:16.508175 2309 server.go:455] "Adding debug handlers to kubelet server" May 8 00:40:16.511657 kubelet[2309]: I0508 00:40:16.511180 2309 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:40:16.511657 kubelet[2309]: I0508 00:40:16.511439 2309 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:40:16.511657 kubelet[2309]: E0508 00:40:16.511530 2309 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.237.145.97:6443/api/v1/namespaces/default/events\": dial tcp 172.237.145.97:6443: connect: connection refused" event="&Event{ObjectMeta:{172-237-145-97.183d66657985f74a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-237-145-97,UID:172-237-145-97,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-237-145-97,},FirstTimestamp:2025-05-08 00:40:16.506173258 +0000 UTC m=+0.428523943,LastTimestamp:2025-05-08 00:40:16.506173258 +0000 UTC m=+0.428523943,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-237-145-97,}" May 8 00:40:16.514239 kubelet[2309]: I0508 00:40:16.512849 2309 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:40:16.516809 kubelet[2309]: E0508 00:40:16.516785 2309 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:40:16.516981 kubelet[2309]: E0508 00:40:16.516965 2309 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172-237-145-97\" not found" May 8 00:40:16.517017 kubelet[2309]: I0508 00:40:16.516999 2309 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 00:40:16.517082 kubelet[2309]: I0508 00:40:16.517065 2309 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:40:16.517124 kubelet[2309]: I0508 00:40:16.517110 2309 reconciler.go:26] "Reconciler: start to sync state" May 8 00:40:16.517357 kubelet[2309]: W0508 00:40:16.517324 2309 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.237.145.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.237.145.97:6443: connect: connection refused May 8 00:40:16.517389 kubelet[2309]: E0508 00:40:16.517359 2309 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.237.145.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.237.145.97:6443: connect: connection refused May 8 00:40:16.517548 kubelet[2309]: E0508 00:40:16.517520 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.145.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-145-97?timeout=10s\": dial tcp 172.237.145.97:6443: connect: connection refused" interval="200ms" May 8 00:40:16.517946 kubelet[2309]: I0508 00:40:16.517924 2309 factory.go:221] Registration of the systemd container factory successfully May 8 00:40:16.518011 kubelet[2309]: I0508 00:40:16.517993 2309 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:40:16.520041 kubelet[2309]: I0508 00:40:16.519073 2309 factory.go:221] Registration of the containerd container factory successfully May 8 00:40:16.529604 kubelet[2309]: I0508 00:40:16.529563 2309 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:40:16.530696 kubelet[2309]: I0508 00:40:16.530677 2309 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 00:40:16.530743 kubelet[2309]: I0508 00:40:16.530700 2309 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:40:16.530743 kubelet[2309]: I0508 00:40:16.530713 2309 kubelet.go:2337] "Starting kubelet main sync loop" May 8 00:40:16.530781 kubelet[2309]: E0508 00:40:16.530747 2309 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:40:16.538629 kubelet[2309]: W0508 00:40:16.538584 2309 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.237.145.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.237.145.97:6443: connect: connection refused May 8 00:40:16.538629 kubelet[2309]: E0508 00:40:16.538616 2309 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.237.145.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.237.145.97:6443: connect: connection refused May 8 00:40:16.547169 kubelet[2309]: I0508 00:40:16.547117 2309 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:40:16.547169 kubelet[2309]: I0508 00:40:16.547130 2309 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:40:16.547169 kubelet[2309]: I0508 00:40:16.547154 2309 state_mem.go:36] "Initialized new in-memory state store" May 8 00:40:16.548817 kubelet[2309]: I0508 00:40:16.548803 2309 policy_none.go:49] "None policy: Start" May 8 00:40:16.549323 kubelet[2309]: I0508 00:40:16.549308 2309 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:40:16.549367 kubelet[2309]: I0508 00:40:16.549338 2309 state_mem.go:35] "Initializing new in-memory state store" May 8 00:40:16.556090 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 8 00:40:16.564614 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 8 00:40:16.573609 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
May 8 00:40:16.577742 kubelet[2309]: I0508 00:40:16.577726 2309 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:40:16.579145 kubelet[2309]: I0508 00:40:16.578262 2309 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:40:16.579145 kubelet[2309]: I0508 00:40:16.579019 2309 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:40:16.580090 kubelet[2309]: E0508 00:40:16.580057 2309 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-237-145-97\" not found" May 8 00:40:16.619652 kubelet[2309]: I0508 00:40:16.619431 2309 kubelet_node_status.go:73] "Attempting to register node" node="172-237-145-97" May 8 00:40:16.619806 kubelet[2309]: E0508 00:40:16.619788 2309 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.237.145.97:6443/api/v1/nodes\": dial tcp 172.237.145.97:6443: connect: connection refused" node="172-237-145-97" May 8 00:40:16.630877 kubelet[2309]: I0508 00:40:16.630848 2309 topology_manager.go:215] "Topology Admit Handler" podUID="a528a68a361216b63bfabf733c676f76" podNamespace="kube-system" podName="kube-apiserver-172-237-145-97" May 8 00:40:16.631926 kubelet[2309]: I0508 00:40:16.631908 2309 topology_manager.go:215] "Topology Admit Handler" podUID="89c75020c18d4da73453afb71f08ce37" podNamespace="kube-system" podName="kube-controller-manager-172-237-145-97" May 8 00:40:16.633148 kubelet[2309]: I0508 00:40:16.633130 2309 topology_manager.go:215] "Topology Admit Handler" podUID="c9aea40feafa59c426f86e44b9ad443e" podNamespace="kube-system" podName="kube-scheduler-172-237-145-97" May 8 00:40:16.639807 systemd[1]: Created slice kubepods-burstable-poda528a68a361216b63bfabf733c676f76.slice - libcontainer container kubepods-burstable-poda528a68a361216b63bfabf733c676f76.slice. May 8 00:40:16.659331 systemd[1]: Created slice kubepods-burstable-pod89c75020c18d4da73453afb71f08ce37.slice - libcontainer container kubepods-burstable-pod89c75020c18d4da73453afb71f08ce37.slice. May 8 00:40:16.671800 systemd[1]: Created slice kubepods-burstable-podc9aea40feafa59c426f86e44b9ad443e.slice - libcontainer container kubepods-burstable-podc9aea40feafa59c426f86e44b9ad443e.slice. 
May 8 00:40:16.718399 kubelet[2309]: I0508 00:40:16.718337 2309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89c75020c18d4da73453afb71f08ce37-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-145-97\" (UID: \"89c75020c18d4da73453afb71f08ce37\") " pod="kube-system/kube-controller-manager-172-237-145-97" May 8 00:40:16.718399 kubelet[2309]: I0508 00:40:16.718364 2309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c9aea40feafa59c426f86e44b9ad443e-kubeconfig\") pod \"kube-scheduler-172-237-145-97\" (UID: \"c9aea40feafa59c426f86e44b9ad443e\") " pod="kube-system/kube-scheduler-172-237-145-97" May 8 00:40:16.718399 kubelet[2309]: I0508 00:40:16.718380 2309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a528a68a361216b63bfabf733c676f76-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-145-97\" (UID: \"a528a68a361216b63bfabf733c676f76\") " pod="kube-system/kube-apiserver-172-237-145-97" May 8 00:40:16.718399 kubelet[2309]: I0508 00:40:16.718394 2309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89c75020c18d4da73453afb71f08ce37-kubeconfig\") pod \"kube-controller-manager-172-237-145-97\" (UID: \"89c75020c18d4da73453afb71f08ce37\") " pod="kube-system/kube-controller-manager-172-237-145-97" May 8 00:40:16.718497 kubelet[2309]: I0508 00:40:16.718408 2309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89c75020c18d4da73453afb71f08ce37-ca-certs\") pod \"kube-controller-manager-172-237-145-97\" (UID: \"89c75020c18d4da73453afb71f08ce37\") " pod="kube-system/kube-controller-manager-172-237-145-97" May 8 00:40:16.718497 kubelet[2309]: I0508 00:40:16.718421 2309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/89c75020c18d4da73453afb71f08ce37-flexvolume-dir\") pod \"kube-controller-manager-172-237-145-97\" (UID: \"89c75020c18d4da73453afb71f08ce37\") " pod="kube-system/kube-controller-manager-172-237-145-97" May 8 00:40:16.718497 kubelet[2309]: I0508 00:40:16.718433 2309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89c75020c18d4da73453afb71f08ce37-k8s-certs\") pod \"kube-controller-manager-172-237-145-97\" (UID: \"89c75020c18d4da73453afb71f08ce37\") " pod="kube-system/kube-controller-manager-172-237-145-97" May 8 00:40:16.718497 kubelet[2309]: I0508 00:40:16.718447 2309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a528a68a361216b63bfabf733c676f76-ca-certs\") pod \"kube-apiserver-172-237-145-97\" (UID: \"a528a68a361216b63bfabf733c676f76\") " pod="kube-system/kube-apiserver-172-237-145-97" May 8 00:40:16.718497 kubelet[2309]: I0508 00:40:16.718459 2309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a528a68a361216b63bfabf733c676f76-k8s-certs\") pod 
\"kube-apiserver-172-237-145-97\" (UID: \"a528a68a361216b63bfabf733c676f76\") " pod="kube-system/kube-apiserver-172-237-145-97" May 8 00:40:16.718814 kubelet[2309]: E0508 00:40:16.718769 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.145.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-145-97?timeout=10s\": dial tcp 172.237.145.97:6443: connect: connection refused" interval="400ms" May 8 00:40:16.821073 kubelet[2309]: I0508 00:40:16.821058 2309 kubelet_node_status.go:73] "Attempting to register node" node="172-237-145-97" May 8 00:40:16.821264 kubelet[2309]: E0508 00:40:16.821246 2309 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.237.145.97:6443/api/v1/nodes\": dial tcp 172.237.145.97:6443: connect: connection refused" node="172-237-145-97" May 8 00:40:16.958087 kubelet[2309]: E0508 00:40:16.958047 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:16.958642 containerd[1484]: time="2025-05-08T00:40:16.958587785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-237-145-97,Uid:a528a68a361216b63bfabf733c676f76,Namespace:kube-system,Attempt:0,}" May 8 00:40:16.970199 kubelet[2309]: E0508 00:40:16.970125 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:16.970665 containerd[1484]: time="2025-05-08T00:40:16.970360633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-237-145-97,Uid:89c75020c18d4da73453afb71f08ce37,Namespace:kube-system,Attempt:0,}" May 8 00:40:16.973873 kubelet[2309]: E0508 00:40:16.973833 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:16.974133 containerd[1484]: time="2025-05-08T00:40:16.974047900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-237-145-97,Uid:c9aea40feafa59c426f86e44b9ad443e,Namespace:kube-system,Attempt:0,}" May 8 00:40:17.119863 kubelet[2309]: E0508 00:40:17.119781 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.145.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-145-97?timeout=10s\": dial tcp 172.237.145.97:6443: connect: connection refused" interval="800ms" May 8 00:40:17.223612 kubelet[2309]: I0508 00:40:17.223463 2309 kubelet_node_status.go:73] "Attempting to register node" node="172-237-145-97" May 8 00:40:17.224319 kubelet[2309]: E0508 00:40:17.224247 2309 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.237.145.97:6443/api/v1/nodes\": dial tcp 172.237.145.97:6443: connect: connection refused" node="172-237-145-97" May 8 00:40:17.376271 kubelet[2309]: W0508 00:40:17.376156 2309 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.237.145.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.237.145.97:6443: connect: connection refused May 8 00:40:17.376271 kubelet[2309]: E0508 00:40:17.376260 2309 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.237.145.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.237.145.97:6443: connect: connection refused May 8 00:40:17.624566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount614305670.mount: Deactivated successfully. May 8 00:40:17.629065 containerd[1484]: time="2025-05-08T00:40:17.629022915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:40:17.630819 containerd[1484]: time="2025-05-08T00:40:17.630745223Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:40:17.631590 containerd[1484]: time="2025-05-08T00:40:17.631533502Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:40:17.632330 containerd[1484]: time="2025-05-08T00:40:17.632283971Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 8 00:40:17.632672 containerd[1484]: time="2025-05-08T00:40:17.632605031Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:40:17.632672 containerd[1484]: time="2025-05-08T00:40:17.632651891Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:40:17.633117 containerd[1484]: time="2025-05-08T00:40:17.633072711Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:40:17.636694 containerd[1484]: time="2025-05-08T00:40:17.636655277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:40:17.638231 containerd[1484]: time="2025-05-08T00:40:17.637519326Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 667.102133ms" May 8 00:40:17.639022 containerd[1484]: time="2025-05-08T00:40:17.638975855Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 664.872585ms" May 8 00:40:17.644276 containerd[1484]: time="2025-05-08T00:40:17.644243669Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 685.563674ms" May 8 
00:40:17.703749 kubelet[2309]: W0508 00:40:17.703680 2309 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.237.145.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.237.145.97:6443: connect: connection refused May 8 00:40:17.704146 kubelet[2309]: E0508 00:40:17.703972 2309 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.237.145.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.237.145.97:6443: connect: connection refused May 8 00:40:17.743962 containerd[1484]: time="2025-05-08T00:40:17.743665220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:17.743962 containerd[1484]: time="2025-05-08T00:40:17.743729720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:17.743962 containerd[1484]: time="2025-05-08T00:40:17.743748780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:17.743962 containerd[1484]: time="2025-05-08T00:40:17.743836710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:17.746121 containerd[1484]: time="2025-05-08T00:40:17.745967078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:17.746121 containerd[1484]: time="2025-05-08T00:40:17.746061718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:17.746121 containerd[1484]: time="2025-05-08T00:40:17.746096358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:17.746325 containerd[1484]: time="2025-05-08T00:40:17.746186028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:17.750351 containerd[1484]: time="2025-05-08T00:40:17.750283613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:17.750556 containerd[1484]: time="2025-05-08T00:40:17.750514233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:17.750587 containerd[1484]: time="2025-05-08T00:40:17.750563553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:17.750856 containerd[1484]: time="2025-05-08T00:40:17.750801133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:17.778405 systemd[1]: Started cri-containerd-6b14860d2707cc976c4ee93079432174bdce3ba4da5496aefca23aa6e70edcfd.scope - libcontainer container 6b14860d2707cc976c4ee93079432174bdce3ba4da5496aefca23aa6e70edcfd. 
May 8 00:40:17.783919 systemd[1]: Started cri-containerd-b9f43bbe6997ba9c6ac9dd344821e968761c7ebca16e65405bb04cebf67ee9c2.scope - libcontainer container b9f43bbe6997ba9c6ac9dd344821e968761c7ebca16e65405bb04cebf67ee9c2. May 8 00:40:17.786735 systemd[1]: Started cri-containerd-bb7b73a935368c6686567d619906897500374f84c14d98b8bcf3270e19678610.scope - libcontainer container bb7b73a935368c6686567d619906897500374f84c14d98b8bcf3270e19678610. May 8 00:40:17.850394 containerd[1484]: time="2025-05-08T00:40:17.850335463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-237-145-97,Uid:89c75020c18d4da73453afb71f08ce37,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb7b73a935368c6686567d619906897500374f84c14d98b8bcf3270e19678610\"" May 8 00:40:17.852095 kubelet[2309]: E0508 00:40:17.851797 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:17.854775 containerd[1484]: time="2025-05-08T00:40:17.854750309Z" level=info msg="CreateContainer within sandbox \"bb7b73a935368c6686567d619906897500374f84c14d98b8bcf3270e19678610\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 00:40:17.866764 containerd[1484]: time="2025-05-08T00:40:17.866674817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-237-145-97,Uid:a528a68a361216b63bfabf733c676f76,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b14860d2707cc976c4ee93079432174bdce3ba4da5496aefca23aa6e70edcfd\"" May 8 00:40:17.867442 kubelet[2309]: E0508 00:40:17.867373 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:17.869717 containerd[1484]: time="2025-05-08T00:40:17.869654274Z" level=info msg="CreateContainer within sandbox \"6b14860d2707cc976c4ee93079432174bdce3ba4da5496aefca23aa6e70edcfd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 00:40:17.874942 containerd[1484]: time="2025-05-08T00:40:17.874790939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-237-145-97,Uid:c9aea40feafa59c426f86e44b9ad443e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9f43bbe6997ba9c6ac9dd344821e968761c7ebca16e65405bb04cebf67ee9c2\"" May 8 00:40:17.876381 kubelet[2309]: E0508 00:40:17.876361 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:17.879159 containerd[1484]: time="2025-05-08T00:40:17.879120885Z" level=info msg="CreateContainer within sandbox \"b9f43bbe6997ba9c6ac9dd344821e968761c7ebca16e65405bb04cebf67ee9c2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 00:40:17.881656 containerd[1484]: time="2025-05-08T00:40:17.881178133Z" level=info msg="CreateContainer within sandbox \"bb7b73a935368c6686567d619906897500374f84c14d98b8bcf3270e19678610\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"83e2f4e44555048e2ec3cb7bed85c45827a42c4cc0725b6596f8d6707c0e1855\"" May 8 00:40:17.882076 containerd[1484]: time="2025-05-08T00:40:17.882036422Z" level=info msg="StartContainer for \"83e2f4e44555048e2ec3cb7bed85c45827a42c4cc0725b6596f8d6707c0e1855\"" May 8 00:40:17.885264 containerd[1484]: 
time="2025-05-08T00:40:17.885180149Z" level=info msg="CreateContainer within sandbox \"6b14860d2707cc976c4ee93079432174bdce3ba4da5496aefca23aa6e70edcfd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2eb4240f3c1241282c729da8c75b1421722f42af5a9613a0668753eb60add1c4\"" May 8 00:40:17.885726 containerd[1484]: time="2025-05-08T00:40:17.885701898Z" level=info msg="StartContainer for \"2eb4240f3c1241282c729da8c75b1421722f42af5a9613a0668753eb60add1c4\"" May 8 00:40:17.894234 containerd[1484]: time="2025-05-08T00:40:17.894158400Z" level=info msg="CreateContainer within sandbox \"b9f43bbe6997ba9c6ac9dd344821e968761c7ebca16e65405bb04cebf67ee9c2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"304a3e1e9fd328a97f6e8a9a0d057028003e26ed68808796eae59a629bad55a0\"" May 8 00:40:17.895860 containerd[1484]: time="2025-05-08T00:40:17.894557579Z" level=info msg="StartContainer for \"304a3e1e9fd328a97f6e8a9a0d057028003e26ed68808796eae59a629bad55a0\"" May 8 00:40:17.921085 kubelet[2309]: W0508 00:40:17.920650 2309 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.237.145.97:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-145-97&limit=500&resourceVersion=0": dial tcp 172.237.145.97:6443: connect: connection refused May 8 00:40:17.921085 kubelet[2309]: E0508 00:40:17.920722 2309 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.237.145.97:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-145-97&limit=500&resourceVersion=0": dial tcp 172.237.145.97:6443: connect: connection refused May 8 00:40:17.921085 kubelet[2309]: E0508 00:40:17.921016 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.145.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-145-97?timeout=10s\": dial tcp 172.237.145.97:6443: connect: connection refused" interval="1.6s" May 8 00:40:17.934380 systemd[1]: Started cri-containerd-2eb4240f3c1241282c729da8c75b1421722f42af5a9613a0668753eb60add1c4.scope - libcontainer container 2eb4240f3c1241282c729da8c75b1421722f42af5a9613a0668753eb60add1c4. May 8 00:40:17.937965 systemd[1]: Started cri-containerd-83e2f4e44555048e2ec3cb7bed85c45827a42c4cc0725b6596f8d6707c0e1855.scope - libcontainer container 83e2f4e44555048e2ec3cb7bed85c45827a42c4cc0725b6596f8d6707c0e1855. May 8 00:40:17.948367 systemd[1]: Started cri-containerd-304a3e1e9fd328a97f6e8a9a0d057028003e26ed68808796eae59a629bad55a0.scope - libcontainer container 304a3e1e9fd328a97f6e8a9a0d057028003e26ed68808796eae59a629bad55a0. 
May 8 00:40:17.974153 kubelet[2309]: W0508 00:40:17.974074 2309 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.237.145.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.237.145.97:6443: connect: connection refused May 8 00:40:17.974236 kubelet[2309]: E0508 00:40:17.974160 2309 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.237.145.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.237.145.97:6443: connect: connection refused May 8 00:40:18.005621 containerd[1484]: time="2025-05-08T00:40:18.005498728Z" level=info msg="StartContainer for \"83e2f4e44555048e2ec3cb7bed85c45827a42c4cc0725b6596f8d6707c0e1855\" returns successfully" May 8 00:40:18.012589 containerd[1484]: time="2025-05-08T00:40:18.012233741Z" level=info msg="StartContainer for \"2eb4240f3c1241282c729da8c75b1421722f42af5a9613a0668753eb60add1c4\" returns successfully" May 8 00:40:18.030286 kubelet[2309]: I0508 00:40:18.029989 2309 kubelet_node_status.go:73] "Attempting to register node" node="172-237-145-97" May 8 00:40:18.030541 kubelet[2309]: E0508 00:40:18.030520 2309 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.237.145.97:6443/api/v1/nodes\": dial tcp 172.237.145.97:6443: connect: connection refused" node="172-237-145-97" May 8 00:40:18.042622 containerd[1484]: time="2025-05-08T00:40:18.042592021Z" level=info msg="StartContainer for \"304a3e1e9fd328a97f6e8a9a0d057028003e26ed68808796eae59a629bad55a0\" returns successfully" May 8 00:40:18.548236 kubelet[2309]: E0508 00:40:18.547932 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:18.553072 kubelet[2309]: E0508 00:40:18.552900 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:18.555607 kubelet[2309]: E0508 00:40:18.555541 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:19.005293 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
May 8 00:40:19.524621 kubelet[2309]: E0508 00:40:19.524574 2309 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-237-145-97\" not found" node="172-237-145-97" May 8 00:40:19.557030 kubelet[2309]: E0508 00:40:19.557004 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:19.631856 kubelet[2309]: I0508 00:40:19.631826 2309 kubelet_node_status.go:73] "Attempting to register node" node="172-237-145-97" May 8 00:40:19.641019 kubelet[2309]: I0508 00:40:19.640988 2309 kubelet_node_status.go:76] "Successfully registered node" node="172-237-145-97" May 8 00:40:19.646584 kubelet[2309]: E0508 00:40:19.646567 2309 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172-237-145-97\" not found" May 8 00:40:19.747537 kubelet[2309]: E0508 00:40:19.747470 2309 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172-237-145-97\" not found" May 8 00:40:19.848238 kubelet[2309]: E0508 00:40:19.848005 2309 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172-237-145-97\" not found" May 8 00:40:20.505869 kubelet[2309]: I0508 00:40:20.505441 2309 apiserver.go:52] "Watching apiserver" May 8 00:40:20.517901 kubelet[2309]: I0508 00:40:20.517872 2309 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:40:21.510366 systemd[1]: Reload requested from client PID 2591 ('systemctl') (unit session-7.scope)... May 8 00:40:21.510382 systemd[1]: Reloading... May 8 00:40:21.596257 zram_generator::config[2635]: No configuration found. May 8 00:40:21.708185 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:40:21.811113 systemd[1]: Reloading finished in 300 ms. May 8 00:40:21.834735 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:40:21.836159 kubelet[2309]: I0508 00:40:21.834983 2309 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:40:21.848518 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:40:21.848805 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:40:21.848849 systemd[1]: kubelet.service: Consumed 781ms CPU time, 112.6M memory peak. May 8 00:40:21.854807 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:40:22.016171 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:40:22.023898 (kubelet)[2686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:40:22.086631 kubelet[2686]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:40:22.086631 kubelet[2686]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
May 8 00:40:22.086631 kubelet[2686]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:40:22.086631 kubelet[2686]: I0508 00:40:22.084875 2686 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:40:22.090154 kubelet[2686]: I0508 00:40:22.090135 2686 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 00:40:22.090154 kubelet[2686]: I0508 00:40:22.090154 2686 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:40:22.090417 kubelet[2686]: I0508 00:40:22.090390 2686 server.go:927] "Client rotation is on, will bootstrap in background" May 8 00:40:22.091996 kubelet[2686]: I0508 00:40:22.091961 2686 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 8 00:40:22.095302 kubelet[2686]: I0508 00:40:22.094345 2686 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:40:22.103135 kubelet[2686]: I0508 00:40:22.103116 2686 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 8 00:40:22.103405 kubelet[2686]: I0508 00:40:22.103371 2686 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:40:22.103837 kubelet[2686]: I0508 00:40:22.103393 2686 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-237-145-97","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 00:40:22.103837 kubelet[2686]: I0508 00:40:22.103753 2686 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:40:22.103837 kubelet[2686]: I0508 00:40:22.103763 2686 container_manager_linux.go:301] "Creating device plugin manager" May 8 00:40:22.103837 kubelet[2686]: I0508 00:40:22.103835 2686 state_mem.go:36] 
"Initialized new in-memory state store" May 8 00:40:22.103999 kubelet[2686]: I0508 00:40:22.103950 2686 kubelet.go:400] "Attempting to sync node with API server" May 8 00:40:22.104577 kubelet[2686]: I0508 00:40:22.104354 2686 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:40:22.104577 kubelet[2686]: I0508 00:40:22.104383 2686 kubelet.go:312] "Adding apiserver pod source" May 8 00:40:22.104577 kubelet[2686]: I0508 00:40:22.104425 2686 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:40:22.110807 kubelet[2686]: I0508 00:40:22.110399 2686 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 00:40:22.110807 kubelet[2686]: I0508 00:40:22.110542 2686 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:40:22.110882 kubelet[2686]: I0508 00:40:22.110854 2686 server.go:1264] "Started kubelet" May 8 00:40:22.113921 kubelet[2686]: I0508 00:40:22.113897 2686 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:40:22.116163 kubelet[2686]: I0508 00:40:22.114602 2686 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:40:22.116163 kubelet[2686]: I0508 00:40:22.115367 2686 server.go:455] "Adding debug handlers to kubelet server" May 8 00:40:22.117422 kubelet[2686]: I0508 00:40:22.117373 2686 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:40:22.117587 kubelet[2686]: I0508 00:40:22.117562 2686 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:40:22.118390 kubelet[2686]: I0508 00:40:22.118369 2686 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 00:40:22.121932 kubelet[2686]: I0508 00:40:22.121903 2686 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:40:22.122053 kubelet[2686]: I0508 00:40:22.122029 2686 reconciler.go:26] "Reconciler: start to sync state" May 8 00:40:22.124945 kubelet[2686]: E0508 00:40:22.124910 2686 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:40:22.127127 kubelet[2686]: I0508 00:40:22.126911 2686 factory.go:221] Registration of the containerd container factory successfully May 8 00:40:22.127127 kubelet[2686]: I0508 00:40:22.126925 2686 factory.go:221] Registration of the systemd container factory successfully May 8 00:40:22.127127 kubelet[2686]: I0508 00:40:22.126990 2686 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:40:22.131117 kubelet[2686]: I0508 00:40:22.131081 2686 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:40:22.134912 kubelet[2686]: I0508 00:40:22.134540 2686 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 00:40:22.134912 kubelet[2686]: I0508 00:40:22.134576 2686 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:40:22.134912 kubelet[2686]: I0508 00:40:22.134600 2686 kubelet.go:2337] "Starting kubelet main sync loop" May 8 00:40:22.134912 kubelet[2686]: E0508 00:40:22.134651 2686 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:40:22.173888 kubelet[2686]: I0508 00:40:22.173851 2686 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:40:22.173888 kubelet[2686]: I0508 00:40:22.173868 2686 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:40:22.173888 kubelet[2686]: I0508 00:40:22.173885 2686 state_mem.go:36] "Initialized new in-memory state store" May 8 00:40:22.174037 kubelet[2686]: I0508 00:40:22.174004 2686 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 00:40:22.174037 kubelet[2686]: I0508 00:40:22.174020 2686 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 00:40:22.174082 kubelet[2686]: I0508 00:40:22.174042 2686 policy_none.go:49] "None policy: Start" May 8 00:40:22.174788 kubelet[2686]: I0508 00:40:22.174750 2686 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:40:22.174788 kubelet[2686]: I0508 00:40:22.174770 2686 state_mem.go:35] "Initializing new in-memory state store" May 8 00:40:22.174921 kubelet[2686]: I0508 00:40:22.174851 2686 state_mem.go:75] "Updated machine memory state" May 8 00:40:22.180245 kubelet[2686]: I0508 00:40:22.180083 2686 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:40:22.180303 kubelet[2686]: I0508 00:40:22.180256 2686 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:40:22.180408 kubelet[2686]: I0508 00:40:22.180385 2686 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:40:22.225996 kubelet[2686]: I0508 00:40:22.225962 2686 kubelet_node_status.go:73] "Attempting to register node" node="172-237-145-97" May 8 00:40:22.231382 kubelet[2686]: I0508 00:40:22.231363 2686 kubelet_node_status.go:112] "Node was previously registered" node="172-237-145-97" May 8 00:40:22.231430 kubelet[2686]: I0508 00:40:22.231415 2686 kubelet_node_status.go:76] "Successfully registered node" node="172-237-145-97" May 8 00:40:22.237335 kubelet[2686]: I0508 00:40:22.236512 2686 topology_manager.go:215] "Topology Admit Handler" podUID="89c75020c18d4da73453afb71f08ce37" podNamespace="kube-system" podName="kube-controller-manager-172-237-145-97" May 8 00:40:22.237335 kubelet[2686]: I0508 00:40:22.237239 2686 topology_manager.go:215] "Topology Admit Handler" podUID="c9aea40feafa59c426f86e44b9ad443e" podNamespace="kube-system" podName="kube-scheduler-172-237-145-97" May 8 00:40:22.237335 kubelet[2686]: I0508 00:40:22.237274 2686 topology_manager.go:215] "Topology Admit Handler" podUID="a528a68a361216b63bfabf733c676f76" podNamespace="kube-system" podName="kube-apiserver-172-237-145-97" May 8 00:40:22.323389 kubelet[2686]: I0508 00:40:22.323320 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a528a68a361216b63bfabf733c676f76-ca-certs\") pod \"kube-apiserver-172-237-145-97\" (UID: \"a528a68a361216b63bfabf733c676f76\") " pod="kube-system/kube-apiserver-172-237-145-97" May 8 
00:40:22.323389 kubelet[2686]: I0508 00:40:22.323356 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a528a68a361216b63bfabf733c676f76-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-145-97\" (UID: \"a528a68a361216b63bfabf733c676f76\") " pod="kube-system/kube-apiserver-172-237-145-97" May 8 00:40:22.323389 kubelet[2686]: I0508 00:40:22.323377 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89c75020c18d4da73453afb71f08ce37-ca-certs\") pod \"kube-controller-manager-172-237-145-97\" (UID: \"89c75020c18d4da73453afb71f08ce37\") " pod="kube-system/kube-controller-manager-172-237-145-97" May 8 00:40:22.323389 kubelet[2686]: I0508 00:40:22.323395 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c9aea40feafa59c426f86e44b9ad443e-kubeconfig\") pod \"kube-scheduler-172-237-145-97\" (UID: \"c9aea40feafa59c426f86e44b9ad443e\") " pod="kube-system/kube-scheduler-172-237-145-97" May 8 00:40:22.323701 kubelet[2686]: I0508 00:40:22.323411 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89c75020c18d4da73453afb71f08ce37-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-145-97\" (UID: \"89c75020c18d4da73453afb71f08ce37\") " pod="kube-system/kube-controller-manager-172-237-145-97" May 8 00:40:22.323701 kubelet[2686]: I0508 00:40:22.323427 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a528a68a361216b63bfabf733c676f76-k8s-certs\") pod \"kube-apiserver-172-237-145-97\" (UID: \"a528a68a361216b63bfabf733c676f76\") " pod="kube-system/kube-apiserver-172-237-145-97" May 8 00:40:22.323701 kubelet[2686]: I0508 00:40:22.323440 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/89c75020c18d4da73453afb71f08ce37-flexvolume-dir\") pod \"kube-controller-manager-172-237-145-97\" (UID: \"89c75020c18d4da73453afb71f08ce37\") " pod="kube-system/kube-controller-manager-172-237-145-97" May 8 00:40:22.323701 kubelet[2686]: I0508 00:40:22.323456 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89c75020c18d4da73453afb71f08ce37-k8s-certs\") pod \"kube-controller-manager-172-237-145-97\" (UID: \"89c75020c18d4da73453afb71f08ce37\") " pod="kube-system/kube-controller-manager-172-237-145-97" May 8 00:40:22.323701 kubelet[2686]: I0508 00:40:22.323523 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89c75020c18d4da73453afb71f08ce37-kubeconfig\") pod \"kube-controller-manager-172-237-145-97\" (UID: \"89c75020c18d4da73453afb71f08ce37\") " pod="kube-system/kube-controller-manager-172-237-145-97" May 8 00:40:22.553191 kubelet[2686]: E0508 00:40:22.552086 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:22.553586 
kubelet[2686]: E0508 00:40:22.553524 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:22.553849 kubelet[2686]: E0508 00:40:22.553774 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:23.105830 kubelet[2686]: I0508 00:40:23.105786 2686 apiserver.go:52] "Watching apiserver" May 8 00:40:23.122752 kubelet[2686]: I0508 00:40:23.122718 2686 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:40:23.157674 kubelet[2686]: E0508 00:40:23.157636 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:23.158516 kubelet[2686]: E0508 00:40:23.158484 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:23.185169 kubelet[2686]: E0508 00:40:23.185023 2686 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-172-237-145-97\" already exists" pod="kube-system/kube-apiserver-172-237-145-97" May 8 00:40:23.185574 kubelet[2686]: E0508 00:40:23.185512 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:23.220939 kubelet[2686]: I0508 00:40:23.220869 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-237-145-97" podStartSLOduration=1.220841592 podStartE2EDuration="1.220841592s" podCreationTimestamp="2025-05-08 00:40:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:40:23.203478224 +0000 UTC m=+1.172871923" watchObservedRunningTime="2025-05-08 00:40:23.220841592 +0000 UTC m=+1.190235301" May 8 00:40:23.233300 kubelet[2686]: I0508 00:40:23.233241 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-237-145-97" podStartSLOduration=1.233228123 podStartE2EDuration="1.233228123s" podCreationTimestamp="2025-05-08 00:40:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:40:23.224054092 +0000 UTC m=+1.193447791" watchObservedRunningTime="2025-05-08 00:40:23.233228123 +0000 UTC m=+1.202621822" May 8 00:40:24.159022 kubelet[2686]: E0508 00:40:24.158975 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:26.618416 sudo[1705]: pam_unix(sudo:session): session closed for user root May 8 00:40:26.668027 sshd[1704]: Connection closed by 139.178.89.65 port 51584 May 8 00:40:26.668497 sshd-session[1702]: pam_unix(sshd:session): session closed for user core May 8 00:40:26.672344 systemd[1]: sshd@6-172.237.145.97:22-139.178.89.65:51584.service: Deactivated successfully. 
May 8 00:40:26.674424 systemd[1]: session-7.scope: Deactivated successfully. May 8 00:40:26.674659 systemd[1]: session-7.scope: Consumed 3.766s CPU time, 256.4M memory peak. May 8 00:40:26.675894 systemd-logind[1461]: Session 7 logged out. Waiting for processes to exit. May 8 00:40:26.677064 systemd-logind[1461]: Removed session 7. May 8 00:40:27.651475 kubelet[2686]: E0508 00:40:27.650719 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:27.663931 kubelet[2686]: I0508 00:40:27.663776 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-237-145-97" podStartSLOduration=5.663750551 podStartE2EDuration="5.663750551s" podCreationTimestamp="2025-05-08 00:40:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:40:23.234113661 +0000 UTC m=+1.203507360" watchObservedRunningTime="2025-05-08 00:40:27.663750551 +0000 UTC m=+5.633144250" May 8 00:40:28.163084 kubelet[2686]: E0508 00:40:28.163033 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:28.680570 kubelet[2686]: E0508 00:40:28.680544 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:29.165130 kubelet[2686]: E0508 00:40:29.165075 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:32.174464 kubelet[2686]: E0508 00:40:32.174375 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:33.168978 kubelet[2686]: E0508 00:40:33.168925 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:33.522755 update_engine[1462]: I20250508 00:40:33.522435 1462 update_attempter.cc:509] Updating boot flags... May 8 00:40:33.566336 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (2771) May 8 00:40:33.645248 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (2771) May 8 00:40:33.723239 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (2771) May 8 00:40:35.530459 kubelet[2686]: I0508 00:40:35.530282 2686 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 00:40:35.530833 containerd[1484]: time="2025-05-08T00:40:35.530703723Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 8 00:40:35.531092 kubelet[2686]: I0508 00:40:35.531063 2686 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 00:40:36.128540 kubelet[2686]: I0508 00:40:36.127930 2686 topology_manager.go:215] "Topology Admit Handler" podUID="a2894b58-a15a-4a1b-a54f-c11604427829" podNamespace="kube-system" podName="kube-proxy-54fdd" May 8 00:40:36.137638 systemd[1]: Created slice kubepods-besteffort-poda2894b58_a15a_4a1b_a54f_c11604427829.slice - libcontainer container kubepods-besteffort-poda2894b58_a15a_4a1b_a54f_c11604427829.slice. May 8 00:40:36.216524 kubelet[2686]: I0508 00:40:36.216392 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a2894b58-a15a-4a1b-a54f-c11604427829-kube-proxy\") pod \"kube-proxy-54fdd\" (UID: \"a2894b58-a15a-4a1b-a54f-c11604427829\") " pod="kube-system/kube-proxy-54fdd" May 8 00:40:36.216524 kubelet[2686]: I0508 00:40:36.216425 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2894b58-a15a-4a1b-a54f-c11604427829-xtables-lock\") pod \"kube-proxy-54fdd\" (UID: \"a2894b58-a15a-4a1b-a54f-c11604427829\") " pod="kube-system/kube-proxy-54fdd" May 8 00:40:36.216524 kubelet[2686]: I0508 00:40:36.216444 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2894b58-a15a-4a1b-a54f-c11604427829-lib-modules\") pod \"kube-proxy-54fdd\" (UID: \"a2894b58-a15a-4a1b-a54f-c11604427829\") " pod="kube-system/kube-proxy-54fdd" May 8 00:40:36.216524 kubelet[2686]: I0508 00:40:36.216460 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shv6p\" (UniqueName: \"kubernetes.io/projected/a2894b58-a15a-4a1b-a54f-c11604427829-kube-api-access-shv6p\") pod \"kube-proxy-54fdd\" (UID: \"a2894b58-a15a-4a1b-a54f-c11604427829\") " pod="kube-system/kube-proxy-54fdd" May 8 00:40:36.451600 kubelet[2686]: E0508 00:40:36.451568 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:36.452246 containerd[1484]: time="2025-05-08T00:40:36.452182104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-54fdd,Uid:a2894b58-a15a-4a1b-a54f-c11604427829,Namespace:kube-system,Attempt:0,}" May 8 00:40:36.478426 containerd[1484]: time="2025-05-08T00:40:36.478363281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:36.478701 containerd[1484]: time="2025-05-08T00:40:36.478620548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:36.478701 containerd[1484]: time="2025-05-08T00:40:36.478676860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:36.480718 containerd[1484]: time="2025-05-08T00:40:36.479502783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:36.496066 systemd[1]: run-containerd-runc-k8s.io-56ddd041c58fc17f6f1be2996e29ac09922e09f907941145c86e1ae84035e414-runc.j3j0Vg.mount: Deactivated successfully. May 8 00:40:36.506331 systemd[1]: Started cri-containerd-56ddd041c58fc17f6f1be2996e29ac09922e09f907941145c86e1ae84035e414.scope - libcontainer container 56ddd041c58fc17f6f1be2996e29ac09922e09f907941145c86e1ae84035e414. May 8 00:40:36.533641 containerd[1484]: time="2025-05-08T00:40:36.533609574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-54fdd,Uid:a2894b58-a15a-4a1b-a54f-c11604427829,Namespace:kube-system,Attempt:0,} returns sandbox id \"56ddd041c58fc17f6f1be2996e29ac09922e09f907941145c86e1ae84035e414\"" May 8 00:40:36.534757 kubelet[2686]: E0508 00:40:36.534735 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:36.538244 containerd[1484]: time="2025-05-08T00:40:36.538110378Z" level=info msg="CreateContainer within sandbox \"56ddd041c58fc17f6f1be2996e29ac09922e09f907941145c86e1ae84035e414\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 00:40:36.552584 containerd[1484]: time="2025-05-08T00:40:36.552560629Z" level=info msg="CreateContainer within sandbox \"56ddd041c58fc17f6f1be2996e29ac09922e09f907941145c86e1ae84035e414\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"52482e6ea992e0f89b107454550f0f90e1469341950485bcd54d6cc51560288e\"" May 8 00:40:36.553291 containerd[1484]: time="2025-05-08T00:40:36.553152945Z" level=info msg="StartContainer for \"52482e6ea992e0f89b107454550f0f90e1469341950485bcd54d6cc51560288e\"" May 8 00:40:36.592701 systemd[1]: Started cri-containerd-52482e6ea992e0f89b107454550f0f90e1469341950485bcd54d6cc51560288e.scope - libcontainer container 52482e6ea992e0f89b107454550f0f90e1469341950485bcd54d6cc51560288e. May 8 00:40:36.611003 kubelet[2686]: I0508 00:40:36.610972 2686 topology_manager.go:215] "Topology Admit Handler" podUID="5e065e81-5821-423c-9926-a20a9d430401" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-2vzvk" May 8 00:40:36.618850 systemd[1]: Created slice kubepods-besteffort-pod5e065e81_5821_423c_9926_a20a9d430401.slice - libcontainer container kubepods-besteffort-pod5e065e81_5821_423c_9926_a20a9d430401.slice. 
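The kubepods-besteffort-pod…slice units systemd creates here follow directly from the pod UIDs in the kubelet lines: with the systemd cgroup driver, the UID's dashes become underscores inside the QoS-class slice name. A small sketch of that mapping, assuming only the dash-to-underscore escaping visible in the log (not the full systemd escaping logic):

```go
// Sketch of how a besteffort pod UID maps to the slice names seen above, e.g.
// a2894b58-a15a-4a1b-a54f-c11604427829 ->
// kubepods-besteffort-poda2894b58_a15a_4a1b_a54f_c11604427829.slice.
package main

import (
	"fmt"
	"strings"
)

func besteffortSlice(podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-besteffort-pod%s.slice", escaped)
}

func main() {
	fmt.Println(besteffortSlice("a2894b58-a15a-4a1b-a54f-c11604427829")) // kube-proxy-54fdd
	fmt.Println(besteffortSlice("5e065e81-5821-423c-9926-a20a9d430401")) // tigera-operator pod
}
```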
May 8 00:40:36.645844 containerd[1484]: time="2025-05-08T00:40:36.645804166Z" level=info msg="StartContainer for \"52482e6ea992e0f89b107454550f0f90e1469341950485bcd54d6cc51560288e\" returns successfully" May 8 00:40:36.718909 kubelet[2686]: I0508 00:40:36.718789 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5e065e81-5821-423c-9926-a20a9d430401-var-lib-calico\") pod \"tigera-operator-797db67f8-2vzvk\" (UID: \"5e065e81-5821-423c-9926-a20a9d430401\") " pod="tigera-operator/tigera-operator-797db67f8-2vzvk" May 8 00:40:36.718909 kubelet[2686]: I0508 00:40:36.718818 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h4fk\" (UniqueName: \"kubernetes.io/projected/5e065e81-5821-423c-9926-a20a9d430401-kube-api-access-8h4fk\") pod \"tigera-operator-797db67f8-2vzvk\" (UID: \"5e065e81-5821-423c-9926-a20a9d430401\") " pod="tigera-operator/tigera-operator-797db67f8-2vzvk" May 8 00:40:36.921992 containerd[1484]: time="2025-05-08T00:40:36.921879653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-2vzvk,Uid:5e065e81-5821-423c-9926-a20a9d430401,Namespace:tigera-operator,Attempt:0,}" May 8 00:40:36.945123 containerd[1484]: time="2025-05-08T00:40:36.944886391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:36.945123 containerd[1484]: time="2025-05-08T00:40:36.944936043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:36.945123 containerd[1484]: time="2025-05-08T00:40:36.944949773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:36.945123 containerd[1484]: time="2025-05-08T00:40:36.945024525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:36.970352 systemd[1]: Started cri-containerd-f9f3136bcd35df72ba4b9683645e4ae7367bf9302b410b94a8153ea0968088fc.scope - libcontainer container f9f3136bcd35df72ba4b9683645e4ae7367bf9302b410b94a8153ea0968088fc. May 8 00:40:37.009526 containerd[1484]: time="2025-05-08T00:40:37.009491459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-2vzvk,Uid:5e065e81-5821-423c-9926-a20a9d430401,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f9f3136bcd35df72ba4b9683645e4ae7367bf9302b410b94a8153ea0968088fc\"" May 8 00:40:37.011437 containerd[1484]: time="2025-05-08T00:40:37.011315056Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 8 00:40:37.181904 kubelet[2686]: E0508 00:40:37.181260 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:37.806606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount457449994.mount: Deactivated successfully. 
May 8 00:40:38.369552 containerd[1484]: time="2025-05-08T00:40:38.369498781Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:38.370540 containerd[1484]: time="2025-05-08T00:40:38.370502145Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 8 00:40:38.370677 containerd[1484]: time="2025-05-08T00:40:38.370657409Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:38.374947 containerd[1484]: time="2025-05-08T00:40:38.374832581Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:38.377805 containerd[1484]: time="2025-05-08T00:40:38.377776822Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 1.366393304s" May 8 00:40:38.377845 containerd[1484]: time="2025-05-08T00:40:38.377808343Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 8 00:40:38.381370 containerd[1484]: time="2025-05-08T00:40:38.381259366Z" level=info msg="CreateContainer within sandbox \"f9f3136bcd35df72ba4b9683645e4ae7367bf9302b410b94a8153ea0968088fc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 8 00:40:38.393946 containerd[1484]: time="2025-05-08T00:40:38.393924383Z" level=info msg="CreateContainer within sandbox \"f9f3136bcd35df72ba4b9683645e4ae7367bf9302b410b94a8153ea0968088fc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"34717a21f80c70112e8fc7df7480ec28d9c6a3ea486b9c3e4cefde745ca1a903\"" May 8 00:40:38.395767 containerd[1484]: time="2025-05-08T00:40:38.394363344Z" level=info msg="StartContainer for \"34717a21f80c70112e8fc7df7480ec28d9c6a3ea486b9c3e4cefde745ca1a903\"" May 8 00:40:38.429385 systemd[1]: Started cri-containerd-34717a21f80c70112e8fc7df7480ec28d9c6a3ea486b9c3e4cefde745ca1a903.scope - libcontainer container 34717a21f80c70112e8fc7df7480ec28d9c6a3ea486b9c3e4cefde745ca1a903. 
May 8 00:40:38.458000 containerd[1484]: time="2025-05-08T00:40:38.457938586Z" level=info msg="StartContainer for \"34717a21f80c70112e8fc7df7480ec28d9c6a3ea486b9c3e4cefde745ca1a903\" returns successfully" May 8 00:40:39.198662 kubelet[2686]: I0508 00:40:39.198606 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-2vzvk" podStartSLOduration=1.830225878 podStartE2EDuration="3.198372366s" podCreationTimestamp="2025-05-08 00:40:36 +0000 UTC" firstStartedPulling="2025-05-08 00:40:37.010586177 +0000 UTC m=+14.979979876" lastFinishedPulling="2025-05-08 00:40:38.378732655 +0000 UTC m=+16.348126364" observedRunningTime="2025-05-08 00:40:39.1981196 +0000 UTC m=+17.167513309" watchObservedRunningTime="2025-05-08 00:40:39.198372366 +0000 UTC m=+17.167766065" May 8 00:40:39.199156 kubelet[2686]: I0508 00:40:39.198799 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-54fdd" podStartSLOduration=3.198794166 podStartE2EDuration="3.198794166s" podCreationTimestamp="2025-05-08 00:40:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:40:37.191015648 +0000 UTC m=+15.160409357" watchObservedRunningTime="2025-05-08 00:40:39.198794166 +0000 UTC m=+17.168187865" May 8 00:40:41.462037 kubelet[2686]: I0508 00:40:41.461346 2686 topology_manager.go:215] "Topology Admit Handler" podUID="50ccfeac-a3f7-4c2e-8595-379b8c619165" podNamespace="calico-system" podName="calico-typha-6b5867ffc8-wnmhq" May 8 00:40:41.477546 systemd[1]: Created slice kubepods-besteffort-pod50ccfeac_a3f7_4c2e_8595_379b8c619165.slice - libcontainer container kubepods-besteffort-pod50ccfeac_a3f7_4c2e_8595_379b8c619165.slice. May 8 00:40:41.532935 kubelet[2686]: I0508 00:40:41.532905 2686 topology_manager.go:215] "Topology Admit Handler" podUID="cd3da63a-09f2-42c5-ac82-b9cd28cd5b4e" podNamespace="calico-system" podName="calico-node-5b6cj" May 8 00:40:41.542390 systemd[1]: Created slice kubepods-besteffort-podcd3da63a_09f2_42c5_ac82_b9cd28cd5b4e.slice - libcontainer container kubepods-besteffort-podcd3da63a_09f2_42c5_ac82_b9cd28cd5b4e.slice. 
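The podStartSLOduration figures in the pod_startup_latency_tracker lines above can be reproduced from the other values on the same lines: end-to-end startup time minus the time spent pulling images. That formula is an inference from the logged numbers rather than a quote of kubelet source, and tiny rounding differences remain because kubelet works from the monotonic m=+ offsets. A hedged sketch using the tigera-operator numbers:

```go
// Reproducing podStartSLOduration for tigera-operator-797db67f8-2vzvk from the
// values logged above. The formula (E2E minus image-pull time) is inferred from
// the numbers, not taken from kubelet source; expect ~10ns of rounding drift.
package main

import (
	"fmt"
	"time"
)

func main() {
	e2e := 3198372366 * time.Nanosecond // podStartE2EDuration="3.198372366s"
	firstPull, _ := time.Parse(time.RFC3339Nano, "2025-05-08T00:40:37.010586177Z")
	lastPull, _ := time.Parse(time.RFC3339Nano, "2025-05-08T00:40:38.378732655Z")

	pull := lastPull.Sub(firstPull) // ≈1.368146478s pulling quay.io/tigera/operator:v1.36.7
	slo := e2e - pull               // ≈1.830225888s vs. logged podStartSLOduration=1.830225878
	fmt.Println("pull:", pull, "slo:", slo)
}
```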
May 8 00:40:41.551376 kubelet[2686]: I0508 00:40:41.551340 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50ccfeac-a3f7-4c2e-8595-379b8c619165-tigera-ca-bundle\") pod \"calico-typha-6b5867ffc8-wnmhq\" (UID: \"50ccfeac-a3f7-4c2e-8595-379b8c619165\") " pod="calico-system/calico-typha-6b5867ffc8-wnmhq" May 8 00:40:41.551376 kubelet[2686]: I0508 00:40:41.551379 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd3da63a-09f2-42c5-ac82-b9cd28cd5b4e-lib-modules\") pod \"calico-node-5b6cj\" (UID: \"cd3da63a-09f2-42c5-ac82-b9cd28cd5b4e\") " pod="calico-system/calico-node-5b6cj" May 8 00:40:41.551469 kubelet[2686]: I0508 00:40:41.551398 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cd3da63a-09f2-42c5-ac82-b9cd28cd5b4e-flexvol-driver-host\") pod \"calico-node-5b6cj\" (UID: \"cd3da63a-09f2-42c5-ac82-b9cd28cd5b4e\") " pod="calico-system/calico-node-5b6cj" May 8 00:40:41.551469 kubelet[2686]: I0508 00:40:41.551413 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b6rp\" (UniqueName: \"kubernetes.io/projected/cd3da63a-09f2-42c5-ac82-b9cd28cd5b4e-kube-api-access-9b6rp\") pod \"calico-node-5b6cj\" (UID: \"cd3da63a-09f2-42c5-ac82-b9cd28cd5b4e\") " pod="calico-system/calico-node-5b6cj" May 8 00:40:41.551469 kubelet[2686]: I0508 00:40:41.551427 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cd3da63a-09f2-42c5-ac82-b9cd28cd5b4e-var-run-calico\") pod \"calico-node-5b6cj\" (UID: \"cd3da63a-09f2-42c5-ac82-b9cd28cd5b4e\") " pod="calico-system/calico-node-5b6cj" May 8 00:40:41.551469 kubelet[2686]: I0508 00:40:41.551447 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kglb7\" (UniqueName: \"kubernetes.io/projected/50ccfeac-a3f7-4c2e-8595-379b8c619165-kube-api-access-kglb7\") pod \"calico-typha-6b5867ffc8-wnmhq\" (UID: \"50ccfeac-a3f7-4c2e-8595-379b8c619165\") " pod="calico-system/calico-typha-6b5867ffc8-wnmhq" May 8 00:40:41.551469 kubelet[2686]: I0508 00:40:41.551464 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cd3da63a-09f2-42c5-ac82-b9cd28cd5b4e-policysync\") pod \"calico-node-5b6cj\" (UID: \"cd3da63a-09f2-42c5-ac82-b9cd28cd5b4e\") " pod="calico-system/calico-node-5b6cj" May 8 00:40:41.551628 kubelet[2686]: I0508 00:40:41.551478 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cd3da63a-09f2-42c5-ac82-b9cd28cd5b4e-var-lib-calico\") pod \"calico-node-5b6cj\" (UID: \"cd3da63a-09f2-42c5-ac82-b9cd28cd5b4e\") " pod="calico-system/calico-node-5b6cj" May 8 00:40:41.551628 kubelet[2686]: I0508 00:40:41.551493 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd3da63a-09f2-42c5-ac82-b9cd28cd5b4e-tigera-ca-bundle\") pod \"calico-node-5b6cj\" (UID: \"cd3da63a-09f2-42c5-ac82-b9cd28cd5b4e\") " pod="calico-system/calico-node-5b6cj" 
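The long run of driver-call.go / plugins.go errors that follows comes from kubelet's FlexVolume probe: it tries to exec /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds init, the binary is not installed on this node ("executable file not found in $PATH"), the call yields empty output, and unmarshalling that empty string as the driver's JSON status fails with "unexpected end of JSON input". A minimal sketch of just that last step; the status struct here is a simplified stand-in, not kubelet's exact type:

```go
// Why an absent FlexVolume driver produces "unexpected end of JSON input":
// the exec fails, output is "", and decoding "" as JSON can never succeed.
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus is a simplified stand-in for the FlexVolume driver status payload.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message"`
}

func main() {
	output := "" // what the failed call to .../nodeagent~uds/uds init produced
	var st driverStatus
	if err := json.Unmarshal([]byte(output), &st); err != nil {
		fmt.Println("Failed to unmarshal output for command: init, error:", err)
	}
}
```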
May 8 00:40:41.551628 kubelet[2686]: I0508 00:40:41.551510 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cd3da63a-09f2-42c5-ac82-b9cd28cd5b4e-node-certs\") pod \"calico-node-5b6cj\" (UID: \"cd3da63a-09f2-42c5-ac82-b9cd28cd5b4e\") " pod="calico-system/calico-node-5b6cj" May 8 00:40:41.551628 kubelet[2686]: I0508 00:40:41.551525 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd3da63a-09f2-42c5-ac82-b9cd28cd5b4e-xtables-lock\") pod \"calico-node-5b6cj\" (UID: \"cd3da63a-09f2-42c5-ac82-b9cd28cd5b4e\") " pod="calico-system/calico-node-5b6cj" May 8 00:40:41.551628 kubelet[2686]: I0508 00:40:41.551540 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cd3da63a-09f2-42c5-ac82-b9cd28cd5b4e-cni-log-dir\") pod \"calico-node-5b6cj\" (UID: \"cd3da63a-09f2-42c5-ac82-b9cd28cd5b4e\") " pod="calico-system/calico-node-5b6cj" May 8 00:40:41.551733 kubelet[2686]: I0508 00:40:41.551555 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cd3da63a-09f2-42c5-ac82-b9cd28cd5b4e-cni-bin-dir\") pod \"calico-node-5b6cj\" (UID: \"cd3da63a-09f2-42c5-ac82-b9cd28cd5b4e\") " pod="calico-system/calico-node-5b6cj" May 8 00:40:41.551733 kubelet[2686]: I0508 00:40:41.551570 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/50ccfeac-a3f7-4c2e-8595-379b8c619165-typha-certs\") pod \"calico-typha-6b5867ffc8-wnmhq\" (UID: \"50ccfeac-a3f7-4c2e-8595-379b8c619165\") " pod="calico-system/calico-typha-6b5867ffc8-wnmhq" May 8 00:40:41.551733 kubelet[2686]: I0508 00:40:41.551586 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cd3da63a-09f2-42c5-ac82-b9cd28cd5b4e-cni-net-dir\") pod \"calico-node-5b6cj\" (UID: \"cd3da63a-09f2-42c5-ac82-b9cd28cd5b4e\") " pod="calico-system/calico-node-5b6cj" May 8 00:40:41.666717 kubelet[2686]: E0508 00:40:41.666466 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.666717 kubelet[2686]: W0508 00:40:41.666484 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.666717 kubelet[2686]: E0508 00:40:41.666500 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:41.667393 kubelet[2686]: E0508 00:40:41.667362 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.667462 kubelet[2686]: W0508 00:40:41.667450 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.667645 kubelet[2686]: E0508 00:40:41.667550 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.668368 kubelet[2686]: E0508 00:40:41.668355 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.668555 kubelet[2686]: W0508 00:40:41.668430 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.668555 kubelet[2686]: E0508 00:40:41.668449 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.668820 kubelet[2686]: E0508 00:40:41.668785 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.668886 kubelet[2686]: W0508 00:40:41.668862 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.668994 kubelet[2686]: E0508 00:40:41.668948 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.672400 kubelet[2686]: E0508 00:40:41.672271 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.672400 kubelet[2686]: W0508 00:40:41.672285 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.672400 kubelet[2686]: E0508 00:40:41.672299 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.672784 kubelet[2686]: E0508 00:40:41.672771 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.672913 kubelet[2686]: W0508 00:40:41.672822 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.672913 kubelet[2686]: E0508 00:40:41.672835 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:41.675361 kubelet[2686]: E0508 00:40:41.675348 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.675517 kubelet[2686]: W0508 00:40:41.675407 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.675517 kubelet[2686]: E0508 00:40:41.675421 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.677176 kubelet[2686]: E0508 00:40:41.677128 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.678581 kubelet[2686]: W0508 00:40:41.678567 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.680747 kubelet[2686]: E0508 00:40:41.680713 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.682283 kubelet[2686]: E0508 00:40:41.682269 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.682392 kubelet[2686]: W0508 00:40:41.682357 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.682624 kubelet[2686]: E0508 00:40:41.682376 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.682723 kubelet[2686]: E0508 00:40:41.682710 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.682786 kubelet[2686]: W0508 00:40:41.682776 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.682872 kubelet[2686]: E0508 00:40:41.682860 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.685250 kubelet[2686]: E0508 00:40:41.683319 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.685250 kubelet[2686]: W0508 00:40:41.683340 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.685250 kubelet[2686]: E0508 00:40:41.683356 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:41.688770 kubelet[2686]: E0508 00:40:41.688750 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.688770 kubelet[2686]: W0508 00:40:41.688766 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.688865 kubelet[2686]: E0508 00:40:41.688778 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.699252 kubelet[2686]: E0508 00:40:41.697525 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.699252 kubelet[2686]: W0508 00:40:41.697539 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.699252 kubelet[2686]: E0508 00:40:41.697549 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.704452 kubelet[2686]: I0508 00:40:41.702765 2686 topology_manager.go:215] "Topology Admit Handler" podUID="ae949d8a-9850-4b3f-b127-0cc79fb660b3" podNamespace="calico-system" podName="csi-node-driver-q8q6q" May 8 00:40:41.704452 kubelet[2686]: E0508 00:40:41.702989 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q8q6q" podUID="ae949d8a-9850-4b3f-b127-0cc79fb660b3" May 8 00:40:41.743115 kubelet[2686]: E0508 00:40:41.743012 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.743115 kubelet[2686]: W0508 00:40:41.743053 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.743115 kubelet[2686]: E0508 00:40:41.743068 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.744472 kubelet[2686]: E0508 00:40:41.744447 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.744472 kubelet[2686]: W0508 00:40:41.744466 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.744703 kubelet[2686]: E0508 00:40:41.744666 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:41.745727 kubelet[2686]: E0508 00:40:41.745688 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.745827 kubelet[2686]: W0508 00:40:41.745802 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.745827 kubelet[2686]: E0508 00:40:41.745821 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.747158 kubelet[2686]: E0508 00:40:41.747127 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.748436 kubelet[2686]: W0508 00:40:41.748394 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.748436 kubelet[2686]: E0508 00:40:41.748417 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.748880 kubelet[2686]: E0508 00:40:41.748842 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.748880 kubelet[2686]: W0508 00:40:41.748856 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.748941 kubelet[2686]: E0508 00:40:41.748865 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.749166 kubelet[2686]: E0508 00:40:41.749142 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.749166 kubelet[2686]: W0508 00:40:41.749157 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.749166 kubelet[2686]: E0508 00:40:41.749165 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.751234 kubelet[2686]: E0508 00:40:41.750013 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.751234 kubelet[2686]: W0508 00:40:41.750026 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.751234 kubelet[2686]: E0508 00:40:41.750035 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:41.751234 kubelet[2686]: E0508 00:40:41.750298 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.751234 kubelet[2686]: W0508 00:40:41.750306 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.751234 kubelet[2686]: E0508 00:40:41.750314 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.751234 kubelet[2686]: E0508 00:40:41.751120 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.751234 kubelet[2686]: W0508 00:40:41.751130 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.751234 kubelet[2686]: E0508 00:40:41.751140 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.751820 kubelet[2686]: E0508 00:40:41.751792 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.751865 kubelet[2686]: W0508 00:40:41.751811 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.751865 kubelet[2686]: E0508 00:40:41.751841 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.752369 kubelet[2686]: E0508 00:40:41.752342 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.752369 kubelet[2686]: W0508 00:40:41.752359 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.752369 kubelet[2686]: E0508 00:40:41.752367 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.752853 kubelet[2686]: E0508 00:40:41.752825 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.752853 kubelet[2686]: W0508 00:40:41.752840 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.752853 kubelet[2686]: E0508 00:40:41.752849 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:41.754322 kubelet[2686]: E0508 00:40:41.754289 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.754362 kubelet[2686]: W0508 00:40:41.754306 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.754362 kubelet[2686]: E0508 00:40:41.754336 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.754594 kubelet[2686]: E0508 00:40:41.754561 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.754594 kubelet[2686]: W0508 00:40:41.754576 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.754594 kubelet[2686]: E0508 00:40:41.754584 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.754817 kubelet[2686]: E0508 00:40:41.754786 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.754817 kubelet[2686]: W0508 00:40:41.754800 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.754817 kubelet[2686]: E0508 00:40:41.754808 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.755037 kubelet[2686]: E0508 00:40:41.755009 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.755037 kubelet[2686]: W0508 00:40:41.755028 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.755278 kubelet[2686]: E0508 00:40:41.755036 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.756010 kubelet[2686]: E0508 00:40:41.755983 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.756010 kubelet[2686]: W0508 00:40:41.755999 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.756010 kubelet[2686]: E0508 00:40:41.756008 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:41.756383 kubelet[2686]: E0508 00:40:41.756359 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.756383 kubelet[2686]: W0508 00:40:41.756375 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.756383 kubelet[2686]: E0508 00:40:41.756383 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.756917 kubelet[2686]: E0508 00:40:41.756896 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.756917 kubelet[2686]: W0508 00:40:41.756912 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.757314 kubelet[2686]: E0508 00:40:41.756921 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.757533 kubelet[2686]: E0508 00:40:41.757509 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.757533 kubelet[2686]: W0508 00:40:41.757524 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.757592 kubelet[2686]: E0508 00:40:41.757532 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.758386 kubelet[2686]: E0508 00:40:41.758360 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.758386 kubelet[2686]: W0508 00:40:41.758378 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.758386 kubelet[2686]: E0508 00:40:41.758387 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:41.758473 kubelet[2686]: I0508 00:40:41.758422 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ae949d8a-9850-4b3f-b127-0cc79fb660b3-socket-dir\") pod \"csi-node-driver-q8q6q\" (UID: \"ae949d8a-9850-4b3f-b127-0cc79fb660b3\") " pod="calico-system/csi-node-driver-q8q6q" May 8 00:40:41.759102 kubelet[2686]: E0508 00:40:41.759063 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.759102 kubelet[2686]: W0508 00:40:41.759082 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.759102 kubelet[2686]: E0508 00:40:41.759104 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.759191 kubelet[2686]: I0508 00:40:41.759120 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae949d8a-9850-4b3f-b127-0cc79fb660b3-kubelet-dir\") pod \"csi-node-driver-q8q6q\" (UID: \"ae949d8a-9850-4b3f-b127-0cc79fb660b3\") " pod="calico-system/csi-node-driver-q8q6q" May 8 00:40:41.759422 kubelet[2686]: E0508 00:40:41.759394 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.759422 kubelet[2686]: W0508 00:40:41.759409 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.760278 kubelet[2686]: E0508 00:40:41.760248 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.760278 kubelet[2686]: I0508 00:40:41.760274 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ae949d8a-9850-4b3f-b127-0cc79fb660b3-varrun\") pod \"csi-node-driver-q8q6q\" (UID: \"ae949d8a-9850-4b3f-b127-0cc79fb660b3\") " pod="calico-system/csi-node-driver-q8q6q" May 8 00:40:41.760544 kubelet[2686]: E0508 00:40:41.760517 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.760544 kubelet[2686]: W0508 00:40:41.760534 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.760609 kubelet[2686]: E0508 00:40:41.760554 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:41.760769 kubelet[2686]: E0508 00:40:41.760747 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.760769 kubelet[2686]: W0508 00:40:41.760763 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.760884 kubelet[2686]: E0508 00:40:41.760861 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.761019 kubelet[2686]: E0508 00:40:41.760987 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.761019 kubelet[2686]: W0508 00:40:41.761001 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.761308 kubelet[2686]: E0508 00:40:41.761145 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.761474 kubelet[2686]: E0508 00:40:41.761446 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.761474 kubelet[2686]: W0508 00:40:41.761461 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.761474 kubelet[2686]: E0508 00:40:41.761473 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.761556 kubelet[2686]: I0508 00:40:41.761488 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrn6b\" (UniqueName: \"kubernetes.io/projected/ae949d8a-9850-4b3f-b127-0cc79fb660b3-kube-api-access-jrn6b\") pod \"csi-node-driver-q8q6q\" (UID: \"ae949d8a-9850-4b3f-b127-0cc79fb660b3\") " pod="calico-system/csi-node-driver-q8q6q" May 8 00:40:41.763226 kubelet[2686]: E0508 00:40:41.762089 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.763226 kubelet[2686]: W0508 00:40:41.762102 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.763226 kubelet[2686]: E0508 00:40:41.762126 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:41.763226 kubelet[2686]: E0508 00:40:41.762349 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.763226 kubelet[2686]: W0508 00:40:41.762356 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.763226 kubelet[2686]: E0508 00:40:41.762364 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.763226 kubelet[2686]: E0508 00:40:41.762987 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.763226 kubelet[2686]: W0508 00:40:41.762996 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.763226 kubelet[2686]: E0508 00:40:41.763018 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.763428 kubelet[2686]: E0508 00:40:41.763404 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.763428 kubelet[2686]: W0508 00:40:41.763413 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.763466 kubelet[2686]: E0508 00:40:41.763433 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.763466 kubelet[2686]: I0508 00:40:41.763448 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ae949d8a-9850-4b3f-b127-0cc79fb660b3-registration-dir\") pod \"csi-node-driver-q8q6q\" (UID: \"ae949d8a-9850-4b3f-b127-0cc79fb660b3\") " pod="calico-system/csi-node-driver-q8q6q" May 8 00:40:41.764004 kubelet[2686]: E0508 00:40:41.763976 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.764004 kubelet[2686]: W0508 00:40:41.763992 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.764004 kubelet[2686]: E0508 00:40:41.764000 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:41.764530 kubelet[2686]: E0508 00:40:41.764502 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.764530 kubelet[2686]: W0508 00:40:41.764517 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.764587 kubelet[2686]: E0508 00:40:41.764537 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.764964 kubelet[2686]: E0508 00:40:41.764937 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.764964 kubelet[2686]: W0508 00:40:41.764952 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.764964 kubelet[2686]: E0508 00:40:41.764960 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.765463 kubelet[2686]: E0508 00:40:41.765437 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.765463 kubelet[2686]: W0508 00:40:41.765452 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.765463 kubelet[2686]: E0508 00:40:41.765461 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.789692 kubelet[2686]: E0508 00:40:41.789659 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:41.790401 containerd[1484]: time="2025-05-08T00:40:41.790349761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b5867ffc8-wnmhq,Uid:50ccfeac-a3f7-4c2e-8595-379b8c619165,Namespace:calico-system,Attempt:0,}" May 8 00:40:41.831217 containerd[1484]: time="2025-05-08T00:40:41.830843353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:41.832250 containerd[1484]: time="2025-05-08T00:40:41.830960105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:41.832250 containerd[1484]: time="2025-05-08T00:40:41.830977925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:41.832250 containerd[1484]: time="2025-05-08T00:40:41.831631289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:41.847654 kubelet[2686]: E0508 00:40:41.847603 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:41.848384 containerd[1484]: time="2025-05-08T00:40:41.848037603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5b6cj,Uid:cd3da63a-09f2-42c5-ac82-b9cd28cd5b4e,Namespace:calico-system,Attempt:0,}" May 8 00:40:41.867314 kubelet[2686]: E0508 00:40:41.867279 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.867314 kubelet[2686]: W0508 00:40:41.867303 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.867412 kubelet[2686]: E0508 00:40:41.867322 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.867698 kubelet[2686]: E0508 00:40:41.867664 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.867698 kubelet[2686]: W0508 00:40:41.867690 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.867698 kubelet[2686]: E0508 00:40:41.867703 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.869195 systemd[1]: Started cri-containerd-05adc5091f1526eff8a8e4b6da5729a63b2b8887537c4a9735b3d7f6286f8b95.scope - libcontainer container 05adc5091f1526eff8a8e4b6da5729a63b2b8887537c4a9735b3d7f6286f8b95. May 8 00:40:41.869835 kubelet[2686]: E0508 00:40:41.869290 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.869835 kubelet[2686]: W0508 00:40:41.869300 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.869835 kubelet[2686]: E0508 00:40:41.869315 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.869958 kubelet[2686]: E0508 00:40:41.869857 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.870720 kubelet[2686]: W0508 00:40:41.869865 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.871062 kubelet[2686]: E0508 00:40:41.871037 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:41.871322 kubelet[2686]: E0508 00:40:41.871298 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.871457 kubelet[2686]: W0508 00:40:41.871437 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.871656 kubelet[2686]: E0508 00:40:41.871633 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.872004 kubelet[2686]: E0508 00:40:41.871981 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.872004 kubelet[2686]: W0508 00:40:41.871995 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.873230 kubelet[2686]: E0508 00:40:41.872249 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.873230 kubelet[2686]: E0508 00:40:41.872757 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.873230 kubelet[2686]: W0508 00:40:41.872766 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.873230 kubelet[2686]: E0508 00:40:41.872850 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.873230 kubelet[2686]: E0508 00:40:41.873068 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.873230 kubelet[2686]: W0508 00:40:41.873075 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.873230 kubelet[2686]: E0508 00:40:41.873133 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.873672 kubelet[2686]: E0508 00:40:41.873637 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.873672 kubelet[2686]: W0508 00:40:41.873652 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.874389 kubelet[2686]: E0508 00:40:41.874359 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:41.874715 kubelet[2686]: E0508 00:40:41.874689 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.874715 kubelet[2686]: W0508 00:40:41.874703 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.874885 kubelet[2686]: E0508 00:40:41.874860 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.875240 kubelet[2686]: E0508 00:40:41.875216 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.875356 kubelet[2686]: W0508 00:40:41.875333 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.875494 kubelet[2686]: E0508 00:40:41.875472 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.875782 kubelet[2686]: E0508 00:40:41.875757 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.875782 kubelet[2686]: W0508 00:40:41.875771 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.875924 kubelet[2686]: E0508 00:40:41.875900 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.876316 kubelet[2686]: E0508 00:40:41.876293 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.876316 kubelet[2686]: W0508 00:40:41.876307 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.876593 kubelet[2686]: E0508 00:40:41.876564 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.876918 kubelet[2686]: E0508 00:40:41.876895 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.876918 kubelet[2686]: W0508 00:40:41.876909 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.877146 kubelet[2686]: E0508 00:40:41.877123 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:41.877666 kubelet[2686]: E0508 00:40:41.877642 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.877666 kubelet[2686]: W0508 00:40:41.877657 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.877812 kubelet[2686]: E0508 00:40:41.877789 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.879113 kubelet[2686]: E0508 00:40:41.878367 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.879113 kubelet[2686]: W0508 00:40:41.878379 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.879113 kubelet[2686]: E0508 00:40:41.878664 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.879113 kubelet[2686]: E0508 00:40:41.878945 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.879113 kubelet[2686]: W0508 00:40:41.878952 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.879241 kubelet[2686]: E0508 00:40:41.879173 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.879565 kubelet[2686]: E0508 00:40:41.879541 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.879565 kubelet[2686]: W0508 00:40:41.879556 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.879922 kubelet[2686]: E0508 00:40:41.879900 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.880170 kubelet[2686]: E0508 00:40:41.880149 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.880170 kubelet[2686]: W0508 00:40:41.880163 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.880477 kubelet[2686]: E0508 00:40:41.880454 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:41.880979 kubelet[2686]: E0508 00:40:41.880955 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.881006 kubelet[2686]: W0508 00:40:41.880971 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.881259 kubelet[2686]: E0508 00:40:41.881237 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.882517 kubelet[2686]: E0508 00:40:41.882490 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.882517 kubelet[2686]: W0508 00:40:41.882506 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.882612 kubelet[2686]: E0508 00:40:41.882589 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.883393 kubelet[2686]: E0508 00:40:41.882732 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.883393 kubelet[2686]: W0508 00:40:41.882744 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.883393 kubelet[2686]: E0508 00:40:41.882821 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.883393 kubelet[2686]: E0508 00:40:41.882957 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.883393 kubelet[2686]: W0508 00:40:41.882964 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.883393 kubelet[2686]: E0508 00:40:41.883222 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.883596 kubelet[2686]: E0508 00:40:41.883568 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.883596 kubelet[2686]: W0508 00:40:41.883588 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.883687 kubelet[2686]: E0508 00:40:41.883665 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:41.884233 kubelet[2686]: E0508 00:40:41.884195 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.884281 kubelet[2686]: W0508 00:40:41.884243 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.884281 kubelet[2686]: E0508 00:40:41.884253 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.896327 kubelet[2686]: E0508 00:40:41.895386 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:41.896327 kubelet[2686]: W0508 00:40:41.895404 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:41.896327 kubelet[2686]: E0508 00:40:41.895414 2686 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:41.901525 containerd[1484]: time="2025-05-08T00:40:41.901280038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:41.902166 containerd[1484]: time="2025-05-08T00:40:41.902024343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:41.902166 containerd[1484]: time="2025-05-08T00:40:41.902039763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:41.902599 containerd[1484]: time="2025-05-08T00:40:41.902354110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:41.926623 systemd[1]: Started cri-containerd-3acb7485addf6eb8d4a63ecf8d3d155204cb260d837e2abb3fda877d017287d9.scope - libcontainer container 3acb7485addf6eb8d4a63ecf8d3d155204cb260d837e2abb3fda877d017287d9. 
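The burst of FlexVolume errors above (driver-call.go and plugins.go) all describe one condition: the kubelet's plugin prober finds a driver directory named nodeagent~uds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, but the uds binary it should contain is missing, so the "init" call produces no stdout and the empty string cannot be parsed as JSON. A minimal sketch of that failure mode, using an illustrative driverStatus type rather than the kubelet's real schema:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // driverStatus approximates the JSON a FlexVolume driver is expected to
    // print for "init"; the field names here are illustrative assumptions.
    type driverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message,omitempty"`
    }

    func probeInit(driver string) error {
        // With the binary missing, CombinedOutput returns empty output plus an exec error.
        out, execErr := exec.Command(driver, "init").CombinedOutput()
        var st driverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            // json.Unmarshal of "" fails with "unexpected end of JSON input",
            // the same message repeated in the kubelet lines above.
            return fmt.Errorf("unmarshal %q failed: %v (exec error: %v)", out, err, execErr)
        }
        return nil
    }

    func main() {
        fmt.Println(probeInit("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"))
    }

The warnings are noisy but presumably transient: the nodeagent~uds directory belongs to Calico's FlexVolume driver, and the flexvol-driver container created from the pod2daemon-flexvol image further down is what installs the missing uds binary.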
May 8 00:40:41.975975 containerd[1484]: time="2025-05-08T00:40:41.975928557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5b6cj,Uid:cd3da63a-09f2-42c5-ac82-b9cd28cd5b4e,Namespace:calico-system,Attempt:0,} returns sandbox id \"3acb7485addf6eb8d4a63ecf8d3d155204cb260d837e2abb3fda877d017287d9\"" May 8 00:40:41.977144 kubelet[2686]: E0508 00:40:41.977111 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:41.978888 containerd[1484]: time="2025-05-08T00:40:41.978834225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 8 00:40:42.011323 containerd[1484]: time="2025-05-08T00:40:42.010176234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b5867ffc8-wnmhq,Uid:50ccfeac-a3f7-4c2e-8595-379b8c619165,Namespace:calico-system,Attempt:0,} returns sandbox id \"05adc5091f1526eff8a8e4b6da5729a63b2b8887537c4a9735b3d7f6286f8b95\"" May 8 00:40:42.014272 kubelet[2686]: E0508 00:40:42.013783 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:42.562425 containerd[1484]: time="2025-05-08T00:40:42.562368907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:42.563126 containerd[1484]: time="2025-05-08T00:40:42.563060679Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 8 00:40:42.564243 containerd[1484]: time="2025-05-08T00:40:42.563599510Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:42.565976 containerd[1484]: time="2025-05-08T00:40:42.565187399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:42.565976 containerd[1484]: time="2025-05-08T00:40:42.565880352Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 586.967865ms" May 8 00:40:42.565976 containerd[1484]: time="2025-05-08T00:40:42.565904433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 8 00:40:42.566842 containerd[1484]: time="2025-05-08T00:40:42.566822669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 8 00:40:42.569633 containerd[1484]: time="2025-05-08T00:40:42.569599841Z" level=info msg="CreateContainer within sandbox \"3acb7485addf6eb8d4a63ecf8d3d155204cb260d837e2abb3fda877d017287d9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 8 00:40:42.594973 containerd[1484]: time="2025-05-08T00:40:42.594931850Z" level=info msg="CreateContainer 
within sandbox \"3acb7485addf6eb8d4a63ecf8d3d155204cb260d837e2abb3fda877d017287d9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9c698cd559ccfe71acbf96dc58b5785279aa7370a82f5889974e772a515f506d\"" May 8 00:40:42.596053 containerd[1484]: time="2025-05-08T00:40:42.595383868Z" level=info msg="StartContainer for \"9c698cd559ccfe71acbf96dc58b5785279aa7370a82f5889974e772a515f506d\"" May 8 00:40:42.620366 systemd[1]: Started cri-containerd-9c698cd559ccfe71acbf96dc58b5785279aa7370a82f5889974e772a515f506d.scope - libcontainer container 9c698cd559ccfe71acbf96dc58b5785279aa7370a82f5889974e772a515f506d. May 8 00:40:42.658851 containerd[1484]: time="2025-05-08T00:40:42.658792992Z" level=info msg="StartContainer for \"9c698cd559ccfe71acbf96dc58b5785279aa7370a82f5889974e772a515f506d\" returns successfully" May 8 00:40:42.681044 systemd[1]: cri-containerd-9c698cd559ccfe71acbf96dc58b5785279aa7370a82f5889974e772a515f506d.scope: Deactivated successfully. May 8 00:40:42.726602 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c698cd559ccfe71acbf96dc58b5785279aa7370a82f5889974e772a515f506d-rootfs.mount: Deactivated successfully. May 8 00:40:42.762238 containerd[1484]: time="2025-05-08T00:40:42.762120044Z" level=info msg="shim disconnected" id=9c698cd559ccfe71acbf96dc58b5785279aa7370a82f5889974e772a515f506d namespace=k8s.io May 8 00:40:42.762238 containerd[1484]: time="2025-05-08T00:40:42.762192096Z" level=warning msg="cleaning up after shim disconnected" id=9c698cd559ccfe71acbf96dc58b5785279aa7370a82f5889974e772a515f506d namespace=k8s.io May 8 00:40:42.762595 containerd[1484]: time="2025-05-08T00:40:42.762200586Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:40:43.136014 kubelet[2686]: E0508 00:40:43.135704 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q8q6q" podUID="ae949d8a-9850-4b3f-b127-0cc79fb660b3" May 8 00:40:43.197619 kubelet[2686]: E0508 00:40:43.197341 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:43.661036 containerd[1484]: time="2025-05-08T00:40:43.660994259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:43.661859 containerd[1484]: time="2025-05-08T00:40:43.661828755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 8 00:40:43.662935 containerd[1484]: time="2025-05-08T00:40:43.662903543Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:43.664782 containerd[1484]: time="2025-05-08T00:40:43.664749175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:43.665382 containerd[1484]: time="2025-05-08T00:40:43.665349175Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag 
\"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 1.098439934s" May 8 00:40:43.665456 containerd[1484]: time="2025-05-08T00:40:43.665441707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 8 00:40:43.666562 containerd[1484]: time="2025-05-08T00:40:43.666530586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 8 00:40:43.680550 containerd[1484]: time="2025-05-08T00:40:43.680525618Z" level=info msg="CreateContainer within sandbox \"05adc5091f1526eff8a8e4b6da5729a63b2b8887537c4a9735b3d7f6286f8b95\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 8 00:40:43.690573 containerd[1484]: time="2025-05-08T00:40:43.690472750Z" level=info msg="CreateContainer within sandbox \"05adc5091f1526eff8a8e4b6da5729a63b2b8887537c4a9735b3d7f6286f8b95\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c7783794c5ef779066b27cd5d4201cc614c061961791de9642b6279981ae1960\"" May 8 00:40:43.692007 containerd[1484]: time="2025-05-08T00:40:43.691404846Z" level=info msg="StartContainer for \"c7783794c5ef779066b27cd5d4201cc614c061961791de9642b6279981ae1960\"" May 8 00:40:43.726374 systemd[1]: Started cri-containerd-c7783794c5ef779066b27cd5d4201cc614c061961791de9642b6279981ae1960.scope - libcontainer container c7783794c5ef779066b27cd5d4201cc614c061961791de9642b6279981ae1960. May 8 00:40:43.773158 containerd[1484]: time="2025-05-08T00:40:43.773090198Z" level=info msg="StartContainer for \"c7783794c5ef779066b27cd5d4201cc614c061961791de9642b6279981ae1960\" returns successfully" May 8 00:40:44.199080 kubelet[2686]: E0508 00:40:44.199032 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:44.672573 systemd[1]: run-containerd-runc-k8s.io-c7783794c5ef779066b27cd5d4201cc614c061961791de9642b6279981ae1960-runc.gU4SNJ.mount: Deactivated successfully. 
May 8 00:40:45.137492 kubelet[2686]: E0508 00:40:45.136899 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q8q6q" podUID="ae949d8a-9850-4b3f-b127-0cc79fb660b3" May 8 00:40:45.201963 kubelet[2686]: I0508 00:40:45.201083 2686 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:40:45.201963 kubelet[2686]: E0508 00:40:45.201520 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:45.596545 containerd[1484]: time="2025-05-08T00:40:45.596492737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:45.597291 containerd[1484]: time="2025-05-08T00:40:45.597082206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 8 00:40:45.598986 containerd[1484]: time="2025-05-08T00:40:45.597769677Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:45.600237 containerd[1484]: time="2025-05-08T00:40:45.599507753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:45.600237 containerd[1484]: time="2025-05-08T00:40:45.600129112Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 1.933569036s" May 8 00:40:45.600237 containerd[1484]: time="2025-05-08T00:40:45.600156862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 8 00:40:45.602923 containerd[1484]: time="2025-05-08T00:40:45.602898664Z" level=info msg="CreateContainer within sandbox \"3acb7485addf6eb8d4a63ecf8d3d155204cb260d837e2abb3fda877d017287d9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 8 00:40:45.621373 containerd[1484]: time="2025-05-08T00:40:45.621339742Z" level=info msg="CreateContainer within sandbox \"3acb7485addf6eb8d4a63ecf8d3d155204cb260d837e2abb3fda877d017287d9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4853188483b8ef0692357c414ccf1533b65689739856285913f4169a8970c920\"" May 8 00:40:45.622380 containerd[1484]: time="2025-05-08T00:40:45.622351577Z" level=info msg="StartContainer for \"4853188483b8ef0692357c414ccf1533b65689739856285913f4169a8970c920\"" May 8 00:40:45.665371 systemd[1]: Started cri-containerd-4853188483b8ef0692357c414ccf1533b65689739856285913f4169a8970c920.scope - libcontainer container 4853188483b8ef0692357c414ccf1533b65689739856285913f4169a8970c920. 
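The csi-node-driver-q8q6q pod keeps being skipped with "cni plugin not initialized" because the container runtime reports NetworkReady=false until a CNI network configuration exists on the node, and writing that configuration is exactly what the install-cni container started above is for. A hedged sketch of the readiness idea; the directory and file suffixes below are the conventional defaults, not values taken from this log:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // cniConfigured reports whether any CNI network config is present in the
    // given directory; containerd's real check is more involved than this.
    func cniConfigured(dir string) bool {
        for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
            if matches, _ := filepath.Glob(filepath.Join(dir, pat)); len(matches) > 0 {
                return true
            }
        }
        return false
    }

    func main() {
        dir := "/etc/cni/net.d" // conventional default CNI config dir (assumption)
        if !cniConfigured(dir) {
            fmt.Println("NetworkReady=false: cni plugin not initialized")
            os.Exit(1)
        }
        fmt.Println("NetworkReady=true")
    }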
May 8 00:40:45.671374 systemd[1]: run-containerd-runc-k8s.io-4853188483b8ef0692357c414ccf1533b65689739856285913f4169a8970c920-runc.KpdSqQ.mount: Deactivated successfully. May 8 00:40:45.695623 containerd[1484]: time="2025-05-08T00:40:45.695532771Z" level=info msg="StartContainer for \"4853188483b8ef0692357c414ccf1533b65689739856285913f4169a8970c920\" returns successfully" May 8 00:40:46.174828 systemd[1]: cri-containerd-4853188483b8ef0692357c414ccf1533b65689739856285913f4169a8970c920.scope: Deactivated successfully. May 8 00:40:46.175451 systemd[1]: cri-containerd-4853188483b8ef0692357c414ccf1533b65689739856285913f4169a8970c920.scope: Consumed 470ms CPU time, 173.1M memory peak, 154M written to disk. May 8 00:40:46.197637 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4853188483b8ef0692357c414ccf1533b65689739856285913f4169a8970c920-rootfs.mount: Deactivated successfully. May 8 00:40:46.213570 kubelet[2686]: E0508 00:40:46.213519 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:46.241671 kubelet[2686]: I0508 00:40:46.241387 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6b5867ffc8-wnmhq" podStartSLOduration=3.590415787 podStartE2EDuration="5.241372349s" podCreationTimestamp="2025-05-08 00:40:41 +0000 UTC" firstStartedPulling="2025-05-08 00:40:42.015188757 +0000 UTC m=+19.984582456" lastFinishedPulling="2025-05-08 00:40:43.666145319 +0000 UTC m=+21.635539018" observedRunningTime="2025-05-08 00:40:44.207790839 +0000 UTC m=+22.177184548" watchObservedRunningTime="2025-05-08 00:40:46.241372349 +0000 UTC m=+24.210766048" May 8 00:40:46.257468 kubelet[2686]: I0508 00:40:46.256256 2686 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 8 00:40:46.285251 containerd[1484]: time="2025-05-08T00:40:46.285019693Z" level=info msg="shim disconnected" id=4853188483b8ef0692357c414ccf1533b65689739856285913f4169a8970c920 namespace=k8s.io May 8 00:40:46.285251 containerd[1484]: time="2025-05-08T00:40:46.285068364Z" level=warning msg="cleaning up after shim disconnected" id=4853188483b8ef0692357c414ccf1533b65689739856285913f4169a8970c920 namespace=k8s.io May 8 00:40:46.285251 containerd[1484]: time="2025-05-08T00:40:46.285076394Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:40:46.289406 kubelet[2686]: I0508 00:40:46.289371 2686 topology_manager.go:215] "Topology Admit Handler" podUID="1d395b40-74ec-4d21-9505-050a6c6424b9" podNamespace="kube-system" podName="coredns-7db6d8ff4d-t5sjv" May 8 00:40:46.292281 kubelet[2686]: I0508 00:40:46.290813 2686 topology_manager.go:215] "Topology Admit Handler" podUID="b968d45f-0186-4bf1-af0b-3789d578367b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ndt8s" May 8 00:40:46.294473 kubelet[2686]: I0508 00:40:46.293889 2686 topology_manager.go:215] "Topology Admit Handler" podUID="210d2f6d-8bde-4f98-93d8-48808afe079f" podNamespace="calico-system" podName="calico-kube-controllers-66457cb4b-4cpwk" May 8 00:40:46.296012 kubelet[2686]: I0508 00:40:46.295633 2686 topology_manager.go:215] "Topology Admit Handler" podUID="e43f4851-92c0-4238-8905-f3f57d62dc20" podNamespace="calico-apiserver" podName="calico-apiserver-95f5468f8-vgm4z" May 8 00:40:46.300308 kubelet[2686]: W0508 00:40:46.297085 2686 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is 
forbidden: User "system:node:172-237-145-97" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172-237-145-97' and this object May 8 00:40:46.300374 kubelet[2686]: E0508 00:40:46.300320 2686 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:172-237-145-97" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172-237-145-97' and this object May 8 00:40:46.300623 kubelet[2686]: I0508 00:40:46.300566 2686 topology_manager.go:215] "Topology Admit Handler" podUID="a2974be7-7581-4fce-a16e-15f650ba010f" podNamespace="calico-apiserver" podName="calico-apiserver-95f5468f8-zknsp" May 8 00:40:46.308898 kubelet[2686]: I0508 00:40:46.303465 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/210d2f6d-8bde-4f98-93d8-48808afe079f-tigera-ca-bundle\") pod \"calico-kube-controllers-66457cb4b-4cpwk\" (UID: \"210d2f6d-8bde-4f98-93d8-48808afe079f\") " pod="calico-system/calico-kube-controllers-66457cb4b-4cpwk" May 8 00:40:46.308953 kubelet[2686]: I0508 00:40:46.308916 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e43f4851-92c0-4238-8905-f3f57d62dc20-calico-apiserver-certs\") pod \"calico-apiserver-95f5468f8-vgm4z\" (UID: \"e43f4851-92c0-4238-8905-f3f57d62dc20\") " pod="calico-apiserver/calico-apiserver-95f5468f8-vgm4z" May 8 00:40:46.308953 kubelet[2686]: I0508 00:40:46.308938 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7gbj\" (UniqueName: \"kubernetes.io/projected/210d2f6d-8bde-4f98-93d8-48808afe079f-kube-api-access-g7gbj\") pod \"calico-kube-controllers-66457cb4b-4cpwk\" (UID: \"210d2f6d-8bde-4f98-93d8-48808afe079f\") " pod="calico-system/calico-kube-controllers-66457cb4b-4cpwk" May 8 00:40:46.309012 kubelet[2686]: I0508 00:40:46.308956 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b968d45f-0186-4bf1-af0b-3789d578367b-config-volume\") pod \"coredns-7db6d8ff4d-ndt8s\" (UID: \"b968d45f-0186-4bf1-af0b-3789d578367b\") " pod="kube-system/coredns-7db6d8ff4d-ndt8s" May 8 00:40:46.309012 kubelet[2686]: I0508 00:40:46.308974 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jls9d\" (UniqueName: \"kubernetes.io/projected/b968d45f-0186-4bf1-af0b-3789d578367b-kube-api-access-jls9d\") pod \"coredns-7db6d8ff4d-ndt8s\" (UID: \"b968d45f-0186-4bf1-af0b-3789d578367b\") " pod="kube-system/coredns-7db6d8ff4d-ndt8s" May 8 00:40:46.309045 systemd[1]: Created slice kubepods-burstable-pod1d395b40_74ec_4d21_9505_050a6c6424b9.slice - libcontainer container kubepods-burstable-pod1d395b40_74ec_4d21_9505_050a6c6424b9.slice. 
May 8 00:40:46.310331 kubelet[2686]: I0508 00:40:46.310304 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbqcl\" (UniqueName: \"kubernetes.io/projected/e43f4851-92c0-4238-8905-f3f57d62dc20-kube-api-access-zbqcl\") pod \"calico-apiserver-95f5468f8-vgm4z\" (UID: \"e43f4851-92c0-4238-8905-f3f57d62dc20\") " pod="calico-apiserver/calico-apiserver-95f5468f8-vgm4z" May 8 00:40:46.313927 kubelet[2686]: I0508 00:40:46.313440 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d395b40-74ec-4d21-9505-050a6c6424b9-config-volume\") pod \"coredns-7db6d8ff4d-t5sjv\" (UID: \"1d395b40-74ec-4d21-9505-050a6c6424b9\") " pod="kube-system/coredns-7db6d8ff4d-t5sjv" May 8 00:40:46.313927 kubelet[2686]: I0508 00:40:46.313467 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llwb2\" (UniqueName: \"kubernetes.io/projected/1d395b40-74ec-4d21-9505-050a6c6424b9-kube-api-access-llwb2\") pod \"coredns-7db6d8ff4d-t5sjv\" (UID: \"1d395b40-74ec-4d21-9505-050a6c6424b9\") " pod="kube-system/coredns-7db6d8ff4d-t5sjv" May 8 00:40:46.320389 systemd[1]: Created slice kubepods-burstable-podb968d45f_0186_4bf1_af0b_3789d578367b.slice - libcontainer container kubepods-burstable-podb968d45f_0186_4bf1_af0b_3789d578367b.slice. May 8 00:40:46.329250 systemd[1]: Created slice kubepods-besteffort-pod210d2f6d_8bde_4f98_93d8_48808afe079f.slice - libcontainer container kubepods-besteffort-pod210d2f6d_8bde_4f98_93d8_48808afe079f.slice. May 8 00:40:46.340720 systemd[1]: Created slice kubepods-besteffort-pode43f4851_92c0_4238_8905_f3f57d62dc20.slice - libcontainer container kubepods-besteffort-pode43f4851_92c0_4238_8905_f3f57d62dc20.slice. May 8 00:40:46.347092 systemd[1]: Created slice kubepods-besteffort-poda2974be7_7581_4fce_a16e_15f650ba010f.slice - libcontainer container kubepods-besteffort-poda2974be7_7581_4fce_a16e_15f650ba010f.slice. 
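The "Created slice kubepods-burstable-pod…" and "kubepods-besteffort-pod…" units above follow a visible pattern: the pod's QoS class chooses the parent segment, and the pod UID with dashes turned into underscores forms the unit name (for example 1d395b40-74ec-4d21-9505-050a6c6424b9 becomes kubepods-burstable-pod1d395b40_74ec_4d21_9505_050a6c6424b9.slice). A small sketch of that naming as implied by these lines, not quoted from the kubelet's cgroup code; Guaranteed pods, which are typically parented directly under kubepods.slice, do not appear in this log:

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName rebuilds the systemd slice naming pattern visible in the log:
    // kubepods-<qos>-pod<uid with underscores>.slice
    func sliceName(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice",
            strings.ToLower(qos), strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(sliceName("Burstable", "1d395b40-74ec-4d21-9505-050a6c6424b9"))
        fmt.Println(sliceName("BestEffort", "210d2f6d-8bde-4f98-93d8-48808afe079f"))
    }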
May 8 00:40:46.414228 kubelet[2686]: I0508 00:40:46.413682 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqwlp\" (UniqueName: \"kubernetes.io/projected/a2974be7-7581-4fce-a16e-15f650ba010f-kube-api-access-lqwlp\") pod \"calico-apiserver-95f5468f8-zknsp\" (UID: \"a2974be7-7581-4fce-a16e-15f650ba010f\") " pod="calico-apiserver/calico-apiserver-95f5468f8-zknsp" May 8 00:40:46.414228 kubelet[2686]: I0508 00:40:46.413757 2686 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a2974be7-7581-4fce-a16e-15f650ba010f-calico-apiserver-certs\") pod \"calico-apiserver-95f5468f8-zknsp\" (UID: \"a2974be7-7581-4fce-a16e-15f650ba010f\") " pod="calico-apiserver/calico-apiserver-95f5468f8-zknsp" May 8 00:40:46.635295 containerd[1484]: time="2025-05-08T00:40:46.635120280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66457cb4b-4cpwk,Uid:210d2f6d-8bde-4f98-93d8-48808afe079f,Namespace:calico-system,Attempt:0,}" May 8 00:40:46.644713 containerd[1484]: time="2025-05-08T00:40:46.644653435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95f5468f8-vgm4z,Uid:e43f4851-92c0-4238-8905-f3f57d62dc20,Namespace:calico-apiserver,Attempt:0,}" May 8 00:40:46.653418 containerd[1484]: time="2025-05-08T00:40:46.653373267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95f5468f8-zknsp,Uid:a2974be7-7581-4fce-a16e-15f650ba010f,Namespace:calico-apiserver,Attempt:0,}" May 8 00:40:46.721097 containerd[1484]: time="2025-05-08T00:40:46.721046629Z" level=error msg="Failed to destroy network for sandbox \"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:46.724222 containerd[1484]: time="2025-05-08T00:40:46.722339278Z" level=error msg="encountered an error cleaning up failed sandbox \"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:46.724222 containerd[1484]: time="2025-05-08T00:40:46.722421689Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66457cb4b-4cpwk,Uid:210d2f6d-8bde-4f98-93d8-48808afe079f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:46.724354 kubelet[2686]: E0508 00:40:46.722631 2686 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:46.724354 kubelet[2686]: E0508 00:40:46.722698 2686 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66457cb4b-4cpwk" May 8 00:40:46.724354 kubelet[2686]: E0508 00:40:46.722719 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66457cb4b-4cpwk" May 8 00:40:46.724441 kubelet[2686]: E0508 00:40:46.722761 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66457cb4b-4cpwk_calico-system(210d2f6d-8bde-4f98-93d8-48808afe079f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66457cb4b-4cpwk_calico-system(210d2f6d-8bde-4f98-93d8-48808afe079f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66457cb4b-4cpwk" podUID="210d2f6d-8bde-4f98-93d8-48808afe079f" May 8 00:40:46.724983 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51-shm.mount: Deactivated successfully. May 8 00:40:46.767086 containerd[1484]: time="2025-05-08T00:40:46.767025216Z" level=error msg="Failed to destroy network for sandbox \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:46.769413 containerd[1484]: time="2025-05-08T00:40:46.769181767Z" level=error msg="Failed to destroy network for sandbox \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:46.769448 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159-shm.mount: Deactivated successfully. 
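Every sandbox failure in this stretch has the same root cause: the Calico CNI plugin cannot find /var/lib/calico/nodename, a file the calico-node container writes when it starts, so each CNI ADD fails immediately and the kubelet marks the pod sync as failed. Until the calico-node-5b6cj pod started earlier becomes ready, these errors will keep repeating. A small sketch of the check the error message describes, illustrative rather than Calico's actual source:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // nodename returns the node name Calico's CNI plugin expects calico-node
    // to have written; the path is taken from the error messages in this log.
    func nodename() (string, error) {
        b, err := os.ReadFile("/var/lib/calico/nodename")
        if err != nil {
            return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        return strings.TrimSpace(string(b)), nil
    }

    func main() {
        n, err := nodename()
        if err != nil {
            fmt.Println("CNI ADD would fail here:", err)
            return
        }
        fmt.Println("node name:", n)
    }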
May 8 00:40:46.770073 containerd[1484]: time="2025-05-08T00:40:46.769992248Z" level=error msg="encountered an error cleaning up failed sandbox \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:46.770073 containerd[1484]: time="2025-05-08T00:40:46.770057309Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95f5468f8-vgm4z,Uid:e43f4851-92c0-4238-8905-f3f57d62dc20,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:46.770443 kubelet[2686]: E0508 00:40:46.770269 2686 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:46.770443 kubelet[2686]: E0508 00:40:46.770343 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95f5468f8-vgm4z" May 8 00:40:46.770443 kubelet[2686]: E0508 00:40:46.770369 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95f5468f8-vgm4z" May 8 00:40:46.770710 containerd[1484]: time="2025-05-08T00:40:46.770291572Z" level=error msg="encountered an error cleaning up failed sandbox \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:46.770710 containerd[1484]: time="2025-05-08T00:40:46.770378664Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95f5468f8-zknsp,Uid:a2974be7-7581-4fce-a16e-15f650ba010f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:46.771226 kubelet[2686]: E0508 00:40:46.770406 2686 pod_workers.go:1298] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-95f5468f8-vgm4z_calico-apiserver(e43f4851-92c0-4238-8905-f3f57d62dc20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-95f5468f8-vgm4z_calico-apiserver(e43f4851-92c0-4238-8905-f3f57d62dc20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-95f5468f8-vgm4z" podUID="e43f4851-92c0-4238-8905-f3f57d62dc20" May 8 00:40:46.771226 kubelet[2686]: E0508 00:40:46.770765 2686 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:46.771226 kubelet[2686]: E0508 00:40:46.770790 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95f5468f8-zknsp" May 8 00:40:46.771325 kubelet[2686]: E0508 00:40:46.770805 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95f5468f8-zknsp" May 8 00:40:46.771325 kubelet[2686]: E0508 00:40:46.770850 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-95f5468f8-zknsp_calico-apiserver(a2974be7-7581-4fce-a16e-15f650ba010f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-95f5468f8-zknsp_calico-apiserver(a2974be7-7581-4fce-a16e-15f650ba010f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-95f5468f8-zknsp" podUID="a2974be7-7581-4fce-a16e-15f650ba010f" May 8 00:40:46.773498 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5-shm.mount: Deactivated successfully. May 8 00:40:47.142668 systemd[1]: Created slice kubepods-besteffort-podae949d8a_9850_4b3f_b127_0cc79fb660b3.slice - libcontainer container kubepods-besteffort-podae949d8a_9850_4b3f_b127_0cc79fb660b3.slice. 
May 8 00:40:47.144648 containerd[1484]: time="2025-05-08T00:40:47.144618885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q8q6q,Uid:ae949d8a-9850-4b3f-b127-0cc79fb660b3,Namespace:calico-system,Attempt:0,}" May 8 00:40:47.208805 containerd[1484]: time="2025-05-08T00:40:47.208746627Z" level=error msg="Failed to destroy network for sandbox \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:47.209186 containerd[1484]: time="2025-05-08T00:40:47.209161552Z" level=error msg="encountered an error cleaning up failed sandbox \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:47.209251 containerd[1484]: time="2025-05-08T00:40:47.209236553Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q8q6q,Uid:ae949d8a-9850-4b3f-b127-0cc79fb660b3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:47.209823 kubelet[2686]: E0508 00:40:47.209464 2686 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:47.209823 kubelet[2686]: E0508 00:40:47.209526 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q8q6q" May 8 00:40:47.209823 kubelet[2686]: E0508 00:40:47.209548 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q8q6q" May 8 00:40:47.209930 kubelet[2686]: E0508 00:40:47.209590 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-q8q6q_calico-system(ae949d8a-9850-4b3f-b127-0cc79fb660b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-q8q6q_calico-system(ae949d8a-9850-4b3f-b127-0cc79fb660b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q8q6q" podUID="ae949d8a-9850-4b3f-b127-0cc79fb660b3" May 8 00:40:47.215224 kubelet[2686]: I0508 00:40:47.215161 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51" May 8 00:40:47.217960 containerd[1484]: time="2025-05-08T00:40:47.216168504Z" level=info msg="StopPodSandbox for \"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\"" May 8 00:40:47.217960 containerd[1484]: time="2025-05-08T00:40:47.216456648Z" level=info msg="Ensure that sandbox 175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51 in task-service has been cleanup successfully" May 8 00:40:47.218032 kubelet[2686]: I0508 00:40:47.217664 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512" May 8 00:40:47.218546 containerd[1484]: time="2025-05-08T00:40:47.218219371Z" level=info msg="StopPodSandbox for \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\"" May 8 00:40:47.218546 containerd[1484]: time="2025-05-08T00:40:47.218287732Z" level=info msg="TearDown network for sandbox \"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\" successfully" May 8 00:40:47.218546 containerd[1484]: time="2025-05-08T00:40:47.218303972Z" level=info msg="StopPodSandbox for \"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\" returns successfully" May 8 00:40:47.218546 containerd[1484]: time="2025-05-08T00:40:47.218424414Z" level=info msg="Ensure that sandbox a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512 in task-service has been cleanup successfully" May 8 00:40:47.219564 containerd[1484]: time="2025-05-08T00:40:47.219293445Z" level=info msg="TearDown network for sandbox \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\" successfully" May 8 00:40:47.219564 containerd[1484]: time="2025-05-08T00:40:47.219312236Z" level=info msg="StopPodSandbox for \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\" returns successfully" May 8 00:40:47.221081 containerd[1484]: time="2025-05-08T00:40:47.220591573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66457cb4b-4cpwk,Uid:210d2f6d-8bde-4f98-93d8-48808afe079f,Namespace:calico-system,Attempt:1,}" May 8 00:40:47.221980 containerd[1484]: time="2025-05-08T00:40:47.221950521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q8q6q,Uid:ae949d8a-9850-4b3f-b127-0cc79fb660b3,Namespace:calico-system,Attempt:1,}" May 8 00:40:47.222434 kubelet[2686]: I0508 00:40:47.222395 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5" May 8 00:40:47.223080 containerd[1484]: time="2025-05-08T00:40:47.223057355Z" level=info msg="StopPodSandbox for \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\"" May 8 00:40:47.223629 containerd[1484]: time="2025-05-08T00:40:47.223406420Z" level=info msg="Ensure that sandbox c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5 in task-service has been cleanup successfully" May 8 00:40:47.223806 
containerd[1484]: time="2025-05-08T00:40:47.223776205Z" level=info msg="TearDown network for sandbox \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\" successfully" May 8 00:40:47.224609 containerd[1484]: time="2025-05-08T00:40:47.223798585Z" level=info msg="StopPodSandbox for \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\" returns successfully" May 8 00:40:47.225984 kubelet[2686]: I0508 00:40:47.225657 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159" May 8 00:40:47.227717 containerd[1484]: time="2025-05-08T00:40:47.226378589Z" level=info msg="StopPodSandbox for \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\"" May 8 00:40:47.228466 containerd[1484]: time="2025-05-08T00:40:47.228343974Z" level=info msg="Ensure that sandbox 49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159 in task-service has been cleanup successfully" May 8 00:40:47.228751 containerd[1484]: time="2025-05-08T00:40:47.228707659Z" level=info msg="TearDown network for sandbox \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\" successfully" May 8 00:40:47.228807 containerd[1484]: time="2025-05-08T00:40:47.228793061Z" level=info msg="StopPodSandbox for \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\" returns successfully" May 8 00:40:47.228877 containerd[1484]: time="2025-05-08T00:40:47.226433199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95f5468f8-zknsp,Uid:a2974be7-7581-4fce-a16e-15f650ba010f,Namespace:calico-apiserver,Attempt:1,}" May 8 00:40:47.229708 containerd[1484]: time="2025-05-08T00:40:47.229464609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95f5468f8-vgm4z,Uid:e43f4851-92c0-4238-8905-f3f57d62dc20,Namespace:calico-apiserver,Attempt:1,}" May 8 00:40:47.231001 kubelet[2686]: E0508 00:40:47.230976 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:47.233188 containerd[1484]: time="2025-05-08T00:40:47.233079636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 8 00:40:47.356055 containerd[1484]: time="2025-05-08T00:40:47.355703947Z" level=error msg="Failed to destroy network for sandbox \"0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:47.357038 containerd[1484]: time="2025-05-08T00:40:47.357011334Z" level=error msg="encountered an error cleaning up failed sandbox \"0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:47.357458 containerd[1484]: time="2025-05-08T00:40:47.357436660Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q8q6q,Uid:ae949d8a-9850-4b3f-b127-0cc79fb660b3,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:47.359125 kubelet[2686]: E0508 00:40:47.357956 2686 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:47.359125 kubelet[2686]: E0508 00:40:47.358008 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q8q6q" May 8 00:40:47.359125 kubelet[2686]: E0508 00:40:47.358027 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q8q6q" May 8 00:40:47.359293 kubelet[2686]: E0508 00:40:47.358066 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-q8q6q_calico-system(ae949d8a-9850-4b3f-b127-0cc79fb660b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-q8q6q_calico-system(ae949d8a-9850-4b3f-b127-0cc79fb660b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q8q6q" podUID="ae949d8a-9850-4b3f-b127-0cc79fb660b3" May 8 00:40:47.365065 containerd[1484]: time="2025-05-08T00:40:47.364982898Z" level=error msg="Failed to destroy network for sandbox \"0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:47.365549 containerd[1484]: time="2025-05-08T00:40:47.365518366Z" level=error msg="encountered an error cleaning up failed sandbox \"0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:47.365644 containerd[1484]: time="2025-05-08T00:40:47.365624137Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95f5468f8-vgm4z,Uid:e43f4851-92c0-4238-8905-f3f57d62dc20,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox 
\"0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:47.366510 kubelet[2686]: E0508 00:40:47.365890 2686 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:47.366510 kubelet[2686]: E0508 00:40:47.365929 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95f5468f8-vgm4z" May 8 00:40:47.366510 kubelet[2686]: E0508 00:40:47.365946 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95f5468f8-vgm4z" May 8 00:40:47.366610 kubelet[2686]: E0508 00:40:47.365975 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-95f5468f8-vgm4z_calico-apiserver(e43f4851-92c0-4238-8905-f3f57d62dc20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-95f5468f8-vgm4z_calico-apiserver(e43f4851-92c0-4238-8905-f3f57d62dc20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-95f5468f8-vgm4z" podUID="e43f4851-92c0-4238-8905-f3f57d62dc20" May 8 00:40:47.368837 containerd[1484]: time="2025-05-08T00:40:47.368788829Z" level=error msg="Failed to destroy network for sandbox \"4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:47.369189 containerd[1484]: time="2025-05-08T00:40:47.369144243Z" level=error msg="encountered an error cleaning up failed sandbox \"4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:47.370277 containerd[1484]: time="2025-05-08T00:40:47.369232764Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-95f5468f8-zknsp,Uid:a2974be7-7581-4fce-a16e-15f650ba010f,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:47.370748 kubelet[2686]: E0508 00:40:47.370416 2686 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:47.370748 kubelet[2686]: E0508 00:40:47.370450 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95f5468f8-zknsp" May 8 00:40:47.370748 kubelet[2686]: E0508 00:40:47.370465 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95f5468f8-zknsp" May 8 00:40:47.370829 kubelet[2686]: E0508 00:40:47.370491 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-95f5468f8-zknsp_calico-apiserver(a2974be7-7581-4fce-a16e-15f650ba010f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-95f5468f8-zknsp_calico-apiserver(a2974be7-7581-4fce-a16e-15f650ba010f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-95f5468f8-zknsp" podUID="a2974be7-7581-4fce-a16e-15f650ba010f" May 8 00:40:47.371353 containerd[1484]: time="2025-05-08T00:40:47.371293791Z" level=error msg="Failed to destroy network for sandbox \"fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:47.371633 containerd[1484]: time="2025-05-08T00:40:47.371600016Z" level=error msg="encountered an error cleaning up failed sandbox \"fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 8 00:40:47.372104 containerd[1484]: time="2025-05-08T00:40:47.371646666Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66457cb4b-4cpwk,Uid:210d2f6d-8bde-4f98-93d8-48808afe079f,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:47.372173 kubelet[2686]: E0508 00:40:47.371962 2686 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:47.372173 kubelet[2686]: E0508 00:40:47.371988 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66457cb4b-4cpwk" May 8 00:40:47.372173 kubelet[2686]: E0508 00:40:47.372007 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66457cb4b-4cpwk" May 8 00:40:47.372271 kubelet[2686]: E0508 00:40:47.372037 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66457cb4b-4cpwk_calico-system(210d2f6d-8bde-4f98-93d8-48808afe079f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66457cb4b-4cpwk_calico-system(210d2f6d-8bde-4f98-93d8-48808afe079f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66457cb4b-4cpwk" podUID="210d2f6d-8bde-4f98-93d8-48808afe079f" May 8 00:40:47.415413 kubelet[2686]: E0508 00:40:47.415162 2686 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition May 8 00:40:47.415413 kubelet[2686]: E0508 00:40:47.415172 2686 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition May 8 00:40:47.415413 kubelet[2686]: E0508 00:40:47.415267 2686 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b968d45f-0186-4bf1-af0b-3789d578367b-config-volume podName:b968d45f-0186-4bf1-af0b-3789d578367b nodeName:}" failed. 
No retries permitted until 2025-05-08 00:40:47.915239368 +0000 UTC m=+25.884633077 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b968d45f-0186-4bf1-af0b-3789d578367b-config-volume") pod "coredns-7db6d8ff4d-ndt8s" (UID: "b968d45f-0186-4bf1-af0b-3789d578367b") : failed to sync configmap cache: timed out waiting for the condition May 8 00:40:47.415413 kubelet[2686]: E0508 00:40:47.415285 2686 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d395b40-74ec-4d21-9505-050a6c6424b9-config-volume podName:1d395b40-74ec-4d21-9505-050a6c6424b9 nodeName:}" failed. No retries permitted until 2025-05-08 00:40:47.915278099 +0000 UTC m=+25.884671798 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1d395b40-74ec-4d21-9505-050a6c6424b9-config-volume") pod "coredns-7db6d8ff4d-t5sjv" (UID: "1d395b40-74ec-4d21-9505-050a6c6424b9") : failed to sync configmap cache: timed out waiting for the condition May 8 00:40:47.673584 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512-shm.mount: Deactivated successfully. May 8 00:40:47.673712 systemd[1]: run-netns-cni\x2dda5d498d\x2d0ef6\x2d88b4\x2d94d5\x2d785df31f7cce.mount: Deactivated successfully. May 8 00:40:47.673792 systemd[1]: run-netns-cni\x2d8b6e9ef4\x2db94c\x2dcf4b\x2d8940\x2d7da283cacb51.mount: Deactivated successfully. May 8 00:40:47.673862 systemd[1]: run-netns-cni\x2d7d747f82\x2dc0c7\x2d0544\x2d5b5c\x2d230f347050e7.mount: Deactivated successfully. May 8 00:40:48.114296 kubelet[2686]: E0508 00:40:48.113849 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:48.116639 containerd[1484]: time="2025-05-08T00:40:48.115310720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t5sjv,Uid:1d395b40-74ec-4d21-9505-050a6c6424b9,Namespace:kube-system,Attempt:0,}" May 8 00:40:48.128267 kubelet[2686]: E0508 00:40:48.128225 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:48.130330 containerd[1484]: time="2025-05-08T00:40:48.130055750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ndt8s,Uid:b968d45f-0186-4bf1-af0b-3789d578367b,Namespace:kube-system,Attempt:0,}" May 8 00:40:48.234000 kubelet[2686]: I0508 00:40:48.233970 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8" May 8 00:40:48.235688 containerd[1484]: time="2025-05-08T00:40:48.235467671Z" level=error msg="Failed to destroy network for sandbox \"1a5ec6334fbdcfbfe62e19da00bca8aef58ac57622a4388aeed2a44d18bc94a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:48.236879 containerd[1484]: time="2025-05-08T00:40:48.236854639Z" level=error msg="Failed to destroy network for sandbox \"7cea920b458187df49fda92091eea31ad48f4aca2c9ceb6a5427ed042164e7e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:48.238681 kubelet[2686]: I0508 00:40:48.238289 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2" May 8 00:40:48.241516 containerd[1484]: time="2025-05-08T00:40:48.237158212Z" level=error msg="encountered an error cleaning up failed sandbox \"1a5ec6334fbdcfbfe62e19da00bca8aef58ac57622a4388aeed2a44d18bc94a8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:48.241516 containerd[1484]: time="2025-05-08T00:40:48.241491615Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t5sjv,Uid:1d395b40-74ec-4d21-9505-050a6c6424b9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1a5ec6334fbdcfbfe62e19da00bca8aef58ac57622a4388aeed2a44d18bc94a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:48.241516 containerd[1484]: time="2025-05-08T00:40:48.237331114Z" level=info msg="StopPodSandbox for \"0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8\"" May 8 00:40:48.241722 containerd[1484]: time="2025-05-08T00:40:48.241695588Z" level=info msg="Ensure that sandbox 0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8 in task-service has been cleanup successfully" May 8 00:40:48.241940 containerd[1484]: time="2025-05-08T00:40:48.241916051Z" level=error msg="encountered an error cleaning up failed sandbox \"7cea920b458187df49fda92091eea31ad48f4aca2c9ceb6a5427ed042164e7e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:48.242086 containerd[1484]: time="2025-05-08T00:40:48.242022491Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ndt8s,Uid:b968d45f-0186-4bf1-af0b-3789d578367b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7cea920b458187df49fda92091eea31ad48f4aca2c9ceb6a5427ed042164e7e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:48.242158 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1a5ec6334fbdcfbfe62e19da00bca8aef58ac57622a4388aeed2a44d18bc94a8-shm.mount: Deactivated successfully. 
May 8 00:40:48.242924 kubelet[2686]: E0508 00:40:48.242665 2686 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a5ec6334fbdcfbfe62e19da00bca8aef58ac57622a4388aeed2a44d18bc94a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:48.242924 kubelet[2686]: E0508 00:40:48.242701 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a5ec6334fbdcfbfe62e19da00bca8aef58ac57622a4388aeed2a44d18bc94a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-t5sjv" May 8 00:40:48.242924 kubelet[2686]: E0508 00:40:48.242720 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a5ec6334fbdcfbfe62e19da00bca8aef58ac57622a4388aeed2a44d18bc94a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-t5sjv" May 8 00:40:48.243002 kubelet[2686]: E0508 00:40:48.242748 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-t5sjv_kube-system(1d395b40-74ec-4d21-9505-050a6c6424b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-t5sjv_kube-system(1d395b40-74ec-4d21-9505-050a6c6424b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a5ec6334fbdcfbfe62e19da00bca8aef58ac57622a4388aeed2a44d18bc94a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-t5sjv" podUID="1d395b40-74ec-4d21-9505-050a6c6424b9" May 8 00:40:48.243002 kubelet[2686]: E0508 00:40:48.242835 2686 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cea920b458187df49fda92091eea31ad48f4aca2c9ceb6a5427ed042164e7e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:48.243002 kubelet[2686]: E0508 00:40:48.242854 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cea920b458187df49fda92091eea31ad48f4aca2c9ceb6a5427ed042164e7e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ndt8s" May 8 00:40:48.243139 kubelet[2686]: E0508 00:40:48.242867 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cea920b458187df49fda92091eea31ad48f4aca2c9ceb6a5427ed042164e7e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ndt8s" May 8 00:40:48.243139 kubelet[2686]: E0508 00:40:48.242897 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-ndt8s_kube-system(b968d45f-0186-4bf1-af0b-3789d578367b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-ndt8s_kube-system(b968d45f-0186-4bf1-af0b-3789d578367b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7cea920b458187df49fda92091eea31ad48f4aca2c9ceb6a5427ed042164e7e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-ndt8s" podUID="b968d45f-0186-4bf1-af0b-3789d578367b" May 8 00:40:48.245931 containerd[1484]: time="2025-05-08T00:40:48.243306577Z" level=info msg="TearDown network for sandbox \"0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8\" successfully" May 8 00:40:48.245931 containerd[1484]: time="2025-05-08T00:40:48.245901250Z" level=info msg="StopPodSandbox for \"0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8\" returns successfully" May 8 00:40:48.246031 containerd[1484]: time="2025-05-08T00:40:48.238880304Z" level=info msg="StopPodSandbox for \"4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2\"" May 8 00:40:48.246188 containerd[1484]: time="2025-05-08T00:40:48.246158272Z" level=info msg="Ensure that sandbox 4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2 in task-service has been cleanup successfully" May 8 00:40:48.246441 containerd[1484]: time="2025-05-08T00:40:48.246410595Z" level=info msg="TearDown network for sandbox \"4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2\" successfully" May 8 00:40:48.246441 containerd[1484]: time="2025-05-08T00:40:48.246431466Z" level=info msg="StopPodSandbox for \"4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2\" returns successfully" May 8 00:40:48.247870 containerd[1484]: time="2025-05-08T00:40:48.247838203Z" level=info msg="StopPodSandbox for \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\"" May 8 00:40:48.248085 containerd[1484]: time="2025-05-08T00:40:48.248020635Z" level=info msg="TearDown network for sandbox \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\" successfully" May 8 00:40:48.248200 containerd[1484]: time="2025-05-08T00:40:48.248177827Z" level=info msg="StopPodSandbox for \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\" returns successfully" May 8 00:40:48.249183 kubelet[2686]: I0508 00:40:48.249139 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b" May 8 00:40:48.249385 containerd[1484]: time="2025-05-08T00:40:48.249365621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95f5468f8-zknsp,Uid:a2974be7-7581-4fce-a16e-15f650ba010f,Namespace:calico-apiserver,Attempt:2,}" May 8 00:40:48.253188 containerd[1484]: time="2025-05-08T00:40:48.250181851Z" level=info msg="StopPodSandbox for \"0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b\"" May 8 00:40:48.253984 containerd[1484]: time="2025-05-08T00:40:48.253763746Z" level=info msg="Ensure that sandbox 0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b 
in task-service has been cleanup successfully" May 8 00:40:48.253984 containerd[1484]: time="2025-05-08T00:40:48.250332583Z" level=info msg="StopPodSandbox for \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\"" May 8 00:40:48.254229 containerd[1484]: time="2025-05-08T00:40:48.254150120Z" level=info msg="TearDown network for sandbox \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\" successfully" May 8 00:40:48.254395 containerd[1484]: time="2025-05-08T00:40:48.254162420Z" level=info msg="StopPodSandbox for \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\" returns successfully" May 8 00:40:48.256483 containerd[1484]: time="2025-05-08T00:40:48.255401245Z" level=info msg="TearDown network for sandbox \"0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b\" successfully" May 8 00:40:48.256483 containerd[1484]: time="2025-05-08T00:40:48.255461416Z" level=info msg="StopPodSandbox for \"0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b\" returns successfully" May 8 00:40:48.258425 kubelet[2686]: I0508 00:40:48.258259 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617" May 8 00:40:48.258967 containerd[1484]: time="2025-05-08T00:40:48.258533294Z" level=info msg="StopPodSandbox for \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\"" May 8 00:40:48.258967 containerd[1484]: time="2025-05-08T00:40:48.258624025Z" level=info msg="TearDown network for sandbox \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\" successfully" May 8 00:40:48.258967 containerd[1484]: time="2025-05-08T00:40:48.258633865Z" level=info msg="StopPodSandbox for \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\" returns successfully" May 8 00:40:48.258967 containerd[1484]: time="2025-05-08T00:40:48.258700756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q8q6q,Uid:ae949d8a-9850-4b3f-b127-0cc79fb660b3,Namespace:calico-system,Attempt:2,}" May 8 00:40:48.260438 containerd[1484]: time="2025-05-08T00:40:48.260393227Z" level=info msg="StopPodSandbox for \"fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617\"" May 8 00:40:48.260707 containerd[1484]: time="2025-05-08T00:40:48.260690710Z" level=info msg="Ensure that sandbox fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617 in task-service has been cleanup successfully" May 8 00:40:48.262303 containerd[1484]: time="2025-05-08T00:40:48.262265819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95f5468f8-vgm4z,Uid:e43f4851-92c0-4238-8905-f3f57d62dc20,Namespace:calico-apiserver,Attempt:2,}" May 8 00:40:48.263151 containerd[1484]: time="2025-05-08T00:40:48.263117640Z" level=info msg="TearDown network for sandbox \"fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617\" successfully" May 8 00:40:48.263969 containerd[1484]: time="2025-05-08T00:40:48.263937031Z" level=info msg="StopPodSandbox for \"fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617\" returns successfully" May 8 00:40:48.265239 containerd[1484]: time="2025-05-08T00:40:48.265079314Z" level=info msg="StopPodSandbox for \"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\"" May 8 00:40:48.265239 containerd[1484]: time="2025-05-08T00:40:48.265161495Z" level=info msg="TearDown network for sandbox \"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\" successfully" 
May 8 00:40:48.265239 containerd[1484]: time="2025-05-08T00:40:48.265171565Z" level=info msg="StopPodSandbox for \"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\" returns successfully" May 8 00:40:48.266603 containerd[1484]: time="2025-05-08T00:40:48.266583133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66457cb4b-4cpwk,Uid:210d2f6d-8bde-4f98-93d8-48808afe079f,Namespace:calico-system,Attempt:2,}" May 8 00:40:48.411865 containerd[1484]: time="2025-05-08T00:40:48.411829132Z" level=error msg="Failed to destroy network for sandbox \"b24804b59f7550a33579c0cf26310cbe31aa450307391cfd683dc6aeace768b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:48.413237 containerd[1484]: time="2025-05-08T00:40:48.413166107Z" level=error msg="encountered an error cleaning up failed sandbox \"b24804b59f7550a33579c0cf26310cbe31aa450307391cfd683dc6aeace768b1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:48.413934 containerd[1484]: time="2025-05-08T00:40:48.413427761Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q8q6q,Uid:ae949d8a-9850-4b3f-b127-0cc79fb660b3,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"b24804b59f7550a33579c0cf26310cbe31aa450307391cfd683dc6aeace768b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:48.414012 kubelet[2686]: E0508 00:40:48.413647 2686 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b24804b59f7550a33579c0cf26310cbe31aa450307391cfd683dc6aeace768b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:48.414012 kubelet[2686]: E0508 00:40:48.413705 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b24804b59f7550a33579c0cf26310cbe31aa450307391cfd683dc6aeace768b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q8q6q" May 8 00:40:48.414012 kubelet[2686]: E0508 00:40:48.413728 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b24804b59f7550a33579c0cf26310cbe31aa450307391cfd683dc6aeace768b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q8q6q" May 8 00:40:48.414104 kubelet[2686]: E0508 00:40:48.413767 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-q8q6q_calico-system(ae949d8a-9850-4b3f-b127-0cc79fb660b3)\" with CreatePodSandboxError: \"Failed to create sandbox 
for pod \\\"csi-node-driver-q8q6q_calico-system(ae949d8a-9850-4b3f-b127-0cc79fb660b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b24804b59f7550a33579c0cf26310cbe31aa450307391cfd683dc6aeace768b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q8q6q" podUID="ae949d8a-9850-4b3f-b127-0cc79fb660b3" May 8 00:40:48.414368 containerd[1484]: time="2025-05-08T00:40:48.414174390Z" level=error msg="Failed to destroy network for sandbox \"7215bbf85862b33af582b682e67e348d877731a8d7124c168f8bf683231e2b75\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:48.416225 containerd[1484]: time="2025-05-08T00:40:48.415465346Z" level=error msg="encountered an error cleaning up failed sandbox \"7215bbf85862b33af582b682e67e348d877731a8d7124c168f8bf683231e2b75\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:48.416225 containerd[1484]: time="2025-05-08T00:40:48.415530587Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95f5468f8-zknsp,Uid:a2974be7-7581-4fce-a16e-15f650ba010f,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"7215bbf85862b33af582b682e67e348d877731a8d7124c168f8bf683231e2b75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:48.416592 kubelet[2686]: E0508 00:40:48.416399 2686 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7215bbf85862b33af582b682e67e348d877731a8d7124c168f8bf683231e2b75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:48.416592 kubelet[2686]: E0508 00:40:48.416431 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7215bbf85862b33af582b682e67e348d877731a8d7124c168f8bf683231e2b75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95f5468f8-zknsp" May 8 00:40:48.416592 kubelet[2686]: E0508 00:40:48.416494 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7215bbf85862b33af582b682e67e348d877731a8d7124c168f8bf683231e2b75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95f5468f8-zknsp" May 8 00:40:48.416700 kubelet[2686]: E0508 00:40:48.416524 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-95f5468f8-zknsp_calico-apiserver(a2974be7-7581-4fce-a16e-15f650ba010f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-95f5468f8-zknsp_calico-apiserver(a2974be7-7581-4fce-a16e-15f650ba010f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7215bbf85862b33af582b682e67e348d877731a8d7124c168f8bf683231e2b75\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-95f5468f8-zknsp" podUID="a2974be7-7581-4fce-a16e-15f650ba010f" May 8 00:40:48.417592 containerd[1484]: time="2025-05-08T00:40:48.417569302Z" level=error msg="Failed to destroy network for sandbox \"8cae29a8a2767ed20610c206868847aaaf1eeec42ad83cc21a9adb569e1822b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:48.418024 containerd[1484]: time="2025-05-08T00:40:48.417976756Z" level=error msg="encountered an error cleaning up failed sandbox \"8cae29a8a2767ed20610c206868847aaaf1eeec42ad83cc21a9adb569e1822b1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:48.418838 containerd[1484]: time="2025-05-08T00:40:48.418816527Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66457cb4b-4cpwk,Uid:210d2f6d-8bde-4f98-93d8-48808afe079f,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"8cae29a8a2767ed20610c206868847aaaf1eeec42ad83cc21a9adb569e1822b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:48.419368 kubelet[2686]: E0508 00:40:48.419260 2686 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cae29a8a2767ed20610c206868847aaaf1eeec42ad83cc21a9adb569e1822b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:48.419368 kubelet[2686]: E0508 00:40:48.419288 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cae29a8a2767ed20610c206868847aaaf1eeec42ad83cc21a9adb569e1822b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66457cb4b-4cpwk" May 8 00:40:48.419368 kubelet[2686]: E0508 00:40:48.419303 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cae29a8a2767ed20610c206868847aaaf1eeec42ad83cc21a9adb569e1822b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66457cb4b-4cpwk" May 8 
00:40:48.419469 kubelet[2686]: E0508 00:40:48.419327 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66457cb4b-4cpwk_calico-system(210d2f6d-8bde-4f98-93d8-48808afe079f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66457cb4b-4cpwk_calico-system(210d2f6d-8bde-4f98-93d8-48808afe079f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8cae29a8a2767ed20610c206868847aaaf1eeec42ad83cc21a9adb569e1822b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66457cb4b-4cpwk" podUID="210d2f6d-8bde-4f98-93d8-48808afe079f" May 8 00:40:48.427823 containerd[1484]: time="2025-05-08T00:40:48.427763937Z" level=error msg="Failed to destroy network for sandbox \"9937260851b563393977e3e75f5403e25c4987dfae8dbf9b01dbb4ee076a77d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:48.428168 containerd[1484]: time="2025-05-08T00:40:48.428068920Z" level=error msg="encountered an error cleaning up failed sandbox \"9937260851b563393977e3e75f5403e25c4987dfae8dbf9b01dbb4ee076a77d5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:48.428256 containerd[1484]: time="2025-05-08T00:40:48.428151941Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95f5468f8-vgm4z,Uid:e43f4851-92c0-4238-8905-f3f57d62dc20,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"9937260851b563393977e3e75f5403e25c4987dfae8dbf9b01dbb4ee076a77d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:48.428450 kubelet[2686]: E0508 00:40:48.428409 2686 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9937260851b563393977e3e75f5403e25c4987dfae8dbf9b01dbb4ee076a77d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:48.428495 kubelet[2686]: E0508 00:40:48.428478 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9937260851b563393977e3e75f5403e25c4987dfae8dbf9b01dbb4ee076a77d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95f5468f8-vgm4z" May 8 00:40:48.428531 kubelet[2686]: E0508 00:40:48.428502 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9937260851b563393977e3e75f5403e25c4987dfae8dbf9b01dbb4ee076a77d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95f5468f8-vgm4z" May 8 00:40:48.428812 kubelet[2686]: E0508 00:40:48.428539 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-95f5468f8-vgm4z_calico-apiserver(e43f4851-92c0-4238-8905-f3f57d62dc20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-95f5468f8-vgm4z_calico-apiserver(e43f4851-92c0-4238-8905-f3f57d62dc20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9937260851b563393977e3e75f5403e25c4987dfae8dbf9b01dbb4ee076a77d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-95f5468f8-vgm4z" podUID="e43f4851-92c0-4238-8905-f3f57d62dc20" May 8 00:40:48.675957 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7cea920b458187df49fda92091eea31ad48f4aca2c9ceb6a5427ed042164e7e5-shm.mount: Deactivated successfully. May 8 00:40:48.676457 systemd[1]: run-netns-cni\x2d0bed8bf1\x2d410a\x2d2fd3\x2ddeda\x2d9701ec7a929f.mount: Deactivated successfully. May 8 00:40:48.676543 systemd[1]: run-netns-cni\x2ddc08fbb6\x2dca88\x2d3f56\x2db5b1\x2d9c87d2764e26.mount: Deactivated successfully. May 8 00:40:48.676614 systemd[1]: run-netns-cni\x2d8ce76072\x2dd685\x2d94f7\x2d2a19\x2dd2ca24a0a2df.mount: Deactivated successfully. May 8 00:40:48.676680 systemd[1]: run-netns-cni\x2d81a47856\x2d5f80\x2d64f9\x2d21ed\x2df40a6b2c02b4.mount: Deactivated successfully. May 8 00:40:49.262963 kubelet[2686]: I0508 00:40:49.261818 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cae29a8a2767ed20610c206868847aaaf1eeec42ad83cc21a9adb569e1822b1" May 8 00:40:49.263375 containerd[1484]: time="2025-05-08T00:40:49.262404602Z" level=info msg="StopPodSandbox for \"8cae29a8a2767ed20610c206868847aaaf1eeec42ad83cc21a9adb569e1822b1\"" May 8 00:40:49.263375 containerd[1484]: time="2025-05-08T00:40:49.262585174Z" level=info msg="Ensure that sandbox 8cae29a8a2767ed20610c206868847aaaf1eeec42ad83cc21a9adb569e1822b1 in task-service has been cleanup successfully" May 8 00:40:49.266549 containerd[1484]: time="2025-05-08T00:40:49.264890801Z" level=info msg="TearDown network for sandbox \"8cae29a8a2767ed20610c206868847aaaf1eeec42ad83cc21a9adb569e1822b1\" successfully" May 8 00:40:49.266549 containerd[1484]: time="2025-05-08T00:40:49.264910611Z" level=info msg="StopPodSandbox for \"8cae29a8a2767ed20610c206868847aaaf1eeec42ad83cc21a9adb569e1822b1\" returns successfully" May 8 00:40:49.266549 containerd[1484]: time="2025-05-08T00:40:49.265343486Z" level=info msg="StopPodSandbox for \"fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617\"" May 8 00:40:49.266549 containerd[1484]: time="2025-05-08T00:40:49.265415247Z" level=info msg="TearDown network for sandbox \"fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617\" successfully" May 8 00:40:49.266549 containerd[1484]: time="2025-05-08T00:40:49.265424377Z" level=info msg="StopPodSandbox for \"fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617\" returns successfully" May 8 00:40:49.266549 containerd[1484]: time="2025-05-08T00:40:49.265886203Z" level=info msg="StopPodSandbox for \"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\"" May 8 00:40:49.266549 containerd[1484]: 
time="2025-05-08T00:40:49.266242316Z" level=info msg="TearDown network for sandbox \"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\" successfully" May 8 00:40:49.266549 containerd[1484]: time="2025-05-08T00:40:49.266253436Z" level=info msg="StopPodSandbox for \"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\" returns successfully" May 8 00:40:49.266744 kubelet[2686]: I0508 00:40:49.266150 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cea920b458187df49fda92091eea31ad48f4aca2c9ceb6a5427ed042164e7e5" May 8 00:40:49.265853 systemd[1]: run-netns-cni\x2dcc53f141\x2d8720\x2d8bc2\x2dae96\x2d578642e15c00.mount: Deactivated successfully. May 8 00:40:49.269250 containerd[1484]: time="2025-05-08T00:40:49.268037476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66457cb4b-4cpwk,Uid:210d2f6d-8bde-4f98-93d8-48808afe079f,Namespace:calico-system,Attempt:3,}" May 8 00:40:49.269358 containerd[1484]: time="2025-05-08T00:40:49.269157249Z" level=info msg="StopPodSandbox for \"7cea920b458187df49fda92091eea31ad48f4aca2c9ceb6a5427ed042164e7e5\"" May 8 00:40:49.269607 containerd[1484]: time="2025-05-08T00:40:49.269584425Z" level=info msg="Ensure that sandbox 7cea920b458187df49fda92091eea31ad48f4aca2c9ceb6a5427ed042164e7e5 in task-service has been cleanup successfully" May 8 00:40:49.272277 containerd[1484]: time="2025-05-08T00:40:49.269969509Z" level=info msg="TearDown network for sandbox \"7cea920b458187df49fda92091eea31ad48f4aca2c9ceb6a5427ed042164e7e5\" successfully" May 8 00:40:49.272277 containerd[1484]: time="2025-05-08T00:40:49.269983118Z" level=info msg="StopPodSandbox for \"7cea920b458187df49fda92091eea31ad48f4aca2c9ceb6a5427ed042164e7e5\" returns successfully" May 8 00:40:49.272329 kubelet[2686]: E0508 00:40:49.271582 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:49.274008 systemd[1]: run-netns-cni\x2d868c493c\x2dac08\x2d8944\x2d4f59\x2d73e1eb767ccf.mount: Deactivated successfully. 
May 8 00:40:49.275911 containerd[1484]: time="2025-05-08T00:40:49.275718035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ndt8s,Uid:b968d45f-0186-4bf1-af0b-3789d578367b,Namespace:kube-system,Attempt:1,}" May 8 00:40:49.277134 kubelet[2686]: I0508 00:40:49.276348 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a5ec6334fbdcfbfe62e19da00bca8aef58ac57622a4388aeed2a44d18bc94a8" May 8 00:40:49.281755 containerd[1484]: time="2025-05-08T00:40:49.281709113Z" level=info msg="StopPodSandbox for \"1a5ec6334fbdcfbfe62e19da00bca8aef58ac57622a4388aeed2a44d18bc94a8\"" May 8 00:40:49.281900 containerd[1484]: time="2025-05-08T00:40:49.281859125Z" level=info msg="Ensure that sandbox 1a5ec6334fbdcfbfe62e19da00bca8aef58ac57622a4388aeed2a44d18bc94a8 in task-service has been cleanup successfully" May 8 00:40:49.282343 containerd[1484]: time="2025-05-08T00:40:49.282105997Z" level=info msg="TearDown network for sandbox \"1a5ec6334fbdcfbfe62e19da00bca8aef58ac57622a4388aeed2a44d18bc94a8\" successfully" May 8 00:40:49.282343 containerd[1484]: time="2025-05-08T00:40:49.282122997Z" level=info msg="StopPodSandbox for \"1a5ec6334fbdcfbfe62e19da00bca8aef58ac57622a4388aeed2a44d18bc94a8\" returns successfully" May 8 00:40:49.282751 kubelet[2686]: E0508 00:40:49.282590 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:49.286973 systemd[1]: run-netns-cni\x2dd5f29ae6\x2dd1f0\x2d1b4a\x2de26f\x2d11752d76291a.mount: Deactivated successfully. May 8 00:40:49.288299 containerd[1484]: time="2025-05-08T00:40:49.288140926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t5sjv,Uid:1d395b40-74ec-4d21-9505-050a6c6424b9,Namespace:kube-system,Attempt:1,}" May 8 00:40:49.296197 kubelet[2686]: I0508 00:40:49.296080 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b24804b59f7550a33579c0cf26310cbe31aa450307391cfd683dc6aeace768b1" May 8 00:40:49.298597 containerd[1484]: time="2025-05-08T00:40:49.298430964Z" level=info msg="StopPodSandbox for \"b24804b59f7550a33579c0cf26310cbe31aa450307391cfd683dc6aeace768b1\"" May 8 00:40:49.300939 containerd[1484]: time="2025-05-08T00:40:49.300916743Z" level=info msg="Ensure that sandbox b24804b59f7550a33579c0cf26310cbe31aa450307391cfd683dc6aeace768b1 in task-service has been cleanup successfully" May 8 00:40:49.303462 containerd[1484]: time="2025-05-08T00:40:49.303443181Z" level=info msg="TearDown network for sandbox \"b24804b59f7550a33579c0cf26310cbe31aa450307391cfd683dc6aeace768b1\" successfully" May 8 00:40:49.303543 containerd[1484]: time="2025-05-08T00:40:49.303529952Z" level=info msg="StopPodSandbox for \"b24804b59f7550a33579c0cf26310cbe31aa450307391cfd683dc6aeace768b1\" returns successfully" May 8 00:40:49.305091 containerd[1484]: time="2025-05-08T00:40:49.305070189Z" level=info msg="StopPodSandbox for \"0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8\"" May 8 00:40:49.305344 containerd[1484]: time="2025-05-08T00:40:49.305233201Z" level=info msg="TearDown network for sandbox \"0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8\" successfully" May 8 00:40:49.305344 containerd[1484]: time="2025-05-08T00:40:49.305247581Z" level=info msg="StopPodSandbox for \"0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8\" returns successfully" May 8 00:40:49.306462 
kubelet[2686]: I0508 00:40:49.306448 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7215bbf85862b33af582b682e67e348d877731a8d7124c168f8bf683231e2b75" May 8 00:40:49.307714 containerd[1484]: time="2025-05-08T00:40:49.305969850Z" level=info msg="StopPodSandbox for \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\"" May 8 00:40:49.309714 containerd[1484]: time="2025-05-08T00:40:49.309348708Z" level=info msg="TearDown network for sandbox \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\" successfully" May 8 00:40:49.309714 containerd[1484]: time="2025-05-08T00:40:49.309364729Z" level=info msg="StopPodSandbox for \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\" returns successfully" May 8 00:40:49.310803 containerd[1484]: time="2025-05-08T00:40:49.310785165Z" level=info msg="StopPodSandbox for \"7215bbf85862b33af582b682e67e348d877731a8d7124c168f8bf683231e2b75\"" May 8 00:40:49.311571 containerd[1484]: time="2025-05-08T00:40:49.311455702Z" level=info msg="Ensure that sandbox 7215bbf85862b33af582b682e67e348d877731a8d7124c168f8bf683231e2b75 in task-service has been cleanup successfully" May 8 00:40:49.311797 containerd[1484]: time="2025-05-08T00:40:49.311778206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q8q6q,Uid:ae949d8a-9850-4b3f-b127-0cc79fb660b3,Namespace:calico-system,Attempt:3,}" May 8 00:40:49.313366 containerd[1484]: time="2025-05-08T00:40:49.313345604Z" level=info msg="TearDown network for sandbox \"7215bbf85862b33af582b682e67e348d877731a8d7124c168f8bf683231e2b75\" successfully" May 8 00:40:49.314576 containerd[1484]: time="2025-05-08T00:40:49.314222684Z" level=info msg="StopPodSandbox for \"7215bbf85862b33af582b682e67e348d877731a8d7124c168f8bf683231e2b75\" returns successfully" May 8 00:40:49.316949 containerd[1484]: time="2025-05-08T00:40:49.316910605Z" level=info msg="StopPodSandbox for \"4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2\"" May 8 00:40:49.317126 containerd[1484]: time="2025-05-08T00:40:49.317097507Z" level=info msg="TearDown network for sandbox \"4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2\" successfully" May 8 00:40:49.317598 containerd[1484]: time="2025-05-08T00:40:49.317116767Z" level=info msg="StopPodSandbox for \"4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2\" returns successfully" May 8 00:40:49.322401 containerd[1484]: time="2025-05-08T00:40:49.322366387Z" level=info msg="StopPodSandbox for \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\"" May 8 00:40:49.322480 containerd[1484]: time="2025-05-08T00:40:49.322453328Z" level=info msg="TearDown network for sandbox \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\" successfully" May 8 00:40:49.322480 containerd[1484]: time="2025-05-08T00:40:49.322471668Z" level=info msg="StopPodSandbox for \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\" returns successfully" May 8 00:40:49.323034 containerd[1484]: time="2025-05-08T00:40:49.323004634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95f5468f8-zknsp,Uid:a2974be7-7581-4fce-a16e-15f650ba010f,Namespace:calico-apiserver,Attempt:3,}" May 8 00:40:49.333504 kubelet[2686]: I0508 00:40:49.331511 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9937260851b563393977e3e75f5403e25c4987dfae8dbf9b01dbb4ee076a77d5" May 8 00:40:49.344353 containerd[1484]: 
time="2025-05-08T00:40:49.344297048Z" level=info msg="StopPodSandbox for \"9937260851b563393977e3e75f5403e25c4987dfae8dbf9b01dbb4ee076a77d5\"" May 8 00:40:49.344600 containerd[1484]: time="2025-05-08T00:40:49.344572651Z" level=info msg="Ensure that sandbox 9937260851b563393977e3e75f5403e25c4987dfae8dbf9b01dbb4ee076a77d5 in task-service has been cleanup successfully" May 8 00:40:49.344825 containerd[1484]: time="2025-05-08T00:40:49.344790214Z" level=info msg="TearDown network for sandbox \"9937260851b563393977e3e75f5403e25c4987dfae8dbf9b01dbb4ee076a77d5\" successfully" May 8 00:40:49.344862 containerd[1484]: time="2025-05-08T00:40:49.344836974Z" level=info msg="StopPodSandbox for \"9937260851b563393977e3e75f5403e25c4987dfae8dbf9b01dbb4ee076a77d5\" returns successfully" May 8 00:40:49.345433 containerd[1484]: time="2025-05-08T00:40:49.345183577Z" level=info msg="StopPodSandbox for \"0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b\"" May 8 00:40:49.346371 containerd[1484]: time="2025-05-08T00:40:49.346344801Z" level=info msg="TearDown network for sandbox \"0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b\" successfully" May 8 00:40:49.346371 containerd[1484]: time="2025-05-08T00:40:49.346364611Z" level=info msg="StopPodSandbox for \"0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b\" returns successfully" May 8 00:40:49.347528 containerd[1484]: time="2025-05-08T00:40:49.347497184Z" level=info msg="StopPodSandbox for \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\"" May 8 00:40:49.347626 containerd[1484]: time="2025-05-08T00:40:49.347580225Z" level=info msg="TearDown network for sandbox \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\" successfully" May 8 00:40:49.347626 containerd[1484]: time="2025-05-08T00:40:49.347619346Z" level=info msg="StopPodSandbox for \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\" returns successfully" May 8 00:40:49.347975 containerd[1484]: time="2025-05-08T00:40:49.347949020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95f5468f8-vgm4z,Uid:e43f4851-92c0-4238-8905-f3f57d62dc20,Namespace:calico-apiserver,Attempt:3,}" May 8 00:40:49.480082 containerd[1484]: time="2025-05-08T00:40:49.480039287Z" level=error msg="Failed to destroy network for sandbox \"0eefbbe621b98481a6bdbdf5f700a1b6a475fd2b2417eb76c79d6968bd1144ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:49.481949 containerd[1484]: time="2025-05-08T00:40:49.481922970Z" level=error msg="encountered an error cleaning up failed sandbox \"0eefbbe621b98481a6bdbdf5f700a1b6a475fd2b2417eb76c79d6968bd1144ac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:49.482184 containerd[1484]: time="2025-05-08T00:40:49.482151402Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66457cb4b-4cpwk,Uid:210d2f6d-8bde-4f98-93d8-48808afe079f,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0eefbbe621b98481a6bdbdf5f700a1b6a475fd2b2417eb76c79d6968bd1144ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" May 8 00:40:49.483370 kubelet[2686]: E0508 00:40:49.483335 2686 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0eefbbe621b98481a6bdbdf5f700a1b6a475fd2b2417eb76c79d6968bd1144ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:49.483499 kubelet[2686]: E0508 00:40:49.483479 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0eefbbe621b98481a6bdbdf5f700a1b6a475fd2b2417eb76c79d6968bd1144ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66457cb4b-4cpwk" May 8 00:40:49.483560 kubelet[2686]: E0508 00:40:49.483546 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0eefbbe621b98481a6bdbdf5f700a1b6a475fd2b2417eb76c79d6968bd1144ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66457cb4b-4cpwk" May 8 00:40:49.483685 kubelet[2686]: E0508 00:40:49.483650 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66457cb4b-4cpwk_calico-system(210d2f6d-8bde-4f98-93d8-48808afe079f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66457cb4b-4cpwk_calico-system(210d2f6d-8bde-4f98-93d8-48808afe079f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0eefbbe621b98481a6bdbdf5f700a1b6a475fd2b2417eb76c79d6968bd1144ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66457cb4b-4cpwk" podUID="210d2f6d-8bde-4f98-93d8-48808afe079f" May 8 00:40:49.495405 containerd[1484]: time="2025-05-08T00:40:49.495381113Z" level=error msg="Failed to destroy network for sandbox \"0363d114f0850147b7c02766128fd397cd3e23f0fff675e538955447bd7de052\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:49.496732 containerd[1484]: time="2025-05-08T00:40:49.496712388Z" level=error msg="Failed to destroy network for sandbox \"254038beb8ae5d079f77bb4cc30649336b0ab3f4ded1b86b5b65e35d4a85d7c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:49.497300 containerd[1484]: time="2025-05-08T00:40:49.497279634Z" level=error msg="encountered an error cleaning up failed sandbox \"0363d114f0850147b7c02766128fd397cd3e23f0fff675e538955447bd7de052\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" May 8 00:40:49.497441 containerd[1484]: time="2025-05-08T00:40:49.497421266Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ndt8s,Uid:b968d45f-0186-4bf1-af0b-3789d578367b,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"0363d114f0850147b7c02766128fd397cd3e23f0fff675e538955447bd7de052\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:49.498067 kubelet[2686]: E0508 00:40:49.498044 2686 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0363d114f0850147b7c02766128fd397cd3e23f0fff675e538955447bd7de052\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:49.498167 kubelet[2686]: E0508 00:40:49.498151 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0363d114f0850147b7c02766128fd397cd3e23f0fff675e538955447bd7de052\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ndt8s" May 8 00:40:49.498299 kubelet[2686]: E0508 00:40:49.498282 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0363d114f0850147b7c02766128fd397cd3e23f0fff675e538955447bd7de052\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ndt8s" May 8 00:40:49.498435 kubelet[2686]: E0508 00:40:49.498402 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-ndt8s_kube-system(b968d45f-0186-4bf1-af0b-3789d578367b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-ndt8s_kube-system(b968d45f-0186-4bf1-af0b-3789d578367b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0363d114f0850147b7c02766128fd397cd3e23f0fff675e538955447bd7de052\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-ndt8s" podUID="b968d45f-0186-4bf1-af0b-3789d578367b" May 8 00:40:49.500826 containerd[1484]: time="2025-05-08T00:40:49.500802905Z" level=error msg="encountered an error cleaning up failed sandbox \"254038beb8ae5d079f77bb4cc30649336b0ab3f4ded1b86b5b65e35d4a85d7c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:49.500929 containerd[1484]: time="2025-05-08T00:40:49.500906956Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t5sjv,Uid:1d395b40-74ec-4d21-9505-050a6c6424b9,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox 
\"254038beb8ae5d079f77bb4cc30649336b0ab3f4ded1b86b5b65e35d4a85d7c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:49.501537 kubelet[2686]: E0508 00:40:49.501440 2686 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"254038beb8ae5d079f77bb4cc30649336b0ab3f4ded1b86b5b65e35d4a85d7c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:49.501537 kubelet[2686]: E0508 00:40:49.501473 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"254038beb8ae5d079f77bb4cc30649336b0ab3f4ded1b86b5b65e35d4a85d7c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-t5sjv" May 8 00:40:49.501537 kubelet[2686]: E0508 00:40:49.501489 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"254038beb8ae5d079f77bb4cc30649336b0ab3f4ded1b86b5b65e35d4a85d7c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-t5sjv" May 8 00:40:49.501746 kubelet[2686]: E0508 00:40:49.501653 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-t5sjv_kube-system(1d395b40-74ec-4d21-9505-050a6c6424b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-t5sjv_kube-system(1d395b40-74ec-4d21-9505-050a6c6424b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"254038beb8ae5d079f77bb4cc30649336b0ab3f4ded1b86b5b65e35d4a85d7c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-t5sjv" podUID="1d395b40-74ec-4d21-9505-050a6c6424b9" May 8 00:40:49.512857 containerd[1484]: time="2025-05-08T00:40:49.512810312Z" level=error msg="Failed to destroy network for sandbox \"630007c92a4d1ebcdf28586d606b3e4f216eec5ec510cadc3cc9d562df67cf0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:49.513449 containerd[1484]: time="2025-05-08T00:40:49.513324068Z" level=error msg="Failed to destroy network for sandbox \"f0296a803419be9d1d07c49fe0f38383fa7b4a6358bd79cd42ebf747d905f976\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:49.515505 containerd[1484]: time="2025-05-08T00:40:49.515269710Z" level=error msg="encountered an error cleaning up failed sandbox \"f0296a803419be9d1d07c49fe0f38383fa7b4a6358bd79cd42ebf747d905f976\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:49.515505 containerd[1484]: time="2025-05-08T00:40:49.515324951Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q8q6q,Uid:ae949d8a-9850-4b3f-b127-0cc79fb660b3,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"f0296a803419be9d1d07c49fe0f38383fa7b4a6358bd79cd42ebf747d905f976\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:49.516330 kubelet[2686]: E0508 00:40:49.515648 2686 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0296a803419be9d1d07c49fe0f38383fa7b4a6358bd79cd42ebf747d905f976\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:49.516330 kubelet[2686]: E0508 00:40:49.515677 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0296a803419be9d1d07c49fe0f38383fa7b4a6358bd79cd42ebf747d905f976\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q8q6q" May 8 00:40:49.516330 kubelet[2686]: E0508 00:40:49.515692 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0296a803419be9d1d07c49fe0f38383fa7b4a6358bd79cd42ebf747d905f976\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q8q6q" May 8 00:40:49.516423 containerd[1484]: time="2025-05-08T00:40:49.515742686Z" level=error msg="encountered an error cleaning up failed sandbox \"630007c92a4d1ebcdf28586d606b3e4f216eec5ec510cadc3cc9d562df67cf0a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:49.516423 containerd[1484]: time="2025-05-08T00:40:49.515797586Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95f5468f8-zknsp,Uid:a2974be7-7581-4fce-a16e-15f650ba010f,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"630007c92a4d1ebcdf28586d606b3e4f216eec5ec510cadc3cc9d562df67cf0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:49.516495 kubelet[2686]: E0508 00:40:49.515715 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-q8q6q_calico-system(ae949d8a-9850-4b3f-b127-0cc79fb660b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-q8q6q_calico-system(ae949d8a-9850-4b3f-b127-0cc79fb660b3)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"f0296a803419be9d1d07c49fe0f38383fa7b4a6358bd79cd42ebf747d905f976\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q8q6q" podUID="ae949d8a-9850-4b3f-b127-0cc79fb660b3" May 8 00:40:49.516495 kubelet[2686]: E0508 00:40:49.515962 2686 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"630007c92a4d1ebcdf28586d606b3e4f216eec5ec510cadc3cc9d562df67cf0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:49.516495 kubelet[2686]: E0508 00:40:49.516019 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"630007c92a4d1ebcdf28586d606b3e4f216eec5ec510cadc3cc9d562df67cf0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95f5468f8-zknsp" May 8 00:40:49.516575 kubelet[2686]: E0508 00:40:49.516039 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"630007c92a4d1ebcdf28586d606b3e4f216eec5ec510cadc3cc9d562df67cf0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95f5468f8-zknsp" May 8 00:40:49.516575 kubelet[2686]: E0508 00:40:49.516076 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-95f5468f8-zknsp_calico-apiserver(a2974be7-7581-4fce-a16e-15f650ba010f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-95f5468f8-zknsp_calico-apiserver(a2974be7-7581-4fce-a16e-15f650ba010f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"630007c92a4d1ebcdf28586d606b3e4f216eec5ec510cadc3cc9d562df67cf0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-95f5468f8-zknsp" podUID="a2974be7-7581-4fce-a16e-15f650ba010f" May 8 00:40:49.526074 containerd[1484]: time="2025-05-08T00:40:49.525855001Z" level=error msg="Failed to destroy network for sandbox \"39c46626994dee89b35579716b77678b6012b531c8188c994c37595bb6d31d93\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:49.526444 containerd[1484]: time="2025-05-08T00:40:49.526404577Z" level=error msg="encountered an error cleaning up failed sandbox \"39c46626994dee89b35579716b77678b6012b531c8188c994c37595bb6d31d93\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:49.526479 containerd[1484]: 
time="2025-05-08T00:40:49.526456928Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95f5468f8-vgm4z,Uid:e43f4851-92c0-4238-8905-f3f57d62dc20,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"39c46626994dee89b35579716b77678b6012b531c8188c994c37595bb6d31d93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:49.526982 kubelet[2686]: E0508 00:40:49.526689 2686 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39c46626994dee89b35579716b77678b6012b531c8188c994c37595bb6d31d93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:49.526982 kubelet[2686]: E0508 00:40:49.526881 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39c46626994dee89b35579716b77678b6012b531c8188c994c37595bb6d31d93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95f5468f8-vgm4z" May 8 00:40:49.526982 kubelet[2686]: E0508 00:40:49.526898 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39c46626994dee89b35579716b77678b6012b531c8188c994c37595bb6d31d93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95f5468f8-vgm4z" May 8 00:40:49.527063 kubelet[2686]: E0508 00:40:49.526936 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-95f5468f8-vgm4z_calico-apiserver(e43f4851-92c0-4238-8905-f3f57d62dc20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-95f5468f8-vgm4z_calico-apiserver(e43f4851-92c0-4238-8905-f3f57d62dc20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39c46626994dee89b35579716b77678b6012b531c8188c994c37595bb6d31d93\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-95f5468f8-vgm4z" podUID="e43f4851-92c0-4238-8905-f3f57d62dc20" May 8 00:40:49.674157 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0eefbbe621b98481a6bdbdf5f700a1b6a475fd2b2417eb76c79d6968bd1144ac-shm.mount: Deactivated successfully. May 8 00:40:49.674400 systemd[1]: run-netns-cni\x2da5dee573\x2de0eb\x2df20b\x2d4c3c\x2df8202c80741e.mount: Deactivated successfully. May 8 00:40:49.674546 systemd[1]: run-netns-cni\x2da6d964c6\x2de006\x2daed6\x2d4af5\x2d0d0d3a421307.mount: Deactivated successfully. May 8 00:40:49.674680 systemd[1]: run-netns-cni\x2dbfb67f8b\x2d8ccb\x2db556\x2d8afe\x2de6153b9a36aa.mount: Deactivated successfully. 
May 8 00:40:50.335980 kubelet[2686]: I0508 00:40:50.335952 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="630007c92a4d1ebcdf28586d606b3e4f216eec5ec510cadc3cc9d562df67cf0a" May 8 00:40:50.339226 containerd[1484]: time="2025-05-08T00:40:50.337119664Z" level=info msg="StopPodSandbox for \"630007c92a4d1ebcdf28586d606b3e4f216eec5ec510cadc3cc9d562df67cf0a\"" May 8 00:40:50.339226 containerd[1484]: time="2025-05-08T00:40:50.337331787Z" level=info msg="Ensure that sandbox 630007c92a4d1ebcdf28586d606b3e4f216eec5ec510cadc3cc9d562df67cf0a in task-service has been cleanup successfully" May 8 00:40:50.339913 containerd[1484]: time="2025-05-08T00:40:50.339845144Z" level=info msg="TearDown network for sandbox \"630007c92a4d1ebcdf28586d606b3e4f216eec5ec510cadc3cc9d562df67cf0a\" successfully" May 8 00:40:50.339913 containerd[1484]: time="2025-05-08T00:40:50.339862284Z" level=info msg="StopPodSandbox for \"630007c92a4d1ebcdf28586d606b3e4f216eec5ec510cadc3cc9d562df67cf0a\" returns successfully" May 8 00:40:50.340443 systemd[1]: run-netns-cni\x2d6bac8793\x2d5920\x2d5ad4\x2d83dd\x2d22bfe208c30c.mount: Deactivated successfully. May 8 00:40:50.342046 containerd[1484]: time="2025-05-08T00:40:50.342009306Z" level=info msg="StopPodSandbox for \"7215bbf85862b33af582b682e67e348d877731a8d7124c168f8bf683231e2b75\"" May 8 00:40:50.342847 containerd[1484]: time="2025-05-08T00:40:50.342761955Z" level=info msg="TearDown network for sandbox \"7215bbf85862b33af582b682e67e348d877731a8d7124c168f8bf683231e2b75\" successfully" May 8 00:40:50.342847 containerd[1484]: time="2025-05-08T00:40:50.342777945Z" level=info msg="StopPodSandbox for \"7215bbf85862b33af582b682e67e348d877731a8d7124c168f8bf683231e2b75\" returns successfully" May 8 00:40:50.343231 containerd[1484]: time="2025-05-08T00:40:50.343102498Z" level=info msg="StopPodSandbox for \"4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2\"" May 8 00:40:50.343231 containerd[1484]: time="2025-05-08T00:40:50.343178769Z" level=info msg="TearDown network for sandbox \"4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2\" successfully" May 8 00:40:50.343231 containerd[1484]: time="2025-05-08T00:40:50.343187789Z" level=info msg="StopPodSandbox for \"4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2\" returns successfully" May 8 00:40:50.344334 containerd[1484]: time="2025-05-08T00:40:50.344293091Z" level=info msg="StopPodSandbox for \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\"" May 8 00:40:50.344498 containerd[1484]: time="2025-05-08T00:40:50.344452432Z" level=info msg="TearDown network for sandbox \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\" successfully" May 8 00:40:50.344635 containerd[1484]: time="2025-05-08T00:40:50.344548914Z" level=info msg="StopPodSandbox for \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\" returns successfully" May 8 00:40:50.345708 containerd[1484]: time="2025-05-08T00:40:50.345427803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95f5468f8-zknsp,Uid:a2974be7-7581-4fce-a16e-15f650ba010f,Namespace:calico-apiserver,Attempt:4,}" May 8 00:40:50.346457 kubelet[2686]: I0508 00:40:50.346425 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39c46626994dee89b35579716b77678b6012b531c8188c994c37595bb6d31d93" May 8 00:40:50.346932 containerd[1484]: time="2025-05-08T00:40:50.346914879Z" level=info msg="StopPodSandbox for 
\"39c46626994dee89b35579716b77678b6012b531c8188c994c37595bb6d31d93\"" May 8 00:40:50.347353 containerd[1484]: time="2025-05-08T00:40:50.347335953Z" level=info msg="Ensure that sandbox 39c46626994dee89b35579716b77678b6012b531c8188c994c37595bb6d31d93 in task-service has been cleanup successfully" May 8 00:40:50.348492 containerd[1484]: time="2025-05-08T00:40:50.347831929Z" level=info msg="TearDown network for sandbox \"39c46626994dee89b35579716b77678b6012b531c8188c994c37595bb6d31d93\" successfully" May 8 00:40:50.348813 containerd[1484]: time="2025-05-08T00:40:50.348780279Z" level=info msg="StopPodSandbox for \"39c46626994dee89b35579716b77678b6012b531c8188c994c37595bb6d31d93\" returns successfully" May 8 00:40:50.350121 systemd[1]: run-netns-cni\x2d3b05aa42\x2dc17f\x2d6a0f\x2d0523\x2dea7b5d730d92.mount: Deactivated successfully. May 8 00:40:50.350847 containerd[1484]: time="2025-05-08T00:40:50.350829931Z" level=info msg="StopPodSandbox for \"9937260851b563393977e3e75f5403e25c4987dfae8dbf9b01dbb4ee076a77d5\"" May 8 00:40:50.351745 containerd[1484]: time="2025-05-08T00:40:50.351012602Z" level=info msg="TearDown network for sandbox \"9937260851b563393977e3e75f5403e25c4987dfae8dbf9b01dbb4ee076a77d5\" successfully" May 8 00:40:50.351745 containerd[1484]: time="2025-05-08T00:40:50.351026162Z" level=info msg="StopPodSandbox for \"9937260851b563393977e3e75f5403e25c4987dfae8dbf9b01dbb4ee076a77d5\" returns successfully" May 8 00:40:50.353146 containerd[1484]: time="2025-05-08T00:40:50.353080044Z" level=info msg="StopPodSandbox for \"0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b\"" May 8 00:40:50.353405 containerd[1484]: time="2025-05-08T00:40:50.353294026Z" level=info msg="TearDown network for sandbox \"0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b\" successfully" May 8 00:40:50.353405 containerd[1484]: time="2025-05-08T00:40:50.353319507Z" level=info msg="StopPodSandbox for \"0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b\" returns successfully" May 8 00:40:50.354030 containerd[1484]: time="2025-05-08T00:40:50.354011804Z" level=info msg="StopPodSandbox for \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\"" May 8 00:40:50.354273 containerd[1484]: time="2025-05-08T00:40:50.354257637Z" level=info msg="TearDown network for sandbox \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\" successfully" May 8 00:40:50.355763 containerd[1484]: time="2025-05-08T00:40:50.354332217Z" level=info msg="StopPodSandbox for \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\" returns successfully" May 8 00:40:50.355763 containerd[1484]: time="2025-05-08T00:40:50.355597041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95f5468f8-vgm4z,Uid:e43f4851-92c0-4238-8905-f3f57d62dc20,Namespace:calico-apiserver,Attempt:4,}" May 8 00:40:50.355838 kubelet[2686]: I0508 00:40:50.355616 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="254038beb8ae5d079f77bb4cc30649336b0ab3f4ded1b86b5b65e35d4a85d7c4" May 8 00:40:50.356162 containerd[1484]: time="2025-05-08T00:40:50.356146387Z" level=info msg="StopPodSandbox for \"254038beb8ae5d079f77bb4cc30649336b0ab3f4ded1b86b5b65e35d4a85d7c4\"" May 8 00:40:50.356523 containerd[1484]: time="2025-05-08T00:40:50.356402490Z" level=info msg="Ensure that sandbox 254038beb8ae5d079f77bb4cc30649336b0ab3f4ded1b86b5b65e35d4a85d7c4 in task-service has been cleanup successfully" May 8 00:40:50.356613 containerd[1484]: 
time="2025-05-08T00:40:50.356597782Z" level=info msg="TearDown network for sandbox \"254038beb8ae5d079f77bb4cc30649336b0ab3f4ded1b86b5b65e35d4a85d7c4\" successfully" May 8 00:40:50.356825 containerd[1484]: time="2025-05-08T00:40:50.356666483Z" level=info msg="StopPodSandbox for \"254038beb8ae5d079f77bb4cc30649336b0ab3f4ded1b86b5b65e35d4a85d7c4\" returns successfully" May 8 00:40:50.358654 systemd[1]: run-netns-cni\x2d9e8a9bbf\x2dad19\x2d6f90\x2dd19f\x2d56134211b0d1.mount: Deactivated successfully. May 8 00:40:50.360363 containerd[1484]: time="2025-05-08T00:40:50.360143499Z" level=info msg="StopPodSandbox for \"1a5ec6334fbdcfbfe62e19da00bca8aef58ac57622a4388aeed2a44d18bc94a8\"" May 8 00:40:50.360363 containerd[1484]: time="2025-05-08T00:40:50.360236340Z" level=info msg="TearDown network for sandbox \"1a5ec6334fbdcfbfe62e19da00bca8aef58ac57622a4388aeed2a44d18bc94a8\" successfully" May 8 00:40:50.360363 containerd[1484]: time="2025-05-08T00:40:50.360246950Z" level=info msg="StopPodSandbox for \"1a5ec6334fbdcfbfe62e19da00bca8aef58ac57622a4388aeed2a44d18bc94a8\" returns successfully" May 8 00:40:50.360516 kubelet[2686]: E0508 00:40:50.360494 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:50.362111 containerd[1484]: time="2025-05-08T00:40:50.362092540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t5sjv,Uid:1d395b40-74ec-4d21-9505-050a6c6424b9,Namespace:kube-system,Attempt:2,}" May 8 00:40:50.371001 kubelet[2686]: I0508 00:40:50.370261 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0eefbbe621b98481a6bdbdf5f700a1b6a475fd2b2417eb76c79d6968bd1144ac" May 8 00:40:50.371331 containerd[1484]: time="2025-05-08T00:40:50.371312438Z" level=info msg="StopPodSandbox for \"0eefbbe621b98481a6bdbdf5f700a1b6a475fd2b2417eb76c79d6968bd1144ac\"" May 8 00:40:50.371509 containerd[1484]: time="2025-05-08T00:40:50.371493900Z" level=info msg="Ensure that sandbox 0eefbbe621b98481a6bdbdf5f700a1b6a475fd2b2417eb76c79d6968bd1144ac in task-service has been cleanup successfully" May 8 00:40:50.371759 containerd[1484]: time="2025-05-08T00:40:50.371744473Z" level=info msg="TearDown network for sandbox \"0eefbbe621b98481a6bdbdf5f700a1b6a475fd2b2417eb76c79d6968bd1144ac\" successfully" May 8 00:40:50.372072 containerd[1484]: time="2025-05-08T00:40:50.372055486Z" level=info msg="StopPodSandbox for \"0eefbbe621b98481a6bdbdf5f700a1b6a475fd2b2417eb76c79d6968bd1144ac\" returns successfully" May 8 00:40:50.374416 containerd[1484]: time="2025-05-08T00:40:50.374397961Z" level=info msg="StopPodSandbox for \"8cae29a8a2767ed20610c206868847aaaf1eeec42ad83cc21a9adb569e1822b1\"" May 8 00:40:50.374625 containerd[1484]: time="2025-05-08T00:40:50.374610384Z" level=info msg="TearDown network for sandbox \"8cae29a8a2767ed20610c206868847aaaf1eeec42ad83cc21a9adb569e1822b1\" successfully" May 8 00:40:50.374676 containerd[1484]: time="2025-05-08T00:40:50.374664574Z" level=info msg="StopPodSandbox for \"8cae29a8a2767ed20610c206868847aaaf1eeec42ad83cc21a9adb569e1822b1\" returns successfully" May 8 00:40:50.376557 kubelet[2686]: I0508 00:40:50.376542 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0363d114f0850147b7c02766128fd397cd3e23f0fff675e538955447bd7de052" May 8 00:40:50.377461 containerd[1484]: time="2025-05-08T00:40:50.377408413Z" level=info msg="StopPodSandbox for 
\"fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617\"" May 8 00:40:50.378519 containerd[1484]: time="2025-05-08T00:40:50.378501515Z" level=info msg="TearDown network for sandbox \"fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617\" successfully" May 8 00:40:50.378670 containerd[1484]: time="2025-05-08T00:40:50.378649987Z" level=info msg="StopPodSandbox for \"fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617\" returns successfully" May 8 00:40:50.379767 containerd[1484]: time="2025-05-08T00:40:50.379729888Z" level=info msg="StopPodSandbox for \"0363d114f0850147b7c02766128fd397cd3e23f0fff675e538955447bd7de052\"" May 8 00:40:50.381670 containerd[1484]: time="2025-05-08T00:40:50.381634898Z" level=info msg="Ensure that sandbox 0363d114f0850147b7c02766128fd397cd3e23f0fff675e538955447bd7de052 in task-service has been cleanup successfully" May 8 00:40:50.381934 containerd[1484]: time="2025-05-08T00:40:50.381913742Z" level=info msg="TearDown network for sandbox \"0363d114f0850147b7c02766128fd397cd3e23f0fff675e538955447bd7de052\" successfully" May 8 00:40:50.382006 containerd[1484]: time="2025-05-08T00:40:50.381994022Z" level=info msg="StopPodSandbox for \"0363d114f0850147b7c02766128fd397cd3e23f0fff675e538955447bd7de052\" returns successfully" May 8 00:40:50.386807 containerd[1484]: time="2025-05-08T00:40:50.386770283Z" level=info msg="StopPodSandbox for \"7cea920b458187df49fda92091eea31ad48f4aca2c9ceb6a5427ed042164e7e5\"" May 8 00:40:50.386883 containerd[1484]: time="2025-05-08T00:40:50.386859244Z" level=info msg="TearDown network for sandbox \"7cea920b458187df49fda92091eea31ad48f4aca2c9ceb6a5427ed042164e7e5\" successfully" May 8 00:40:50.386883 containerd[1484]: time="2025-05-08T00:40:50.386876474Z" level=info msg="StopPodSandbox for \"7cea920b458187df49fda92091eea31ad48f4aca2c9ceb6a5427ed042164e7e5\" returns successfully" May 8 00:40:50.386952 containerd[1484]: time="2025-05-08T00:40:50.386931845Z" level=info msg="StopPodSandbox for \"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\"" May 8 00:40:50.387489 containerd[1484]: time="2025-05-08T00:40:50.386999215Z" level=info msg="TearDown network for sandbox \"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\" successfully" May 8 00:40:50.387489 containerd[1484]: time="2025-05-08T00:40:50.387014185Z" level=info msg="StopPodSandbox for \"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\" returns successfully" May 8 00:40:50.388014 kubelet[2686]: E0508 00:40:50.387999 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:50.392178 containerd[1484]: time="2025-05-08T00:40:50.392157440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66457cb4b-4cpwk,Uid:210d2f6d-8bde-4f98-93d8-48808afe079f,Namespace:calico-system,Attempt:4,}" May 8 00:40:50.392740 containerd[1484]: time="2025-05-08T00:40:50.392700916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ndt8s,Uid:b968d45f-0186-4bf1-af0b-3789d578367b,Namespace:kube-system,Attempt:2,}" May 8 00:40:50.395323 kubelet[2686]: I0508 00:40:50.395301 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0296a803419be9d1d07c49fe0f38383fa7b4a6358bd79cd42ebf747d905f976" May 8 00:40:50.398463 containerd[1484]: time="2025-05-08T00:40:50.398057423Z" level=info msg="StopPodSandbox 
for \"f0296a803419be9d1d07c49fe0f38383fa7b4a6358bd79cd42ebf747d905f976\"" May 8 00:40:50.398627 containerd[1484]: time="2025-05-08T00:40:50.398601299Z" level=info msg="Ensure that sandbox f0296a803419be9d1d07c49fe0f38383fa7b4a6358bd79cd42ebf747d905f976 in task-service has been cleanup successfully" May 8 00:40:50.399044 containerd[1484]: time="2025-05-08T00:40:50.399025823Z" level=info msg="TearDown network for sandbox \"f0296a803419be9d1d07c49fe0f38383fa7b4a6358bd79cd42ebf747d905f976\" successfully" May 8 00:40:50.399104 containerd[1484]: time="2025-05-08T00:40:50.399092034Z" level=info msg="StopPodSandbox for \"f0296a803419be9d1d07c49fe0f38383fa7b4a6358bd79cd42ebf747d905f976\" returns successfully" May 8 00:40:50.400881 containerd[1484]: time="2025-05-08T00:40:50.400855473Z" level=info msg="StopPodSandbox for \"b24804b59f7550a33579c0cf26310cbe31aa450307391cfd683dc6aeace768b1\"" May 8 00:40:50.400950 containerd[1484]: time="2025-05-08T00:40:50.400929084Z" level=info msg="TearDown network for sandbox \"b24804b59f7550a33579c0cf26310cbe31aa450307391cfd683dc6aeace768b1\" successfully" May 8 00:40:50.400950 containerd[1484]: time="2025-05-08T00:40:50.400943314Z" level=info msg="StopPodSandbox for \"b24804b59f7550a33579c0cf26310cbe31aa450307391cfd683dc6aeace768b1\" returns successfully" May 8 00:40:50.402283 containerd[1484]: time="2025-05-08T00:40:50.402179487Z" level=info msg="StopPodSandbox for \"0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8\"" May 8 00:40:50.403434 containerd[1484]: time="2025-05-08T00:40:50.403362809Z" level=info msg="TearDown network for sandbox \"0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8\" successfully" May 8 00:40:50.404083 containerd[1484]: time="2025-05-08T00:40:50.404061576Z" level=info msg="StopPodSandbox for \"0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8\" returns successfully" May 8 00:40:50.405840 containerd[1484]: time="2025-05-08T00:40:50.405812616Z" level=info msg="StopPodSandbox for \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\"" May 8 00:40:50.406097 containerd[1484]: time="2025-05-08T00:40:50.406077728Z" level=info msg="TearDown network for sandbox \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\" successfully" May 8 00:40:50.406242 containerd[1484]: time="2025-05-08T00:40:50.406095068Z" level=info msg="StopPodSandbox for \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\" returns successfully" May 8 00:40:50.409197 containerd[1484]: time="2025-05-08T00:40:50.409177971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q8q6q,Uid:ae949d8a-9850-4b3f-b127-0cc79fb660b3,Namespace:calico-system,Attempt:4,}" May 8 00:40:50.584811 containerd[1484]: time="2025-05-08T00:40:50.584765440Z" level=error msg="Failed to destroy network for sandbox \"ec6ff8a70cbba35fa6e95e096678c2bdd0d7d5f80a4f60615af90b7503e5e0cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:50.585070 containerd[1484]: time="2025-05-08T00:40:50.584952453Z" level=error msg="Failed to destroy network for sandbox \"6dfad420ea9956b0a4b0082dddb85016cd6d1d8d863452fcd9fd2431dcb13dbd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:50.586315 containerd[1484]: 
time="2025-05-08T00:40:50.585955653Z" level=error msg="encountered an error cleaning up failed sandbox \"6dfad420ea9956b0a4b0082dddb85016cd6d1d8d863452fcd9fd2431dcb13dbd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:50.586315 containerd[1484]: time="2025-05-08T00:40:50.586012413Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95f5468f8-zknsp,Uid:a2974be7-7581-4fce-a16e-15f650ba010f,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"6dfad420ea9956b0a4b0082dddb85016cd6d1d8d863452fcd9fd2431dcb13dbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:50.586413 kubelet[2686]: E0508 00:40:50.586283 2686 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dfad420ea9956b0a4b0082dddb85016cd6d1d8d863452fcd9fd2431dcb13dbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:50.586413 kubelet[2686]: E0508 00:40:50.586337 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dfad420ea9956b0a4b0082dddb85016cd6d1d8d863452fcd9fd2431dcb13dbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95f5468f8-zknsp" May 8 00:40:50.586413 kubelet[2686]: E0508 00:40:50.586357 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dfad420ea9956b0a4b0082dddb85016cd6d1d8d863452fcd9fd2431dcb13dbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95f5468f8-zknsp" May 8 00:40:50.587583 kubelet[2686]: E0508 00:40:50.586676 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-95f5468f8-zknsp_calico-apiserver(a2974be7-7581-4fce-a16e-15f650ba010f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-95f5468f8-zknsp_calico-apiserver(a2974be7-7581-4fce-a16e-15f650ba010f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6dfad420ea9956b0a4b0082dddb85016cd6d1d8d863452fcd9fd2431dcb13dbd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-95f5468f8-zknsp" podUID="a2974be7-7581-4fce-a16e-15f650ba010f" May 8 00:40:50.588476 containerd[1484]: time="2025-05-08T00:40:50.588355888Z" level=error msg="encountered an error cleaning up failed sandbox \"ec6ff8a70cbba35fa6e95e096678c2bdd0d7d5f80a4f60615af90b7503e5e0cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:50.588569 containerd[1484]: time="2025-05-08T00:40:50.588550271Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t5sjv,Uid:1d395b40-74ec-4d21-9505-050a6c6424b9,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"ec6ff8a70cbba35fa6e95e096678c2bdd0d7d5f80a4f60615af90b7503e5e0cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:50.590714 kubelet[2686]: E0508 00:40:50.590685 2686 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec6ff8a70cbba35fa6e95e096678c2bdd0d7d5f80a4f60615af90b7503e5e0cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:50.590781 kubelet[2686]: E0508 00:40:50.590725 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec6ff8a70cbba35fa6e95e096678c2bdd0d7d5f80a4f60615af90b7503e5e0cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-t5sjv" May 8 00:40:50.590781 kubelet[2686]: E0508 00:40:50.590745 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec6ff8a70cbba35fa6e95e096678c2bdd0d7d5f80a4f60615af90b7503e5e0cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-t5sjv" May 8 00:40:50.590839 kubelet[2686]: E0508 00:40:50.590774 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-t5sjv_kube-system(1d395b40-74ec-4d21-9505-050a6c6424b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-t5sjv_kube-system(1d395b40-74ec-4d21-9505-050a6c6424b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec6ff8a70cbba35fa6e95e096678c2bdd0d7d5f80a4f60615af90b7503e5e0cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-t5sjv" podUID="1d395b40-74ec-4d21-9505-050a6c6424b9" May 8 00:40:50.659685 containerd[1484]: time="2025-05-08T00:40:50.659580207Z" level=error msg="Failed to destroy network for sandbox \"e33b4a4dc6d80c3a1fde4dc5c0d78c0176524e85e8d04cad28e813b7e768ec5b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:50.660954 containerd[1484]: time="2025-05-08T00:40:50.660684788Z" level=error msg="encountered an error cleaning up failed sandbox \"e33b4a4dc6d80c3a1fde4dc5c0d78c0176524e85e8d04cad28e813b7e768ec5b\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:50.662377 containerd[1484]: time="2025-05-08T00:40:50.660930471Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95f5468f8-vgm4z,Uid:e43f4851-92c0-4238-8905-f3f57d62dc20,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"e33b4a4dc6d80c3a1fde4dc5c0d78c0176524e85e8d04cad28e813b7e768ec5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:50.663048 kubelet[2686]: E0508 00:40:50.662913 2686 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e33b4a4dc6d80c3a1fde4dc5c0d78c0176524e85e8d04cad28e813b7e768ec5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:50.663048 kubelet[2686]: E0508 00:40:50.662992 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e33b4a4dc6d80c3a1fde4dc5c0d78c0176524e85e8d04cad28e813b7e768ec5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95f5468f8-vgm4z" May 8 00:40:50.663048 kubelet[2686]: E0508 00:40:50.663012 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e33b4a4dc6d80c3a1fde4dc5c0d78c0176524e85e8d04cad28e813b7e768ec5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95f5468f8-vgm4z" May 8 00:40:50.663689 kubelet[2686]: E0508 00:40:50.663464 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-95f5468f8-vgm4z_calico-apiserver(e43f4851-92c0-4238-8905-f3f57d62dc20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-95f5468f8-vgm4z_calico-apiserver(e43f4851-92c0-4238-8905-f3f57d62dc20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e33b4a4dc6d80c3a1fde4dc5c0d78c0176524e85e8d04cad28e813b7e768ec5b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-95f5468f8-vgm4z" podUID="e43f4851-92c0-4238-8905-f3f57d62dc20" May 8 00:40:50.680012 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6dfad420ea9956b0a4b0082dddb85016cd6d1d8d863452fcd9fd2431dcb13dbd-shm.mount: Deactivated successfully. May 8 00:40:50.681246 systemd[1]: run-netns-cni\x2ddfbc076d\x2deaec\x2deee8\x2df8ca\x2d0ced5588bfd0.mount: Deactivated successfully. May 8 00:40:50.681325 systemd[1]: run-netns-cni\x2d1f2b6b3d\x2dd170\x2d407e\x2db702\x2dfc9be6ed36fb.mount: Deactivated successfully. 
May 8 00:40:50.681383 systemd[1]: run-netns-cni\x2d8822de51\x2da7be\x2d1f51\x2dd9db\x2dd24d76855d58.mount: Deactivated successfully. May 8 00:40:50.690261 containerd[1484]: time="2025-05-08T00:40:50.690232942Z" level=error msg="Failed to destroy network for sandbox \"bde47cd3e6359058f3595ac8dfbfadfccd92ce7798780782030abd85842d656a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:50.692430 containerd[1484]: time="2025-05-08T00:40:50.692311655Z" level=error msg="Failed to destroy network for sandbox \"9a513d4cdea6bdcabf13080377258c160ae6431d59e81315315ae40bcd292f7c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:50.694847 containerd[1484]: time="2025-05-08T00:40:50.694778361Z" level=error msg="encountered an error cleaning up failed sandbox \"bde47cd3e6359058f3595ac8dfbfadfccd92ce7798780782030abd85842d656a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:50.694994 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bde47cd3e6359058f3595ac8dfbfadfccd92ce7798780782030abd85842d656a-shm.mount: Deactivated successfully. May 8 00:40:50.697475 containerd[1484]: time="2025-05-08T00:40:50.697274527Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ndt8s,Uid:b968d45f-0186-4bf1-af0b-3789d578367b,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"bde47cd3e6359058f3595ac8dfbfadfccd92ce7798780782030abd85842d656a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:50.697475 containerd[1484]: time="2025-05-08T00:40:50.695865233Z" level=error msg="encountered an error cleaning up failed sandbox \"9a513d4cdea6bdcabf13080377258c160ae6431d59e81315315ae40bcd292f7c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:50.697475 containerd[1484]: time="2025-05-08T00:40:50.697414209Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66457cb4b-4cpwk,Uid:210d2f6d-8bde-4f98-93d8-48808afe079f,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"9a513d4cdea6bdcabf13080377258c160ae6431d59e81315315ae40bcd292f7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:50.698660 kubelet[2686]: E0508 00:40:50.698612 2686 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a513d4cdea6bdcabf13080377258c160ae6431d59e81315315ae40bcd292f7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 
00:40:50.698705 kubelet[2686]: E0508 00:40:50.698670 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a513d4cdea6bdcabf13080377258c160ae6431d59e81315315ae40bcd292f7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66457cb4b-4cpwk" May 8 00:40:50.698705 kubelet[2686]: E0508 00:40:50.698692 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a513d4cdea6bdcabf13080377258c160ae6431d59e81315315ae40bcd292f7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66457cb4b-4cpwk" May 8 00:40:50.699485 kubelet[2686]: E0508 00:40:50.698729 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66457cb4b-4cpwk_calico-system(210d2f6d-8bde-4f98-93d8-48808afe079f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66457cb4b-4cpwk_calico-system(210d2f6d-8bde-4f98-93d8-48808afe079f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a513d4cdea6bdcabf13080377258c160ae6431d59e81315315ae40bcd292f7c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66457cb4b-4cpwk" podUID="210d2f6d-8bde-4f98-93d8-48808afe079f" May 8 00:40:50.699799 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9a513d4cdea6bdcabf13080377258c160ae6431d59e81315315ae40bcd292f7c-shm.mount: Deactivated successfully. 
May 8 00:40:50.700604 kubelet[2686]: E0508 00:40:50.700176 2686 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bde47cd3e6359058f3595ac8dfbfadfccd92ce7798780782030abd85842d656a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:50.700604 kubelet[2686]: E0508 00:40:50.700270 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bde47cd3e6359058f3595ac8dfbfadfccd92ce7798780782030abd85842d656a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ndt8s" May 8 00:40:50.700604 kubelet[2686]: E0508 00:40:50.700284 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bde47cd3e6359058f3595ac8dfbfadfccd92ce7798780782030abd85842d656a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ndt8s" May 8 00:40:50.700836 kubelet[2686]: E0508 00:40:50.700310 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-ndt8s_kube-system(b968d45f-0186-4bf1-af0b-3789d578367b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-ndt8s_kube-system(b968d45f-0186-4bf1-af0b-3789d578367b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bde47cd3e6359058f3595ac8dfbfadfccd92ce7798780782030abd85842d656a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-ndt8s" podUID="b968d45f-0186-4bf1-af0b-3789d578367b" May 8 00:40:50.710743 containerd[1484]: time="2025-05-08T00:40:50.710521759Z" level=error msg="Failed to destroy network for sandbox \"c5ad96dfab1a745068eb1a3a66b475a1725feba97337b2c33ade11c962c3a174\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:50.712559 containerd[1484]: time="2025-05-08T00:40:50.712531720Z" level=error msg="encountered an error cleaning up failed sandbox \"c5ad96dfab1a745068eb1a3a66b475a1725feba97337b2c33ade11c962c3a174\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:50.713842 containerd[1484]: time="2025-05-08T00:40:50.713489430Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q8q6q,Uid:ae949d8a-9850-4b3f-b127-0cc79fb660b3,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"c5ad96dfab1a745068eb1a3a66b475a1725feba97337b2c33ade11c962c3a174\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" May 8 00:40:50.714772 kubelet[2686]: E0508 00:40:50.713601 2686 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5ad96dfab1a745068eb1a3a66b475a1725feba97337b2c33ade11c962c3a174\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:50.714772 kubelet[2686]: E0508 00:40:50.713630 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5ad96dfab1a745068eb1a3a66b475a1725feba97337b2c33ade11c962c3a174\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q8q6q" May 8 00:40:50.714772 kubelet[2686]: E0508 00:40:50.713646 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5ad96dfab1a745068eb1a3a66b475a1725feba97337b2c33ade11c962c3a174\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q8q6q" May 8 00:40:50.713543 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c5ad96dfab1a745068eb1a3a66b475a1725feba97337b2c33ade11c962c3a174-shm.mount: Deactivated successfully. May 8 00:40:50.715049 kubelet[2686]: E0508 00:40:50.713675 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-q8q6q_calico-system(ae949d8a-9850-4b3f-b127-0cc79fb660b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-q8q6q_calico-system(ae949d8a-9850-4b3f-b127-0cc79fb660b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5ad96dfab1a745068eb1a3a66b475a1725feba97337b2c33ade11c962c3a174\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q8q6q" podUID="ae949d8a-9850-4b3f-b127-0cc79fb660b3" May 8 00:40:51.304741 kubelet[2686]: I0508 00:40:51.303886 2686 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:40:51.304741 kubelet[2686]: E0508 00:40:51.304415 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:51.389507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount529747076.mount: Deactivated successfully. 
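
Alongside the sandbox failures, kubelet's dns.go repeatedly logs "Nameserver limits exceeded". This reflects the resolver's cap of three nameservers per resolv.conf: when more are configured, kubelet keeps the first three and logs the applied line (here 172.232.0.20 172.232.0.15 172.232.0.18). The helper below is only a sketch of that truncation under the three-entry assumption, not kubelet's actual code, and the fourth entry (8.8.8.8) is purely illustrative:

```go
// Hedged sketch of the behaviour behind kubelet's "Nameserver limits exceeded"
// warning: at most three nameservers are applied; extras are dropped.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // classic resolver (MAXNS) limit assumed here

func applyNameserverLimit(nameservers []string) ([]string, bool) {
	if len(nameservers) <= maxNameservers {
		return nameservers, false
	}
	return nameservers[:maxNameservers], true
}

func main() {
	// Three resolvers from the log plus one illustrative extra -> one gets omitted.
	upstream := []string{"172.232.0.20", "172.232.0.15", "172.232.0.18", "8.8.8.8"}
	applied, truncated := applyNameserverLimit(upstream)
	if truncated {
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(applied, " "))
	}
}
```
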
May 8 00:40:51.399824 kubelet[2686]: I0508 00:40:51.399000 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e33b4a4dc6d80c3a1fde4dc5c0d78c0176524e85e8d04cad28e813b7e768ec5b" May 8 00:40:51.400664 containerd[1484]: time="2025-05-08T00:40:51.400233089Z" level=info msg="StopPodSandbox for \"e33b4a4dc6d80c3a1fde4dc5c0d78c0176524e85e8d04cad28e813b7e768ec5b\"" May 8 00:40:51.400664 containerd[1484]: time="2025-05-08T00:40:51.400426581Z" level=info msg="Ensure that sandbox e33b4a4dc6d80c3a1fde4dc5c0d78c0176524e85e8d04cad28e813b7e768ec5b in task-service has been cleanup successfully" May 8 00:40:51.401310 containerd[1484]: time="2025-05-08T00:40:51.401247549Z" level=info msg="TearDown network for sandbox \"e33b4a4dc6d80c3a1fde4dc5c0d78c0176524e85e8d04cad28e813b7e768ec5b\" successfully" May 8 00:40:51.401310 containerd[1484]: time="2025-05-08T00:40:51.401268199Z" level=info msg="StopPodSandbox for \"e33b4a4dc6d80c3a1fde4dc5c0d78c0176524e85e8d04cad28e813b7e768ec5b\" returns successfully" May 8 00:40:51.403155 containerd[1484]: time="2025-05-08T00:40:51.403112387Z" level=info msg="StopPodSandbox for \"39c46626994dee89b35579716b77678b6012b531c8188c994c37595bb6d31d93\"" May 8 00:40:51.403285 containerd[1484]: time="2025-05-08T00:40:51.403194368Z" level=info msg="TearDown network for sandbox \"39c46626994dee89b35579716b77678b6012b531c8188c994c37595bb6d31d93\" successfully" May 8 00:40:51.403285 containerd[1484]: time="2025-05-08T00:40:51.403250319Z" level=info msg="StopPodSandbox for \"39c46626994dee89b35579716b77678b6012b531c8188c994c37595bb6d31d93\" returns successfully" May 8 00:40:51.405266 containerd[1484]: time="2025-05-08T00:40:51.403594183Z" level=info msg="StopPodSandbox for \"9937260851b563393977e3e75f5403e25c4987dfae8dbf9b01dbb4ee076a77d5\"" May 8 00:40:51.405266 containerd[1484]: time="2025-05-08T00:40:51.403666623Z" level=info msg="TearDown network for sandbox \"9937260851b563393977e3e75f5403e25c4987dfae8dbf9b01dbb4ee076a77d5\" successfully" May 8 00:40:51.405266 containerd[1484]: time="2025-05-08T00:40:51.403676753Z" level=info msg="StopPodSandbox for \"9937260851b563393977e3e75f5403e25c4987dfae8dbf9b01dbb4ee076a77d5\" returns successfully" May 8 00:40:51.405266 containerd[1484]: time="2025-05-08T00:40:51.404279639Z" level=info msg="StopPodSandbox for \"0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b\"" May 8 00:40:51.405266 containerd[1484]: time="2025-05-08T00:40:51.404348360Z" level=info msg="TearDown network for sandbox \"0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b\" successfully" May 8 00:40:51.405266 containerd[1484]: time="2025-05-08T00:40:51.404357990Z" level=info msg="StopPodSandbox for \"0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b\" returns successfully" May 8 00:40:51.405266 containerd[1484]: time="2025-05-08T00:40:51.404766124Z" level=info msg="StopPodSandbox for \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\"" May 8 00:40:51.405266 containerd[1484]: time="2025-05-08T00:40:51.404998666Z" level=info msg="TearDown network for sandbox \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\" successfully" May 8 00:40:51.405266 containerd[1484]: time="2025-05-08T00:40:51.405010926Z" level=info msg="StopPodSandbox for \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\" returns successfully" May 8 00:40:51.405713 containerd[1484]: time="2025-05-08T00:40:51.405532542Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-95f5468f8-vgm4z,Uid:e43f4851-92c0-4238-8905-f3f57d62dc20,Namespace:calico-apiserver,Attempt:5,}" May 8 00:40:51.406193 kubelet[2686]: I0508 00:40:51.406172 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a513d4cdea6bdcabf13080377258c160ae6431d59e81315315ae40bcd292f7c" May 8 00:40:51.406930 containerd[1484]: time="2025-05-08T00:40:51.406905786Z" level=info msg="StopPodSandbox for \"9a513d4cdea6bdcabf13080377258c160ae6431d59e81315315ae40bcd292f7c\"" May 8 00:40:51.407058 containerd[1484]: time="2025-05-08T00:40:51.407026896Z" level=info msg="Ensure that sandbox 9a513d4cdea6bdcabf13080377258c160ae6431d59e81315315ae40bcd292f7c in task-service has been cleanup successfully" May 8 00:40:51.408265 containerd[1484]: time="2025-05-08T00:40:51.408239478Z" level=info msg="TearDown network for sandbox \"9a513d4cdea6bdcabf13080377258c160ae6431d59e81315315ae40bcd292f7c\" successfully" May 8 00:40:51.408265 containerd[1484]: time="2025-05-08T00:40:51.408258268Z" level=info msg="StopPodSandbox for \"9a513d4cdea6bdcabf13080377258c160ae6431d59e81315315ae40bcd292f7c\" returns successfully" May 8 00:40:51.408552 containerd[1484]: time="2025-05-08T00:40:51.408523361Z" level=info msg="StopPodSandbox for \"0eefbbe621b98481a6bdbdf5f700a1b6a475fd2b2417eb76c79d6968bd1144ac\"" May 8 00:40:51.409042 containerd[1484]: time="2025-05-08T00:40:51.409012866Z" level=info msg="TearDown network for sandbox \"0eefbbe621b98481a6bdbdf5f700a1b6a475fd2b2417eb76c79d6968bd1144ac\" successfully" May 8 00:40:51.409042 containerd[1484]: time="2025-05-08T00:40:51.409032306Z" level=info msg="StopPodSandbox for \"0eefbbe621b98481a6bdbdf5f700a1b6a475fd2b2417eb76c79d6968bd1144ac\" returns successfully" May 8 00:40:51.409183 kubelet[2686]: I0508 00:40:51.409150 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bde47cd3e6359058f3595ac8dfbfadfccd92ce7798780782030abd85842d656a" May 8 00:40:51.409428 containerd[1484]: time="2025-05-08T00:40:51.409402540Z" level=info msg="StopPodSandbox for \"8cae29a8a2767ed20610c206868847aaaf1eeec42ad83cc21a9adb569e1822b1\"" May 8 00:40:51.409486 containerd[1484]: time="2025-05-08T00:40:51.409464721Z" level=info msg="TearDown network for sandbox \"8cae29a8a2767ed20610c206868847aaaf1eeec42ad83cc21a9adb569e1822b1\" successfully" May 8 00:40:51.409486 containerd[1484]: time="2025-05-08T00:40:51.409479521Z" level=info msg="StopPodSandbox for \"8cae29a8a2767ed20610c206868847aaaf1eeec42ad83cc21a9adb569e1822b1\" returns successfully" May 8 00:40:51.409984 containerd[1484]: time="2025-05-08T00:40:51.409948146Z" level=info msg="StopPodSandbox for \"bde47cd3e6359058f3595ac8dfbfadfccd92ce7798780782030abd85842d656a\"" May 8 00:40:51.411165 containerd[1484]: time="2025-05-08T00:40:51.411125987Z" level=info msg="Ensure that sandbox bde47cd3e6359058f3595ac8dfbfadfccd92ce7798780782030abd85842d656a in task-service has been cleanup successfully" May 8 00:40:51.411375 containerd[1484]: time="2025-05-08T00:40:51.411348709Z" level=info msg="TearDown network for sandbox \"bde47cd3e6359058f3595ac8dfbfadfccd92ce7798780782030abd85842d656a\" successfully" May 8 00:40:51.411375 containerd[1484]: time="2025-05-08T00:40:51.411369489Z" level=info msg="StopPodSandbox for \"bde47cd3e6359058f3595ac8dfbfadfccd92ce7798780782030abd85842d656a\" returns successfully" May 8 00:40:51.411445 containerd[1484]: time="2025-05-08T00:40:51.411416580Z" level=info msg="StopPodSandbox for 
\"fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617\"" May 8 00:40:51.411876 containerd[1484]: time="2025-05-08T00:40:51.411837324Z" level=info msg="TearDown network for sandbox \"fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617\" successfully" May 8 00:40:51.411876 containerd[1484]: time="2025-05-08T00:40:51.411859555Z" level=info msg="StopPodSandbox for \"fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617\" returns successfully" May 8 00:40:51.412737 containerd[1484]: time="2025-05-08T00:40:51.412708643Z" level=info msg="StopPodSandbox for \"0363d114f0850147b7c02766128fd397cd3e23f0fff675e538955447bd7de052\"" May 8 00:40:51.412794 containerd[1484]: time="2025-05-08T00:40:51.412772534Z" level=info msg="TearDown network for sandbox \"0363d114f0850147b7c02766128fd397cd3e23f0fff675e538955447bd7de052\" successfully" May 8 00:40:51.412794 containerd[1484]: time="2025-05-08T00:40:51.412788514Z" level=info msg="StopPodSandbox for \"0363d114f0850147b7c02766128fd397cd3e23f0fff675e538955447bd7de052\" returns successfully" May 8 00:40:51.412856 containerd[1484]: time="2025-05-08T00:40:51.412813204Z" level=info msg="StopPodSandbox for \"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\"" May 8 00:40:51.412876 containerd[1484]: time="2025-05-08T00:40:51.412859845Z" level=info msg="TearDown network for sandbox \"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\" successfully" May 8 00:40:51.412876 containerd[1484]: time="2025-05-08T00:40:51.412866875Z" level=info msg="StopPodSandbox for \"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\" returns successfully" May 8 00:40:51.413228 containerd[1484]: time="2025-05-08T00:40:51.413177147Z" level=info msg="StopPodSandbox for \"7cea920b458187df49fda92091eea31ad48f4aca2c9ceb6a5427ed042164e7e5\"" May 8 00:40:51.413857 containerd[1484]: time="2025-05-08T00:40:51.413308949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66457cb4b-4cpwk,Uid:210d2f6d-8bde-4f98-93d8-48808afe079f,Namespace:calico-system,Attempt:5,}" May 8 00:40:51.413901 containerd[1484]: time="2025-05-08T00:40:51.413766304Z" level=info msg="TearDown network for sandbox \"7cea920b458187df49fda92091eea31ad48f4aca2c9ceb6a5427ed042164e7e5\" successfully" May 8 00:40:51.414007 containerd[1484]: time="2025-05-08T00:40:51.413877285Z" level=info msg="StopPodSandbox for \"7cea920b458187df49fda92091eea31ad48f4aca2c9ceb6a5427ed042164e7e5\" returns successfully" May 8 00:40:51.414487 kubelet[2686]: E0508 00:40:51.414463 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:51.415189 containerd[1484]: time="2025-05-08T00:40:51.415151587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ndt8s,Uid:b968d45f-0186-4bf1-af0b-3789d578367b,Namespace:kube-system,Attempt:3,}" May 8 00:40:51.415644 kubelet[2686]: I0508 00:40:51.415597 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec6ff8a70cbba35fa6e95e096678c2bdd0d7d5f80a4f60615af90b7503e5e0cc" May 8 00:40:51.416538 containerd[1484]: time="2025-05-08T00:40:51.416511191Z" level=info msg="StopPodSandbox for \"ec6ff8a70cbba35fa6e95e096678c2bdd0d7d5f80a4f60615af90b7503e5e0cc\"" May 8 00:40:51.416649 containerd[1484]: time="2025-05-08T00:40:51.416628182Z" level=info msg="Ensure that sandbox 
ec6ff8a70cbba35fa6e95e096678c2bdd0d7d5f80a4f60615af90b7503e5e0cc in task-service has been cleanup successfully" May 8 00:40:51.416846 containerd[1484]: time="2025-05-08T00:40:51.416822764Z" level=info msg="TearDown network for sandbox \"ec6ff8a70cbba35fa6e95e096678c2bdd0d7d5f80a4f60615af90b7503e5e0cc\" successfully" May 8 00:40:51.416846 containerd[1484]: time="2025-05-08T00:40:51.416842074Z" level=info msg="StopPodSandbox for \"ec6ff8a70cbba35fa6e95e096678c2bdd0d7d5f80a4f60615af90b7503e5e0cc\" returns successfully" May 8 00:40:51.417171 containerd[1484]: time="2025-05-08T00:40:51.417149246Z" level=info msg="StopPodSandbox for \"254038beb8ae5d079f77bb4cc30649336b0ab3f4ded1b86b5b65e35d4a85d7c4\"" May 8 00:40:51.417317 containerd[1484]: time="2025-05-08T00:40:51.417302558Z" level=info msg="TearDown network for sandbox \"254038beb8ae5d079f77bb4cc30649336b0ab3f4ded1b86b5b65e35d4a85d7c4\" successfully" May 8 00:40:51.417398 containerd[1484]: time="2025-05-08T00:40:51.417355459Z" level=info msg="StopPodSandbox for \"254038beb8ae5d079f77bb4cc30649336b0ab3f4ded1b86b5b65e35d4a85d7c4\" returns successfully" May 8 00:40:51.418142 containerd[1484]: time="2025-05-08T00:40:51.417985826Z" level=info msg="StopPodSandbox for \"1a5ec6334fbdcfbfe62e19da00bca8aef58ac57622a4388aeed2a44d18bc94a8\"" May 8 00:40:51.418142 containerd[1484]: time="2025-05-08T00:40:51.418047235Z" level=info msg="TearDown network for sandbox \"1a5ec6334fbdcfbfe62e19da00bca8aef58ac57622a4388aeed2a44d18bc94a8\" successfully" May 8 00:40:51.418142 containerd[1484]: time="2025-05-08T00:40:51.418054955Z" level=info msg="StopPodSandbox for \"1a5ec6334fbdcfbfe62e19da00bca8aef58ac57622a4388aeed2a44d18bc94a8\" returns successfully" May 8 00:40:51.418346 kubelet[2686]: E0508 00:40:51.418324 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:51.418520 containerd[1484]: time="2025-05-08T00:40:51.418505400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t5sjv,Uid:1d395b40-74ec-4d21-9505-050a6c6424b9,Namespace:kube-system,Attempt:3,}" May 8 00:40:51.419588 kubelet[2686]: I0508 00:40:51.419569 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5ad96dfab1a745068eb1a3a66b475a1725feba97337b2c33ade11c962c3a174" May 8 00:40:51.420762 containerd[1484]: time="2025-05-08T00:40:51.420738053Z" level=info msg="StopPodSandbox for \"c5ad96dfab1a745068eb1a3a66b475a1725feba97337b2c33ade11c962c3a174\"" May 8 00:40:51.420908 containerd[1484]: time="2025-05-08T00:40:51.420885434Z" level=info msg="Ensure that sandbox c5ad96dfab1a745068eb1a3a66b475a1725feba97337b2c33ade11c962c3a174 in task-service has been cleanup successfully" May 8 00:40:51.421226 containerd[1484]: time="2025-05-08T00:40:51.421149866Z" level=info msg="TearDown network for sandbox \"c5ad96dfab1a745068eb1a3a66b475a1725feba97337b2c33ade11c962c3a174\" successfully" May 8 00:40:51.421226 containerd[1484]: time="2025-05-08T00:40:51.421170146Z" level=info msg="StopPodSandbox for \"c5ad96dfab1a745068eb1a3a66b475a1725feba97337b2c33ade11c962c3a174\" returns successfully" May 8 00:40:51.421946 containerd[1484]: time="2025-05-08T00:40:51.421757263Z" level=info msg="StopPodSandbox for \"f0296a803419be9d1d07c49fe0f38383fa7b4a6358bd79cd42ebf747d905f976\"" May 8 00:40:51.421946 containerd[1484]: time="2025-05-08T00:40:51.421841904Z" level=info msg="TearDown network for sandbox 
\"f0296a803419be9d1d07c49fe0f38383fa7b4a6358bd79cd42ebf747d905f976\" successfully" May 8 00:40:51.421946 containerd[1484]: time="2025-05-08T00:40:51.421851704Z" level=info msg="StopPodSandbox for \"f0296a803419be9d1d07c49fe0f38383fa7b4a6358bd79cd42ebf747d905f976\" returns successfully" May 8 00:40:51.422840 containerd[1484]: time="2025-05-08T00:40:51.422803373Z" level=info msg="StopPodSandbox for \"b24804b59f7550a33579c0cf26310cbe31aa450307391cfd683dc6aeace768b1\"" May 8 00:40:51.422960 containerd[1484]: time="2025-05-08T00:40:51.422947785Z" level=info msg="TearDown network for sandbox \"b24804b59f7550a33579c0cf26310cbe31aa450307391cfd683dc6aeace768b1\" successfully" May 8 00:40:51.423021 containerd[1484]: time="2025-05-08T00:40:51.423011494Z" level=info msg="StopPodSandbox for \"b24804b59f7550a33579c0cf26310cbe31aa450307391cfd683dc6aeace768b1\" returns successfully" May 8 00:40:51.424121 containerd[1484]: time="2025-05-08T00:40:51.424079845Z" level=info msg="StopPodSandbox for \"0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8\"" May 8 00:40:51.424233 containerd[1484]: time="2025-05-08T00:40:51.424159066Z" level=info msg="TearDown network for sandbox \"0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8\" successfully" May 8 00:40:51.424233 containerd[1484]: time="2025-05-08T00:40:51.424174326Z" level=info msg="StopPodSandbox for \"0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8\" returns successfully" May 8 00:40:51.424505 containerd[1484]: time="2025-05-08T00:40:51.424470049Z" level=info msg="StopPodSandbox for \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\"" May 8 00:40:51.426095 containerd[1484]: time="2025-05-08T00:40:51.425045715Z" level=info msg="TearDown network for sandbox \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\" successfully" May 8 00:40:51.426095 containerd[1484]: time="2025-05-08T00:40:51.425063525Z" level=info msg="StopPodSandbox for \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\" returns successfully" May 8 00:40:51.426144 kubelet[2686]: E0508 00:40:51.425743 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:51.426144 kubelet[2686]: I0508 00:40:51.425788 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dfad420ea9956b0a4b0082dddb85016cd6d1d8d863452fcd9fd2431dcb13dbd" May 8 00:40:51.426367 containerd[1484]: time="2025-05-08T00:40:51.426348908Z" level=info msg="StopPodSandbox for \"6dfad420ea9956b0a4b0082dddb85016cd6d1d8d863452fcd9fd2431dcb13dbd\"" May 8 00:40:51.426692 containerd[1484]: time="2025-05-08T00:40:51.426676541Z" level=info msg="Ensure that sandbox 6dfad420ea9956b0a4b0082dddb85016cd6d1d8d863452fcd9fd2431dcb13dbd in task-service has been cleanup successfully" May 8 00:40:51.427004 containerd[1484]: time="2025-05-08T00:40:51.426988564Z" level=info msg="TearDown network for sandbox \"6dfad420ea9956b0a4b0082dddb85016cd6d1d8d863452fcd9fd2431dcb13dbd\" successfully" May 8 00:40:51.427083 containerd[1484]: time="2025-05-08T00:40:51.427070405Z" level=info msg="StopPodSandbox for \"6dfad420ea9956b0a4b0082dddb85016cd6d1d8d863452fcd9fd2431dcb13dbd\" returns successfully" May 8 00:40:51.427825 containerd[1484]: time="2025-05-08T00:40:51.427808433Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-q8q6q,Uid:ae949d8a-9850-4b3f-b127-0cc79fb660b3,Namespace:calico-system,Attempt:5,}" May 8 00:40:51.427992 containerd[1484]: time="2025-05-08T00:40:51.427860583Z" level=info msg="StopPodSandbox for \"630007c92a4d1ebcdf28586d606b3e4f216eec5ec510cadc3cc9d562df67cf0a\"" May 8 00:40:51.428061 containerd[1484]: time="2025-05-08T00:40:51.428040074Z" level=info msg="TearDown network for sandbox \"630007c92a4d1ebcdf28586d606b3e4f216eec5ec510cadc3cc9d562df67cf0a\" successfully" May 8 00:40:51.428061 containerd[1484]: time="2025-05-08T00:40:51.428055525Z" level=info msg="StopPodSandbox for \"630007c92a4d1ebcdf28586d606b3e4f216eec5ec510cadc3cc9d562df67cf0a\" returns successfully" May 8 00:40:51.428334 containerd[1484]: time="2025-05-08T00:40:51.428307897Z" level=info msg="StopPodSandbox for \"7215bbf85862b33af582b682e67e348d877731a8d7124c168f8bf683231e2b75\"" May 8 00:40:51.428405 containerd[1484]: time="2025-05-08T00:40:51.428383868Z" level=info msg="TearDown network for sandbox \"7215bbf85862b33af582b682e67e348d877731a8d7124c168f8bf683231e2b75\" successfully" May 8 00:40:51.428405 containerd[1484]: time="2025-05-08T00:40:51.428400718Z" level=info msg="StopPodSandbox for \"7215bbf85862b33af582b682e67e348d877731a8d7124c168f8bf683231e2b75\" returns successfully" May 8 00:40:51.428637 containerd[1484]: time="2025-05-08T00:40:51.428601081Z" level=info msg="StopPodSandbox for \"4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2\"" May 8 00:40:51.428896 containerd[1484]: time="2025-05-08T00:40:51.428692452Z" level=info msg="TearDown network for sandbox \"4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2\" successfully" May 8 00:40:51.428966 containerd[1484]: time="2025-05-08T00:40:51.428942854Z" level=info msg="StopPodSandbox for \"4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2\" returns successfully" May 8 00:40:51.430279 containerd[1484]: time="2025-05-08T00:40:51.430262987Z" level=info msg="StopPodSandbox for \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\"" May 8 00:40:51.430432 containerd[1484]: time="2025-05-08T00:40:51.430419528Z" level=info msg="TearDown network for sandbox \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\" successfully" May 8 00:40:51.430609 containerd[1484]: time="2025-05-08T00:40:51.430584160Z" level=info msg="StopPodSandbox for \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\" returns successfully" May 8 00:40:51.431064 containerd[1484]: time="2025-05-08T00:40:51.431036974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95f5468f8-zknsp,Uid:a2974be7-7581-4fce-a16e-15f650ba010f,Namespace:calico-apiserver,Attempt:5,}" May 8 00:40:51.454502 containerd[1484]: time="2025-05-08T00:40:51.454269935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 8 00:40:51.456338 containerd[1484]: time="2025-05-08T00:40:51.456310905Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:51.458694 containerd[1484]: time="2025-05-08T00:40:51.458674359Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:51.463185 containerd[1484]: time="2025-05-08T00:40:51.463145553Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:51.467197 containerd[1484]: time="2025-05-08T00:40:51.465592727Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 4.232485601s" May 8 00:40:51.467197 containerd[1484]: time="2025-05-08T00:40:51.467003411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 8 00:40:51.478635 containerd[1484]: time="2025-05-08T00:40:51.478598386Z" level=info msg="CreateContainer within sandbox \"3acb7485addf6eb8d4a63ecf8d3d155204cb260d837e2abb3fda877d017287d9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 8 00:40:51.523300 containerd[1484]: time="2025-05-08T00:40:51.523270289Z" level=info msg="CreateContainer within sandbox \"3acb7485addf6eb8d4a63ecf8d3d155204cb260d837e2abb3fda877d017287d9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7635c9802b82a333acf7e1b30f9dc3597259e7b405905bd4ca8c670c562f4b42\"" May 8 00:40:51.527399 containerd[1484]: time="2025-05-08T00:40:51.527377280Z" level=info msg="StartContainer for \"7635c9802b82a333acf7e1b30f9dc3597259e7b405905bd4ca8c670c562f4b42\"" May 8 00:40:51.623800 systemd[1]: Started cri-containerd-7635c9802b82a333acf7e1b30f9dc3597259e7b405905bd4ca8c670c562f4b42.scope - libcontainer container 7635c9802b82a333acf7e1b30f9dc3597259e7b405905bd4ca8c670c562f4b42. 
May 8 00:40:51.667565 containerd[1484]: time="2025-05-08T00:40:51.667533130Z" level=error msg="Failed to destroy network for sandbox \"0f2de833dfefbd5720866b0bc3d59de40dcc8bf98d512cffff4abe4f136357df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:51.668627 containerd[1484]: time="2025-05-08T00:40:51.668489559Z" level=error msg="encountered an error cleaning up failed sandbox \"0f2de833dfefbd5720866b0bc3d59de40dcc8bf98d512cffff4abe4f136357df\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:51.668627 containerd[1484]: time="2025-05-08T00:40:51.668543720Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t5sjv,Uid:1d395b40-74ec-4d21-9505-050a6c6424b9,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0f2de833dfefbd5720866b0bc3d59de40dcc8bf98d512cffff4abe4f136357df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:51.668884 kubelet[2686]: E0508 00:40:51.668731 2686 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f2de833dfefbd5720866b0bc3d59de40dcc8bf98d512cffff4abe4f136357df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:51.669051 kubelet[2686]: E0508 00:40:51.669021 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f2de833dfefbd5720866b0bc3d59de40dcc8bf98d512cffff4abe4f136357df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-t5sjv" May 8 00:40:51.669251 kubelet[2686]: E0508 00:40:51.669053 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f2de833dfefbd5720866b0bc3d59de40dcc8bf98d512cffff4abe4f136357df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-t5sjv" May 8 00:40:51.670336 kubelet[2686]: E0508 00:40:51.669281 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-t5sjv_kube-system(1d395b40-74ec-4d21-9505-050a6c6424b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-t5sjv_kube-system(1d395b40-74ec-4d21-9505-050a6c6424b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f2de833dfefbd5720866b0bc3d59de40dcc8bf98d512cffff4abe4f136357df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-7db6d8ff4d-t5sjv" podUID="1d395b40-74ec-4d21-9505-050a6c6424b9" May 8 00:40:51.679420 containerd[1484]: time="2025-05-08T00:40:51.679396747Z" level=error msg="Failed to destroy network for sandbox \"1ca2b85783fd6bd4f652f2f90f40057742a83347d2d35c8abcb30eaf7f454804\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:51.679781 containerd[1484]: time="2025-05-08T00:40:51.679759541Z" level=error msg="encountered an error cleaning up failed sandbox \"1ca2b85783fd6bd4f652f2f90f40057742a83347d2d35c8abcb30eaf7f454804\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:51.679867 containerd[1484]: time="2025-05-08T00:40:51.679848092Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95f5468f8-vgm4z,Uid:e43f4851-92c0-4238-8905-f3f57d62dc20,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"1ca2b85783fd6bd4f652f2f90f40057742a83347d2d35c8abcb30eaf7f454804\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:51.681369 kubelet[2686]: E0508 00:40:51.681261 2686 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ca2b85783fd6bd4f652f2f90f40057742a83347d2d35c8abcb30eaf7f454804\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:51.681369 kubelet[2686]: E0508 00:40:51.681321 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ca2b85783fd6bd4f652f2f90f40057742a83347d2d35c8abcb30eaf7f454804\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95f5468f8-vgm4z" May 8 00:40:51.681369 kubelet[2686]: E0508 00:40:51.681339 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ca2b85783fd6bd4f652f2f90f40057742a83347d2d35c8abcb30eaf7f454804\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95f5468f8-vgm4z" May 8 00:40:51.683364 kubelet[2686]: E0508 00:40:51.683324 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-95f5468f8-vgm4z_calico-apiserver(e43f4851-92c0-4238-8905-f3f57d62dc20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-95f5468f8-vgm4z_calico-apiserver(e43f4851-92c0-4238-8905-f3f57d62dc20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ca2b85783fd6bd4f652f2f90f40057742a83347d2d35c8abcb30eaf7f454804\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-95f5468f8-vgm4z" podUID="e43f4851-92c0-4238-8905-f3f57d62dc20" May 8 00:40:51.690162 containerd[1484]: time="2025-05-08T00:40:51.690138323Z" level=error msg="Failed to destroy network for sandbox \"491563f13a9bd95b4394d1b2dd8037bc2e59325d908479d3a7435078526c88fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:51.690581 containerd[1484]: time="2025-05-08T00:40:51.690559138Z" level=error msg="encountered an error cleaning up failed sandbox \"491563f13a9bd95b4394d1b2dd8037bc2e59325d908479d3a7435078526c88fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:51.690708 containerd[1484]: time="2025-05-08T00:40:51.690672699Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q8q6q,Uid:ae949d8a-9850-4b3f-b127-0cc79fb660b3,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"491563f13a9bd95b4394d1b2dd8037bc2e59325d908479d3a7435078526c88fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:51.691230 kubelet[2686]: E0508 00:40:51.690896 2686 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"491563f13a9bd95b4394d1b2dd8037bc2e59325d908479d3a7435078526c88fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:51.691230 kubelet[2686]: E0508 00:40:51.690951 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"491563f13a9bd95b4394d1b2dd8037bc2e59325d908479d3a7435078526c88fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q8q6q" May 8 00:40:51.691230 kubelet[2686]: E0508 00:40:51.690976 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"491563f13a9bd95b4394d1b2dd8037bc2e59325d908479d3a7435078526c88fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q8q6q" May 8 00:40:51.691320 kubelet[2686]: E0508 00:40:51.691019 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-q8q6q_calico-system(ae949d8a-9850-4b3f-b127-0cc79fb660b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-q8q6q_calico-system(ae949d8a-9850-4b3f-b127-0cc79fb660b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"491563f13a9bd95b4394d1b2dd8037bc2e59325d908479d3a7435078526c88fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q8q6q" podUID="ae949d8a-9850-4b3f-b127-0cc79fb660b3" May 8 00:40:51.692345 systemd[1]: run-netns-cni\x2df8dc3df9\x2da56b\x2dc828\x2df6b5\x2d483e04866f36.mount: Deactivated successfully. May 8 00:40:51.692453 systemd[1]: run-netns-cni\x2d055dfa3b\x2df12b\x2d6220\x2d11ec\x2d70fbc529bde1.mount: Deactivated successfully. May 8 00:40:51.692534 systemd[1]: run-netns-cni\x2d88396a97\x2d1386\x2dfab4\x2df977\x2ddb1eda5d7087.mount: Deactivated successfully. May 8 00:40:51.692611 systemd[1]: run-netns-cni\x2dae845b42\x2d7d5c\x2d268f\x2d3292\x2d833b61ac0999.mount: Deactivated successfully. May 8 00:40:51.692681 systemd[1]: run-netns-cni\x2d3bf7920e\x2d4d39\x2d0755\x2d1787\x2df39302756ca3.mount: Deactivated successfully. May 8 00:40:51.692744 systemd[1]: run-netns-cni\x2d6e0eba2b\x2d0ed2\x2d6d09\x2d8cb1\x2d468e491962e2.mount: Deactivated successfully. May 8 00:40:51.700398 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-491563f13a9bd95b4394d1b2dd8037bc2e59325d908479d3a7435078526c88fe-shm.mount: Deactivated successfully. May 8 00:40:51.701691 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1ca2b85783fd6bd4f652f2f90f40057742a83347d2d35c8abcb30eaf7f454804-shm.mount: Deactivated successfully. May 8 00:40:51.706391 containerd[1484]: time="2025-05-08T00:40:51.706348004Z" level=error msg="Failed to destroy network for sandbox \"16f7dd20a2911fa3a110c8ca4f086482370f91dd219076c74240a3773db1996c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:51.706762 containerd[1484]: time="2025-05-08T00:40:51.706727768Z" level=error msg="encountered an error cleaning up failed sandbox \"16f7dd20a2911fa3a110c8ca4f086482370f91dd219076c74240a3773db1996c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:51.707242 containerd[1484]: time="2025-05-08T00:40:51.706773879Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ndt8s,Uid:b968d45f-0186-4bf1-af0b-3789d578367b,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"16f7dd20a2911fa3a110c8ca4f086482370f91dd219076c74240a3773db1996c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:51.707296 kubelet[2686]: E0508 00:40:51.707021 2686 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16f7dd20a2911fa3a110c8ca4f086482370f91dd219076c74240a3773db1996c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:51.707296 kubelet[2686]: E0508 00:40:51.707060 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"16f7dd20a2911fa3a110c8ca4f086482370f91dd219076c74240a3773db1996c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ndt8s" May 8 00:40:51.707296 kubelet[2686]: E0508 00:40:51.707079 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16f7dd20a2911fa3a110c8ca4f086482370f91dd219076c74240a3773db1996c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ndt8s" May 8 00:40:51.707372 kubelet[2686]: E0508 00:40:51.707111 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-ndt8s_kube-system(b968d45f-0186-4bf1-af0b-3789d578367b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-ndt8s_kube-system(b968d45f-0186-4bf1-af0b-3789d578367b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16f7dd20a2911fa3a110c8ca4f086482370f91dd219076c74240a3773db1996c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-ndt8s" podUID="b968d45f-0186-4bf1-af0b-3789d578367b" May 8 00:40:51.710461 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-16f7dd20a2911fa3a110c8ca4f086482370f91dd219076c74240a3773db1996c-shm.mount: Deactivated successfully. May 8 00:40:51.718056 containerd[1484]: time="2025-05-08T00:40:51.718002299Z" level=error msg="Failed to destroy network for sandbox \"d7068f82f3e53ecbf49d46a9f7e05e37c404556af594a559eafcca79e77513f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:51.719387 containerd[1484]: time="2025-05-08T00:40:51.719330793Z" level=error msg="encountered an error cleaning up failed sandbox \"d7068f82f3e53ecbf49d46a9f7e05e37c404556af594a559eafcca79e77513f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:51.719591 containerd[1484]: time="2025-05-08T00:40:51.719558535Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95f5468f8-zknsp,Uid:a2974be7-7581-4fce-a16e-15f650ba010f,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"d7068f82f3e53ecbf49d46a9f7e05e37c404556af594a559eafcca79e77513f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:51.719902 kubelet[2686]: E0508 00:40:51.719867 2686 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7068f82f3e53ecbf49d46a9f7e05e37c404556af594a559eafcca79e77513f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" May 8 00:40:51.719954 kubelet[2686]: E0508 00:40:51.719927 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7068f82f3e53ecbf49d46a9f7e05e37c404556af594a559eafcca79e77513f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95f5468f8-zknsp" May 8 00:40:51.719980 kubelet[2686]: E0508 00:40:51.719949 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7068f82f3e53ecbf49d46a9f7e05e37c404556af594a559eafcca79e77513f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95f5468f8-zknsp" May 8 00:40:51.720159 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d7068f82f3e53ecbf49d46a9f7e05e37c404556af594a559eafcca79e77513f2-shm.mount: Deactivated successfully. May 8 00:40:51.720333 kubelet[2686]: E0508 00:40:51.720253 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-95f5468f8-zknsp_calico-apiserver(a2974be7-7581-4fce-a16e-15f650ba010f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-95f5468f8-zknsp_calico-apiserver(a2974be7-7581-4fce-a16e-15f650ba010f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7068f82f3e53ecbf49d46a9f7e05e37c404556af594a559eafcca79e77513f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-95f5468f8-zknsp" podUID="a2974be7-7581-4fce-a16e-15f650ba010f" May 8 00:40:51.722052 containerd[1484]: time="2025-05-08T00:40:51.721918639Z" level=error msg="Failed to destroy network for sandbox \"e563b99f9ec18a1cafc69f7bbfea2fceaee1fc6dae33dc5d3f738178148ec414\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:51.723075 containerd[1484]: time="2025-05-08T00:40:51.722456924Z" level=error msg="encountered an error cleaning up failed sandbox \"e563b99f9ec18a1cafc69f7bbfea2fceaee1fc6dae33dc5d3f738178148ec414\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:51.723075 containerd[1484]: time="2025-05-08T00:40:51.722495334Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66457cb4b-4cpwk,Uid:210d2f6d-8bde-4f98-93d8-48808afe079f,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"e563b99f9ec18a1cafc69f7bbfea2fceaee1fc6dae33dc5d3f738178148ec414\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:51.723194 kubelet[2686]: E0508 00:40:51.722635 2686 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e563b99f9ec18a1cafc69f7bbfea2fceaee1fc6dae33dc5d3f738178148ec414\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:51.723194 kubelet[2686]: E0508 00:40:51.722665 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e563b99f9ec18a1cafc69f7bbfea2fceaee1fc6dae33dc5d3f738178148ec414\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66457cb4b-4cpwk" May 8 00:40:51.723194 kubelet[2686]: E0508 00:40:51.722681 2686 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e563b99f9ec18a1cafc69f7bbfea2fceaee1fc6dae33dc5d3f738178148ec414\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66457cb4b-4cpwk" May 8 00:40:51.723447 kubelet[2686]: E0508 00:40:51.722709 2686 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66457cb4b-4cpwk_calico-system(210d2f6d-8bde-4f98-93d8-48808afe079f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66457cb4b-4cpwk_calico-system(210d2f6d-8bde-4f98-93d8-48808afe079f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e563b99f9ec18a1cafc69f7bbfea2fceaee1fc6dae33dc5d3f738178148ec414\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66457cb4b-4cpwk" podUID="210d2f6d-8bde-4f98-93d8-48808afe079f" May 8 00:40:51.724917 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e563b99f9ec18a1cafc69f7bbfea2fceaee1fc6dae33dc5d3f738178148ec414-shm.mount: Deactivated successfully. May 8 00:40:51.741287 containerd[1484]: time="2025-05-08T00:40:51.740894367Z" level=info msg="StartContainer for \"7635c9802b82a333acf7e1b30f9dc3597259e7b405905bd4ca8c670c562f4b42\" returns successfully" May 8 00:40:51.808272 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 8 00:40:51.808363 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
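
With the calico-node container now running ("StartContainer ... returns successfully"), the kernel loads the WireGuard module, which Calico can use for encrypted pod-to-pod traffic when that feature is enabled. A hedged sketch of one way a node agent could probe for the module the kernel messages above advertise — Calico's actual detection logic is not shown in this log; /sys/module/wireguard is simply where a loaded module appears in sysfs:

```go
// Hedged sketch (assumption, not Calico's code): check sysfs for the wireguard module.
package main

import (
	"fmt"
	"os"
)

func wireguardLoaded() bool {
	info, err := os.Stat("/sys/module/wireguard")
	return err == nil && info.IsDir()
}

func main() {
	if wireguardLoaded() {
		fmt.Println("wireguard kernel module is loaded; encrypted pod-to-pod traffic is possible")
	} else {
		fmt.Println("wireguard kernel module not loaded")
	}
}
```
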
May 8 00:40:52.429132 kubelet[2686]: I0508 00:40:52.429106 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e563b99f9ec18a1cafc69f7bbfea2fceaee1fc6dae33dc5d3f738178148ec414" May 8 00:40:52.430064 containerd[1484]: time="2025-05-08T00:40:52.429993056Z" level=info msg="StopPodSandbox for \"e563b99f9ec18a1cafc69f7bbfea2fceaee1fc6dae33dc5d3f738178148ec414\"" May 8 00:40:52.434307 containerd[1484]: time="2025-05-08T00:40:52.430259309Z" level=info msg="Ensure that sandbox e563b99f9ec18a1cafc69f7bbfea2fceaee1fc6dae33dc5d3f738178148ec414 in task-service has been cleanup successfully" May 8 00:40:52.434307 containerd[1484]: time="2025-05-08T00:40:52.430687513Z" level=info msg="TearDown network for sandbox \"e563b99f9ec18a1cafc69f7bbfea2fceaee1fc6dae33dc5d3f738178148ec414\" successfully" May 8 00:40:52.434307 containerd[1484]: time="2025-05-08T00:40:52.430702964Z" level=info msg="StopPodSandbox for \"e563b99f9ec18a1cafc69f7bbfea2fceaee1fc6dae33dc5d3f738178148ec414\" returns successfully" May 8 00:40:52.434307 containerd[1484]: time="2025-05-08T00:40:52.432720252Z" level=info msg="StopPodSandbox for \"9a513d4cdea6bdcabf13080377258c160ae6431d59e81315315ae40bcd292f7c\"" May 8 00:40:52.434307 containerd[1484]: time="2025-05-08T00:40:52.432819583Z" level=info msg="TearDown network for sandbox \"9a513d4cdea6bdcabf13080377258c160ae6431d59e81315315ae40bcd292f7c\" successfully" May 8 00:40:52.434307 containerd[1484]: time="2025-05-08T00:40:52.432832123Z" level=info msg="StopPodSandbox for \"9a513d4cdea6bdcabf13080377258c160ae6431d59e81315315ae40bcd292f7c\" returns successfully" May 8 00:40:52.434307 containerd[1484]: time="2025-05-08T00:40:52.433164526Z" level=info msg="StopPodSandbox for \"0eefbbe621b98481a6bdbdf5f700a1b6a475fd2b2417eb76c79d6968bd1144ac\"" May 8 00:40:52.434307 containerd[1484]: time="2025-05-08T00:40:52.433457289Z" level=info msg="TearDown network for sandbox \"0eefbbe621b98481a6bdbdf5f700a1b6a475fd2b2417eb76c79d6968bd1144ac\" successfully" May 8 00:40:52.434307 containerd[1484]: time="2025-05-08T00:40:52.433472319Z" level=info msg="StopPodSandbox for \"0eefbbe621b98481a6bdbdf5f700a1b6a475fd2b2417eb76c79d6968bd1144ac\" returns successfully" May 8 00:40:52.434532 containerd[1484]: time="2025-05-08T00:40:52.434363737Z" level=info msg="StopPodSandbox for \"8cae29a8a2767ed20610c206868847aaaf1eeec42ad83cc21a9adb569e1822b1\"" May 8 00:40:52.434532 containerd[1484]: time="2025-05-08T00:40:52.434443848Z" level=info msg="TearDown network for sandbox \"8cae29a8a2767ed20610c206868847aaaf1eeec42ad83cc21a9adb569e1822b1\" successfully" May 8 00:40:52.434532 containerd[1484]: time="2025-05-08T00:40:52.434456008Z" level=info msg="StopPodSandbox for \"8cae29a8a2767ed20610c206868847aaaf1eeec42ad83cc21a9adb569e1822b1\" returns successfully" May 8 00:40:52.435855 containerd[1484]: time="2025-05-08T00:40:52.434881462Z" level=info msg="StopPodSandbox for \"fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617\"" May 8 00:40:52.435855 containerd[1484]: time="2025-05-08T00:40:52.434963823Z" level=info msg="TearDown network for sandbox \"fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617\" successfully" May 8 00:40:52.435855 containerd[1484]: time="2025-05-08T00:40:52.434972943Z" level=info msg="StopPodSandbox for \"fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617\" returns successfully" May 8 00:40:52.435855 containerd[1484]: time="2025-05-08T00:40:52.435556508Z" level=info msg="StopPodSandbox for 
\"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\"" May 8 00:40:52.435855 containerd[1484]: time="2025-05-08T00:40:52.435659819Z" level=info msg="TearDown network for sandbox \"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\" successfully" May 8 00:40:52.435855 containerd[1484]: time="2025-05-08T00:40:52.435680530Z" level=info msg="StopPodSandbox for \"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\" returns successfully" May 8 00:40:52.436236 kubelet[2686]: I0508 00:40:52.435196 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16f7dd20a2911fa3a110c8ca4f086482370f91dd219076c74240a3773db1996c" May 8 00:40:52.436532 containerd[1484]: time="2025-05-08T00:40:52.436369096Z" level=info msg="StopPodSandbox for \"16f7dd20a2911fa3a110c8ca4f086482370f91dd219076c74240a3773db1996c\"" May 8 00:40:52.436709 containerd[1484]: time="2025-05-08T00:40:52.436692639Z" level=info msg="Ensure that sandbox 16f7dd20a2911fa3a110c8ca4f086482370f91dd219076c74240a3773db1996c in task-service has been cleanup successfully" May 8 00:40:52.436853 containerd[1484]: time="2025-05-08T00:40:52.436824540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66457cb4b-4cpwk,Uid:210d2f6d-8bde-4f98-93d8-48808afe079f,Namespace:calico-system,Attempt:6,}" May 8 00:40:52.437091 containerd[1484]: time="2025-05-08T00:40:52.437013171Z" level=info msg="TearDown network for sandbox \"16f7dd20a2911fa3a110c8ca4f086482370f91dd219076c74240a3773db1996c\" successfully" May 8 00:40:52.437091 containerd[1484]: time="2025-05-08T00:40:52.437029141Z" level=info msg="StopPodSandbox for \"16f7dd20a2911fa3a110c8ca4f086482370f91dd219076c74240a3773db1996c\" returns successfully" May 8 00:40:52.438628 containerd[1484]: time="2025-05-08T00:40:52.438456655Z" level=info msg="StopPodSandbox for \"bde47cd3e6359058f3595ac8dfbfadfccd92ce7798780782030abd85842d656a\"" May 8 00:40:52.438628 containerd[1484]: time="2025-05-08T00:40:52.438564726Z" level=info msg="TearDown network for sandbox \"bde47cd3e6359058f3595ac8dfbfadfccd92ce7798780782030abd85842d656a\" successfully" May 8 00:40:52.438628 containerd[1484]: time="2025-05-08T00:40:52.438576756Z" level=info msg="StopPodSandbox for \"bde47cd3e6359058f3595ac8dfbfadfccd92ce7798780782030abd85842d656a\" returns successfully" May 8 00:40:52.440051 containerd[1484]: time="2025-05-08T00:40:52.440011079Z" level=info msg="StopPodSandbox for \"0363d114f0850147b7c02766128fd397cd3e23f0fff675e538955447bd7de052\"" May 8 00:40:52.441123 containerd[1484]: time="2025-05-08T00:40:52.440634485Z" level=info msg="TearDown network for sandbox \"0363d114f0850147b7c02766128fd397cd3e23f0fff675e538955447bd7de052\" successfully" May 8 00:40:52.441123 containerd[1484]: time="2025-05-08T00:40:52.440651655Z" level=info msg="StopPodSandbox for \"0363d114f0850147b7c02766128fd397cd3e23f0fff675e538955447bd7de052\" returns successfully" May 8 00:40:52.441707 containerd[1484]: time="2025-05-08T00:40:52.441677165Z" level=info msg="StopPodSandbox for \"7cea920b458187df49fda92091eea31ad48f4aca2c9ceb6a5427ed042164e7e5\"" May 8 00:40:52.442187 containerd[1484]: time="2025-05-08T00:40:52.441901417Z" level=info msg="TearDown network for sandbox \"7cea920b458187df49fda92091eea31ad48f4aca2c9ceb6a5427ed042164e7e5\" successfully" May 8 00:40:52.442187 containerd[1484]: time="2025-05-08T00:40:52.441974698Z" level=info msg="StopPodSandbox for \"7cea920b458187df49fda92091eea31ad48f4aca2c9ceb6a5427ed042164e7e5\" returns successfully" May 8 
00:40:52.442847 kubelet[2686]: I0508 00:40:52.442367 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f2de833dfefbd5720866b0bc3d59de40dcc8bf98d512cffff4abe4f136357df" May 8 00:40:52.442946 containerd[1484]: time="2025-05-08T00:40:52.442885046Z" level=info msg="StopPodSandbox for \"0f2de833dfefbd5720866b0bc3d59de40dcc8bf98d512cffff4abe4f136357df\"" May 8 00:40:52.443508 containerd[1484]: time="2025-05-08T00:40:52.443058657Z" level=info msg="Ensure that sandbox 0f2de833dfefbd5720866b0bc3d59de40dcc8bf98d512cffff4abe4f136357df in task-service has been cleanup successfully" May 8 00:40:52.443555 kubelet[2686]: E0508 00:40:52.443111 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:52.444285 containerd[1484]: time="2025-05-08T00:40:52.444253078Z" level=info msg="TearDown network for sandbox \"0f2de833dfefbd5720866b0bc3d59de40dcc8bf98d512cffff4abe4f136357df\" successfully" May 8 00:40:52.444285 containerd[1484]: time="2025-05-08T00:40:52.444276698Z" level=info msg="StopPodSandbox for \"0f2de833dfefbd5720866b0bc3d59de40dcc8bf98d512cffff4abe4f136357df\" returns successfully" May 8 00:40:52.444821 containerd[1484]: time="2025-05-08T00:40:52.444799814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ndt8s,Uid:b968d45f-0186-4bf1-af0b-3789d578367b,Namespace:kube-system,Attempt:4,}" May 8 00:40:52.446182 containerd[1484]: time="2025-05-08T00:40:52.446057205Z" level=info msg="StopPodSandbox for \"ec6ff8a70cbba35fa6e95e096678c2bdd0d7d5f80a4f60615af90b7503e5e0cc\"" May 8 00:40:52.446182 containerd[1484]: time="2025-05-08T00:40:52.446156426Z" level=info msg="TearDown network for sandbox \"ec6ff8a70cbba35fa6e95e096678c2bdd0d7d5f80a4f60615af90b7503e5e0cc\" successfully" May 8 00:40:52.446182 containerd[1484]: time="2025-05-08T00:40:52.446166656Z" level=info msg="StopPodSandbox for \"ec6ff8a70cbba35fa6e95e096678c2bdd0d7d5f80a4f60615af90b7503e5e0cc\" returns successfully" May 8 00:40:52.447554 containerd[1484]: time="2025-05-08T00:40:52.447528599Z" level=info msg="StopPodSandbox for \"254038beb8ae5d079f77bb4cc30649336b0ab3f4ded1b86b5b65e35d4a85d7c4\"" May 8 00:40:52.447645 containerd[1484]: time="2025-05-08T00:40:52.447612140Z" level=info msg="TearDown network for sandbox \"254038beb8ae5d079f77bb4cc30649336b0ab3f4ded1b86b5b65e35d4a85d7c4\" successfully" May 8 00:40:52.447645 containerd[1484]: time="2025-05-08T00:40:52.447632010Z" level=info msg="StopPodSandbox for \"254038beb8ae5d079f77bb4cc30649336b0ab3f4ded1b86b5b65e35d4a85d7c4\" returns successfully" May 8 00:40:52.448388 containerd[1484]: time="2025-05-08T00:40:52.448369186Z" level=info msg="StopPodSandbox for \"1a5ec6334fbdcfbfe62e19da00bca8aef58ac57622a4388aeed2a44d18bc94a8\"" May 8 00:40:52.448614 containerd[1484]: time="2025-05-08T00:40:52.448589569Z" level=info msg="TearDown network for sandbox \"1a5ec6334fbdcfbfe62e19da00bca8aef58ac57622a4388aeed2a44d18bc94a8\" successfully" May 8 00:40:52.448757 containerd[1484]: time="2025-05-08T00:40:52.448656459Z" level=info msg="StopPodSandbox for \"1a5ec6334fbdcfbfe62e19da00bca8aef58ac57622a4388aeed2a44d18bc94a8\" returns successfully" May 8 00:40:52.449037 kubelet[2686]: E0508 00:40:52.448924 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 
00:40:52.449441 containerd[1484]: time="2025-05-08T00:40:52.449346465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t5sjv,Uid:1d395b40-74ec-4d21-9505-050a6c6424b9,Namespace:kube-system,Attempt:4,}" May 8 00:40:52.452556 kubelet[2686]: E0508 00:40:52.452522 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:52.462370 kubelet[2686]: I0508 00:40:52.462332 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7068f82f3e53ecbf49d46a9f7e05e37c404556af594a559eafcca79e77513f2" May 8 00:40:52.464370 containerd[1484]: time="2025-05-08T00:40:52.464337764Z" level=info msg="StopPodSandbox for \"d7068f82f3e53ecbf49d46a9f7e05e37c404556af594a559eafcca79e77513f2\"" May 8 00:40:52.464671 containerd[1484]: time="2025-05-08T00:40:52.464639897Z" level=info msg="Ensure that sandbox d7068f82f3e53ecbf49d46a9f7e05e37c404556af594a559eafcca79e77513f2 in task-service has been cleanup successfully" May 8 00:40:52.469226 containerd[1484]: time="2025-05-08T00:40:52.466802497Z" level=info msg="TearDown network for sandbox \"d7068f82f3e53ecbf49d46a9f7e05e37c404556af594a559eafcca79e77513f2\" successfully" May 8 00:40:52.469226 containerd[1484]: time="2025-05-08T00:40:52.466823637Z" level=info msg="StopPodSandbox for \"d7068f82f3e53ecbf49d46a9f7e05e37c404556af594a559eafcca79e77513f2\" returns successfully" May 8 00:40:52.469752 containerd[1484]: time="2025-05-08T00:40:52.469621653Z" level=info msg="StopPodSandbox for \"6dfad420ea9956b0a4b0082dddb85016cd6d1d8d863452fcd9fd2431dcb13dbd\"" May 8 00:40:52.469752 containerd[1484]: time="2025-05-08T00:40:52.469696204Z" level=info msg="TearDown network for sandbox \"6dfad420ea9956b0a4b0082dddb85016cd6d1d8d863452fcd9fd2431dcb13dbd\" successfully" May 8 00:40:52.469752 containerd[1484]: time="2025-05-08T00:40:52.469706994Z" level=info msg="StopPodSandbox for \"6dfad420ea9956b0a4b0082dddb85016cd6d1d8d863452fcd9fd2431dcb13dbd\" returns successfully" May 8 00:40:52.470771 containerd[1484]: time="2025-05-08T00:40:52.470425820Z" level=info msg="StopPodSandbox for \"630007c92a4d1ebcdf28586d606b3e4f216eec5ec510cadc3cc9d562df67cf0a\"" May 8 00:40:52.470771 containerd[1484]: time="2025-05-08T00:40:52.470537831Z" level=info msg="TearDown network for sandbox \"630007c92a4d1ebcdf28586d606b3e4f216eec5ec510cadc3cc9d562df67cf0a\" successfully" May 8 00:40:52.470771 containerd[1484]: time="2025-05-08T00:40:52.470550341Z" level=info msg="StopPodSandbox for \"630007c92a4d1ebcdf28586d606b3e4f216eec5ec510cadc3cc9d562df67cf0a\" returns successfully" May 8 00:40:52.473144 containerd[1484]: time="2025-05-08T00:40:52.472769952Z" level=info msg="StopPodSandbox for \"7215bbf85862b33af582b682e67e348d877731a8d7124c168f8bf683231e2b75\"" May 8 00:40:52.473878 containerd[1484]: time="2025-05-08T00:40:52.473713561Z" level=info msg="TearDown network for sandbox \"7215bbf85862b33af582b682e67e348d877731a8d7124c168f8bf683231e2b75\" successfully" May 8 00:40:52.473878 containerd[1484]: time="2025-05-08T00:40:52.473736391Z" level=info msg="StopPodSandbox for \"7215bbf85862b33af582b682e67e348d877731a8d7124c168f8bf683231e2b75\" returns successfully" May 8 00:40:52.474544 containerd[1484]: time="2025-05-08T00:40:52.474327126Z" level=info msg="StopPodSandbox for \"4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2\"" May 8 00:40:52.474544 containerd[1484]: 
time="2025-05-08T00:40:52.474399327Z" level=info msg="TearDown network for sandbox \"4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2\" successfully" May 8 00:40:52.474544 containerd[1484]: time="2025-05-08T00:40:52.474408717Z" level=info msg="StopPodSandbox for \"4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2\" returns successfully" May 8 00:40:52.477324 containerd[1484]: time="2025-05-08T00:40:52.476409275Z" level=info msg="StopPodSandbox for \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\"" May 8 00:40:52.477324 containerd[1484]: time="2025-05-08T00:40:52.476490996Z" level=info msg="TearDown network for sandbox \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\" successfully" May 8 00:40:52.477324 containerd[1484]: time="2025-05-08T00:40:52.476501806Z" level=info msg="StopPodSandbox for \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\" returns successfully" May 8 00:40:52.478574 kubelet[2686]: I0508 00:40:52.478432 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5b6cj" podStartSLOduration=1.988359423 podStartE2EDuration="11.478417544s" podCreationTimestamp="2025-05-08 00:40:41 +0000 UTC" firstStartedPulling="2025-05-08 00:40:41.978189402 +0000 UTC m=+19.947583101" lastFinishedPulling="2025-05-08 00:40:51.468247523 +0000 UTC m=+29.437641222" observedRunningTime="2025-05-08 00:40:52.478137881 +0000 UTC m=+30.447531580" watchObservedRunningTime="2025-05-08 00:40:52.478417544 +0000 UTC m=+30.447811253" May 8 00:40:52.481005 kubelet[2686]: I0508 00:40:52.480908 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ca2b85783fd6bd4f652f2f90f40057742a83347d2d35c8abcb30eaf7f454804" May 8 00:40:52.482760 containerd[1484]: time="2025-05-08T00:40:52.482663633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95f5468f8-zknsp,Uid:a2974be7-7581-4fce-a16e-15f650ba010f,Namespace:calico-apiserver,Attempt:6,}" May 8 00:40:52.483913 containerd[1484]: time="2025-05-08T00:40:52.483736823Z" level=info msg="StopPodSandbox for \"1ca2b85783fd6bd4f652f2f90f40057742a83347d2d35c8abcb30eaf7f454804\"" May 8 00:40:52.483913 containerd[1484]: time="2025-05-08T00:40:52.483895405Z" level=info msg="Ensure that sandbox 1ca2b85783fd6bd4f652f2f90f40057742a83347d2d35c8abcb30eaf7f454804 in task-service has been cleanup successfully" May 8 00:40:52.484917 containerd[1484]: time="2025-05-08T00:40:52.484898674Z" level=info msg="TearDown network for sandbox \"1ca2b85783fd6bd4f652f2f90f40057742a83347d2d35c8abcb30eaf7f454804\" successfully" May 8 00:40:52.485066 containerd[1484]: time="2025-05-08T00:40:52.484995184Z" level=info msg="StopPodSandbox for \"1ca2b85783fd6bd4f652f2f90f40057742a83347d2d35c8abcb30eaf7f454804\" returns successfully" May 8 00:40:52.489539 containerd[1484]: time="2025-05-08T00:40:52.489505186Z" level=info msg="StopPodSandbox for \"e33b4a4dc6d80c3a1fde4dc5c0d78c0176524e85e8d04cad28e813b7e768ec5b\"" May 8 00:40:52.489770 containerd[1484]: time="2025-05-08T00:40:52.489686788Z" level=info msg="TearDown network for sandbox \"e33b4a4dc6d80c3a1fde4dc5c0d78c0176524e85e8d04cad28e813b7e768ec5b\" successfully" May 8 00:40:52.489770 containerd[1484]: time="2025-05-08T00:40:52.489701408Z" level=info msg="StopPodSandbox for \"e33b4a4dc6d80c3a1fde4dc5c0d78c0176524e85e8d04cad28e813b7e768ec5b\" returns successfully" May 8 00:40:52.491354 containerd[1484]: time="2025-05-08T00:40:52.491319643Z" level=info 
msg="StopPodSandbox for \"39c46626994dee89b35579716b77678b6012b531c8188c994c37595bb6d31d93\"" May 8 00:40:52.491512 containerd[1484]: time="2025-05-08T00:40:52.491407214Z" level=info msg="TearDown network for sandbox \"39c46626994dee89b35579716b77678b6012b531c8188c994c37595bb6d31d93\" successfully" May 8 00:40:52.491512 containerd[1484]: time="2025-05-08T00:40:52.491418094Z" level=info msg="StopPodSandbox for \"39c46626994dee89b35579716b77678b6012b531c8188c994c37595bb6d31d93\" returns successfully" May 8 00:40:52.491804 containerd[1484]: time="2025-05-08T00:40:52.491777188Z" level=info msg="StopPodSandbox for \"9937260851b563393977e3e75f5403e25c4987dfae8dbf9b01dbb4ee076a77d5\"" May 8 00:40:52.491944 containerd[1484]: time="2025-05-08T00:40:52.491855298Z" level=info msg="TearDown network for sandbox \"9937260851b563393977e3e75f5403e25c4987dfae8dbf9b01dbb4ee076a77d5\" successfully" May 8 00:40:52.491944 containerd[1484]: time="2025-05-08T00:40:52.491864779Z" level=info msg="StopPodSandbox for \"9937260851b563393977e3e75f5403e25c4987dfae8dbf9b01dbb4ee076a77d5\" returns successfully" May 8 00:40:52.492938 containerd[1484]: time="2025-05-08T00:40:52.492879268Z" level=info msg="StopPodSandbox for \"0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b\"" May 8 00:40:52.493020 containerd[1484]: time="2025-05-08T00:40:52.492984399Z" level=info msg="TearDown network for sandbox \"0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b\" successfully" May 8 00:40:52.493020 containerd[1484]: time="2025-05-08T00:40:52.492999098Z" level=info msg="StopPodSandbox for \"0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b\" returns successfully" May 8 00:40:52.495596 containerd[1484]: time="2025-05-08T00:40:52.495573442Z" level=info msg="StopPodSandbox for \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\"" May 8 00:40:52.495829 containerd[1484]: time="2025-05-08T00:40:52.495810605Z" level=info msg="TearDown network for sandbox \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\" successfully" May 8 00:40:52.495882 containerd[1484]: time="2025-05-08T00:40:52.495869545Z" level=info msg="StopPodSandbox for \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\" returns successfully" May 8 00:40:52.496908 kubelet[2686]: I0508 00:40:52.496886 2686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="491563f13a9bd95b4394d1b2dd8037bc2e59325d908479d3a7435078526c88fe" May 8 00:40:52.499346 containerd[1484]: time="2025-05-08T00:40:52.499322377Z" level=info msg="StopPodSandbox for \"491563f13a9bd95b4394d1b2dd8037bc2e59325d908479d3a7435078526c88fe\"" May 8 00:40:52.499648 containerd[1484]: time="2025-05-08T00:40:52.499416968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95f5468f8-vgm4z,Uid:e43f4851-92c0-4238-8905-f3f57d62dc20,Namespace:calico-apiserver,Attempt:6,}" May 8 00:40:52.499833 containerd[1484]: time="2025-05-08T00:40:52.499625840Z" level=info msg="Ensure that sandbox 491563f13a9bd95b4394d1b2dd8037bc2e59325d908479d3a7435078526c88fe in task-service has been cleanup successfully" May 8 00:40:52.500947 containerd[1484]: time="2025-05-08T00:40:52.500911972Z" level=info msg="TearDown network for sandbox \"491563f13a9bd95b4394d1b2dd8037bc2e59325d908479d3a7435078526c88fe\" successfully" May 8 00:40:52.501032 containerd[1484]: time="2025-05-08T00:40:52.501018662Z" level=info msg="StopPodSandbox for \"491563f13a9bd95b4394d1b2dd8037bc2e59325d908479d3a7435078526c88fe\" returns 
successfully" May 8 00:40:52.501408 containerd[1484]: time="2025-05-08T00:40:52.501390846Z" level=info msg="StopPodSandbox for \"c5ad96dfab1a745068eb1a3a66b475a1725feba97337b2c33ade11c962c3a174\"" May 8 00:40:52.501682 containerd[1484]: time="2025-05-08T00:40:52.501666949Z" level=info msg="TearDown network for sandbox \"c5ad96dfab1a745068eb1a3a66b475a1725feba97337b2c33ade11c962c3a174\" successfully" May 8 00:40:52.501804 containerd[1484]: time="2025-05-08T00:40:52.501787560Z" level=info msg="StopPodSandbox for \"c5ad96dfab1a745068eb1a3a66b475a1725feba97337b2c33ade11c962c3a174\" returns successfully" May 8 00:40:52.502910 containerd[1484]: time="2025-05-08T00:40:52.502891490Z" level=info msg="StopPodSandbox for \"f0296a803419be9d1d07c49fe0f38383fa7b4a6358bd79cd42ebf747d905f976\"" May 8 00:40:52.503587 containerd[1484]: time="2025-05-08T00:40:52.503568316Z" level=info msg="TearDown network for sandbox \"f0296a803419be9d1d07c49fe0f38383fa7b4a6358bd79cd42ebf747d905f976\" successfully" May 8 00:40:52.504237 containerd[1484]: time="2025-05-08T00:40:52.504191132Z" level=info msg="StopPodSandbox for \"f0296a803419be9d1d07c49fe0f38383fa7b4a6358bd79cd42ebf747d905f976\" returns successfully" May 8 00:40:52.505176 containerd[1484]: time="2025-05-08T00:40:52.505144860Z" level=info msg="StopPodSandbox for \"b24804b59f7550a33579c0cf26310cbe31aa450307391cfd683dc6aeace768b1\"" May 8 00:40:52.505457 containerd[1484]: time="2025-05-08T00:40:52.505424973Z" level=info msg="TearDown network for sandbox \"b24804b59f7550a33579c0cf26310cbe31aa450307391cfd683dc6aeace768b1\" successfully" May 8 00:40:52.505457 containerd[1484]: time="2025-05-08T00:40:52.505445433Z" level=info msg="StopPodSandbox for \"b24804b59f7550a33579c0cf26310cbe31aa450307391cfd683dc6aeace768b1\" returns successfully" May 8 00:40:52.511466 containerd[1484]: time="2025-05-08T00:40:52.511446089Z" level=info msg="StopPodSandbox for \"0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8\"" May 8 00:40:52.513399 containerd[1484]: time="2025-05-08T00:40:52.513348106Z" level=info msg="TearDown network for sandbox \"0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8\" successfully" May 8 00:40:52.513470 containerd[1484]: time="2025-05-08T00:40:52.513454487Z" level=info msg="StopPodSandbox for \"0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8\" returns successfully" May 8 00:40:52.513876 containerd[1484]: time="2025-05-08T00:40:52.513852522Z" level=info msg="StopPodSandbox for \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\"" May 8 00:40:52.514043 containerd[1484]: time="2025-05-08T00:40:52.514027992Z" level=info msg="TearDown network for sandbox \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\" successfully" May 8 00:40:52.514127 containerd[1484]: time="2025-05-08T00:40:52.514114293Z" level=info msg="StopPodSandbox for \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\" returns successfully" May 8 00:40:52.516752 containerd[1484]: time="2025-05-08T00:40:52.516727688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q8q6q,Uid:ae949d8a-9850-4b3f-b127-0cc79fb660b3,Namespace:calico-system,Attempt:6,}" May 8 00:40:52.688103 systemd[1]: run-netns-cni\x2d611104f5\x2d9670\x2ddb68\x2d61f1\x2daa793a3ba1a4.mount: Deactivated successfully. May 8 00:40:52.688227 systemd[1]: run-netns-cni\x2d53f1df00\x2d8f57\x2d3336\x2d9023\x2d37e21f247a61.mount: Deactivated successfully. 
May 8 00:40:52.688303 systemd[1]: run-netns-cni\x2dc5d20de6\x2d9b0f\x2d9692\x2dff16\x2d6a0c42228b51.mount: Deactivated successfully. May 8 00:40:52.688369 systemd[1]: run-netns-cni\x2dbc0d946f\x2d6f25\x2d886b\x2de729\x2d743e1f70a6b6.mount: Deactivated successfully. May 8 00:40:52.688434 systemd[1]: run-netns-cni\x2df201f5c5\x2d7bf5\x2d3ad0\x2d31e1\x2d9f7a2f2df043.mount: Deactivated successfully. May 8 00:40:52.688497 systemd[1]: run-netns-cni\x2dd19fe923\x2d3bfa\x2df403\x2d0c26\x2d2075ea515a29.mount: Deactivated successfully. May 8 00:40:52.791258 systemd-networkd[1384]: cali3b9248e6221: Link UP May 8 00:40:52.791524 systemd-networkd[1384]: cali3b9248e6221: Gained carrier May 8 00:40:52.814584 systemd-networkd[1384]: cali45ac490a2d2: Link UP May 8 00:40:52.814799 systemd-networkd[1384]: cali45ac490a2d2: Gained carrier May 8 00:40:52.830734 containerd[1484]: 2025-05-08 00:40:52.565 [INFO][4512] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 8 00:40:52.830734 containerd[1484]: 2025-05-08 00:40:52.590 [INFO][4512] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--145--97-k8s-calico--kube--controllers--66457cb4b--4cpwk-eth0 calico-kube-controllers-66457cb4b- calico-system 210d2f6d-8bde-4f98-93d8-48808afe079f 687 0 2025-05-08 00:40:41 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:66457cb4b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-237-145-97 calico-kube-controllers-66457cb4b-4cpwk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3b9248e6221 [] []}} ContainerID="89e248803626d8ea9452a226f781fae39132906ca46d4e93ecf40d7a118e6613" Namespace="calico-system" Pod="calico-kube-controllers-66457cb4b-4cpwk" WorkloadEndpoint="172--237--145--97-k8s-calico--kube--controllers--66457cb4b--4cpwk-" May 8 00:40:52.830734 containerd[1484]: 2025-05-08 00:40:52.590 [INFO][4512] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="89e248803626d8ea9452a226f781fae39132906ca46d4e93ecf40d7a118e6613" Namespace="calico-system" Pod="calico-kube-controllers-66457cb4b-4cpwk" WorkloadEndpoint="172--237--145--97-k8s-calico--kube--controllers--66457cb4b--4cpwk-eth0" May 8 00:40:52.830734 containerd[1484]: 2025-05-08 00:40:52.668 [INFO][4578] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89e248803626d8ea9452a226f781fae39132906ca46d4e93ecf40d7a118e6613" HandleID="k8s-pod-network.89e248803626d8ea9452a226f781fae39132906ca46d4e93ecf40d7a118e6613" Workload="172--237--145--97-k8s-calico--kube--controllers--66457cb4b--4cpwk-eth0" May 8 00:40:52.830734 containerd[1484]: 2025-05-08 00:40:52.695 [INFO][4578] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="89e248803626d8ea9452a226f781fae39132906ca46d4e93ecf40d7a118e6613" HandleID="k8s-pod-network.89e248803626d8ea9452a226f781fae39132906ca46d4e93ecf40d7a118e6613" Workload="172--237--145--97-k8s-calico--kube--controllers--66457cb4b--4cpwk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051f50), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-145-97", "pod":"calico-kube-controllers-66457cb4b-4cpwk", "timestamp":"2025-05-08 00:40:52.668522099 +0000 UTC"}, Hostname:"172-237-145-97", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:40:52.830734 containerd[1484]: 2025-05-08 00:40:52.696 [INFO][4578] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:52.830734 containerd[1484]: 2025-05-08 00:40:52.696 [INFO][4578] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:52.830734 containerd[1484]: 2025-05-08 00:40:52.696 [INFO][4578] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-145-97' May 8 00:40:52.830734 containerd[1484]: 2025-05-08 00:40:52.699 [INFO][4578] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.89e248803626d8ea9452a226f781fae39132906ca46d4e93ecf40d7a118e6613" host="172-237-145-97" May 8 00:40:52.830734 containerd[1484]: 2025-05-08 00:40:52.705 [INFO][4578] ipam/ipam.go 372: Looking up existing affinities for host host="172-237-145-97" May 8 00:40:52.830734 containerd[1484]: 2025-05-08 00:40:52.711 [INFO][4578] ipam/ipam.go 489: Trying affinity for 192.168.87.128/26 host="172-237-145-97" May 8 00:40:52.830734 containerd[1484]: 2025-05-08 00:40:52.715 [INFO][4578] ipam/ipam.go 155: Attempting to load block cidr=192.168.87.128/26 host="172-237-145-97" May 8 00:40:52.830734 containerd[1484]: 2025-05-08 00:40:52.717 [INFO][4578] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.87.128/26 host="172-237-145-97" May 8 00:40:52.830734 containerd[1484]: 2025-05-08 00:40:52.717 [INFO][4578] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.87.128/26 handle="k8s-pod-network.89e248803626d8ea9452a226f781fae39132906ca46d4e93ecf40d7a118e6613" host="172-237-145-97" May 8 00:40:52.830734 containerd[1484]: 2025-05-08 00:40:52.720 [INFO][4578] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.89e248803626d8ea9452a226f781fae39132906ca46d4e93ecf40d7a118e6613 May 8 00:40:52.830734 containerd[1484]: 2025-05-08 00:40:52.727 [INFO][4578] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.87.128/26 handle="k8s-pod-network.89e248803626d8ea9452a226f781fae39132906ca46d4e93ecf40d7a118e6613" host="172-237-145-97" May 8 00:40:52.830734 containerd[1484]: 2025-05-08 00:40:52.738 [INFO][4578] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.87.129/26] block=192.168.87.128/26 handle="k8s-pod-network.89e248803626d8ea9452a226f781fae39132906ca46d4e93ecf40d7a118e6613" host="172-237-145-97" May 8 00:40:52.830734 containerd[1484]: 2025-05-08 00:40:52.738 [INFO][4578] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.87.129/26] handle="k8s-pod-network.89e248803626d8ea9452a226f781fae39132906ca46d4e93ecf40d7a118e6613" host="172-237-145-97" May 8 00:40:52.830734 containerd[1484]: 2025-05-08 00:40:52.738 [INFO][4578] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
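The IPAM trace above shows this node holding an affinity for the block 192.168.87.128/26 and claiming 192.168.87.129 for calico-kube-controllers-66457cb4b-4cpwk. A /26 spans 64 addresses (192.168.87.128 through 192.168.87.191); the standalone snippet below just verifies that the assigned address sits inside that block.

```go
// ipam_block.go - standalone check that the address assigned above
// (192.168.87.129) lies inside the affine block 192.168.87.128/26.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.87.128/26") // block from the IPAM log
	assigned := netip.MustParseAddr("192.168.87.129")   // IP claimed for the pod

	fmt.Printf("block %s holds %d addresses\n", block, 1<<(32-block.Bits()))
	fmt.Printf("%s in block: %v\n", assigned, block.Contains(assigned))
}
```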
May 8 00:40:52.830734 containerd[1484]: 2025-05-08 00:40:52.738 [INFO][4578] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.129/26] IPv6=[] ContainerID="89e248803626d8ea9452a226f781fae39132906ca46d4e93ecf40d7a118e6613" HandleID="k8s-pod-network.89e248803626d8ea9452a226f781fae39132906ca46d4e93ecf40d7a118e6613" Workload="172--237--145--97-k8s-calico--kube--controllers--66457cb4b--4cpwk-eth0" May 8 00:40:52.833797 containerd[1484]: 2025-05-08 00:40:52.766 [INFO][4512] cni-plugin/k8s.go 386: Populated endpoint ContainerID="89e248803626d8ea9452a226f781fae39132906ca46d4e93ecf40d7a118e6613" Namespace="calico-system" Pod="calico-kube-controllers-66457cb4b-4cpwk" WorkloadEndpoint="172--237--145--97-k8s-calico--kube--controllers--66457cb4b--4cpwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--97-k8s-calico--kube--controllers--66457cb4b--4cpwk-eth0", GenerateName:"calico-kube-controllers-66457cb4b-", Namespace:"calico-system", SelfLink:"", UID:"210d2f6d-8bde-4f98-93d8-48808afe079f", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66457cb4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-97", ContainerID:"", Pod:"calico-kube-controllers-66457cb4b-4cpwk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3b9248e6221", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:52.833797 containerd[1484]: 2025-05-08 00:40:52.769 [INFO][4512] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.87.129/32] ContainerID="89e248803626d8ea9452a226f781fae39132906ca46d4e93ecf40d7a118e6613" Namespace="calico-system" Pod="calico-kube-controllers-66457cb4b-4cpwk" WorkloadEndpoint="172--237--145--97-k8s-calico--kube--controllers--66457cb4b--4cpwk-eth0" May 8 00:40:52.833797 containerd[1484]: 2025-05-08 00:40:52.769 [INFO][4512] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b9248e6221 ContainerID="89e248803626d8ea9452a226f781fae39132906ca46d4e93ecf40d7a118e6613" Namespace="calico-system" Pod="calico-kube-controllers-66457cb4b-4cpwk" WorkloadEndpoint="172--237--145--97-k8s-calico--kube--controllers--66457cb4b--4cpwk-eth0" May 8 00:40:52.833797 containerd[1484]: 2025-05-08 00:40:52.789 [INFO][4512] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89e248803626d8ea9452a226f781fae39132906ca46d4e93ecf40d7a118e6613" Namespace="calico-system" Pod="calico-kube-controllers-66457cb4b-4cpwk" WorkloadEndpoint="172--237--145--97-k8s-calico--kube--controllers--66457cb4b--4cpwk-eth0" May 8 00:40:52.833797 containerd[1484]: 2025-05-08 00:40:52.789 [INFO][4512] cni-plugin/k8s.go 414: Added Mac, interface 
name, and active container ID to endpoint ContainerID="89e248803626d8ea9452a226f781fae39132906ca46d4e93ecf40d7a118e6613" Namespace="calico-system" Pod="calico-kube-controllers-66457cb4b-4cpwk" WorkloadEndpoint="172--237--145--97-k8s-calico--kube--controllers--66457cb4b--4cpwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--97-k8s-calico--kube--controllers--66457cb4b--4cpwk-eth0", GenerateName:"calico-kube-controllers-66457cb4b-", Namespace:"calico-system", SelfLink:"", UID:"210d2f6d-8bde-4f98-93d8-48808afe079f", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66457cb4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-97", ContainerID:"89e248803626d8ea9452a226f781fae39132906ca46d4e93ecf40d7a118e6613", Pod:"calico-kube-controllers-66457cb4b-4cpwk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3b9248e6221", MAC:"2e:25:98:68:fb:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:52.833797 containerd[1484]: 2025-05-08 00:40:52.809 [INFO][4512] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="89e248803626d8ea9452a226f781fae39132906ca46d4e93ecf40d7a118e6613" Namespace="calico-system" Pod="calico-kube-controllers-66457cb4b-4cpwk" WorkloadEndpoint="172--237--145--97-k8s-calico--kube--controllers--66457cb4b--4cpwk-eth0" May 8 00:40:52.839373 containerd[1484]: 2025-05-08 00:40:52.579 [INFO][4529] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 8 00:40:52.839373 containerd[1484]: 2025-05-08 00:40:52.598 [INFO][4529] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--145--97-k8s-coredns--7db6d8ff4d--ndt8s-eth0 coredns-7db6d8ff4d- kube-system b968d45f-0186-4bf1-af0b-3789d578367b 686 0 2025-05-08 00:40:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-237-145-97 coredns-7db6d8ff4d-ndt8s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali45ac490a2d2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="04e65c2750ea99e871fe968f21a4b579fcfd162b949dbcb6c7513b93c9b23b0f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ndt8s" WorkloadEndpoint="172--237--145--97-k8s-coredns--7db6d8ff4d--ndt8s-" May 8 00:40:52.839373 containerd[1484]: 2025-05-08 00:40:52.598 [INFO][4529] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="04e65c2750ea99e871fe968f21a4b579fcfd162b949dbcb6c7513b93c9b23b0f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ndt8s" 
WorkloadEndpoint="172--237--145--97-k8s-coredns--7db6d8ff4d--ndt8s-eth0" May 8 00:40:52.839373 containerd[1484]: 2025-05-08 00:40:52.726 [INFO][4584] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04e65c2750ea99e871fe968f21a4b579fcfd162b949dbcb6c7513b93c9b23b0f" HandleID="k8s-pod-network.04e65c2750ea99e871fe968f21a4b579fcfd162b949dbcb6c7513b93c9b23b0f" Workload="172--237--145--97-k8s-coredns--7db6d8ff4d--ndt8s-eth0" May 8 00:40:52.839373 containerd[1484]: 2025-05-08 00:40:52.739 [INFO][4584] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="04e65c2750ea99e871fe968f21a4b579fcfd162b949dbcb6c7513b93c9b23b0f" HandleID="k8s-pod-network.04e65c2750ea99e871fe968f21a4b579fcfd162b949dbcb6c7513b93c9b23b0f" Workload="172--237--145--97-k8s-coredns--7db6d8ff4d--ndt8s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000512a0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-237-145-97", "pod":"coredns-7db6d8ff4d-ndt8s", "timestamp":"2025-05-08 00:40:52.72380231 +0000 UTC"}, Hostname:"172-237-145-97", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:40:52.839373 containerd[1484]: 2025-05-08 00:40:52.739 [INFO][4584] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:52.839373 containerd[1484]: 2025-05-08 00:40:52.739 [INFO][4584] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:52.839373 containerd[1484]: 2025-05-08 00:40:52.740 [INFO][4584] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-145-97' May 8 00:40:52.839373 containerd[1484]: 2025-05-08 00:40:52.742 [INFO][4584] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.04e65c2750ea99e871fe968f21a4b579fcfd162b949dbcb6c7513b93c9b23b0f" host="172-237-145-97" May 8 00:40:52.839373 containerd[1484]: 2025-05-08 00:40:52.748 [INFO][4584] ipam/ipam.go 372: Looking up existing affinities for host host="172-237-145-97" May 8 00:40:52.839373 containerd[1484]: 2025-05-08 00:40:52.754 [INFO][4584] ipam/ipam.go 489: Trying affinity for 192.168.87.128/26 host="172-237-145-97" May 8 00:40:52.839373 containerd[1484]: 2025-05-08 00:40:52.758 [INFO][4584] ipam/ipam.go 155: Attempting to load block cidr=192.168.87.128/26 host="172-237-145-97" May 8 00:40:52.839373 containerd[1484]: 2025-05-08 00:40:52.764 [INFO][4584] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.87.128/26 host="172-237-145-97" May 8 00:40:52.839373 containerd[1484]: 2025-05-08 00:40:52.765 [INFO][4584] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.87.128/26 handle="k8s-pod-network.04e65c2750ea99e871fe968f21a4b579fcfd162b949dbcb6c7513b93c9b23b0f" host="172-237-145-97" May 8 00:40:52.839373 containerd[1484]: 2025-05-08 00:40:52.767 [INFO][4584] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.04e65c2750ea99e871fe968f21a4b579fcfd162b949dbcb6c7513b93c9b23b0f May 8 00:40:52.839373 containerd[1484]: 2025-05-08 00:40:52.780 [INFO][4584] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.87.128/26 handle="k8s-pod-network.04e65c2750ea99e871fe968f21a4b579fcfd162b949dbcb6c7513b93c9b23b0f" host="172-237-145-97" May 8 00:40:52.839373 containerd[1484]: 2025-05-08 00:40:52.795 [INFO][4584] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.87.130/26] block=192.168.87.128/26 
handle="k8s-pod-network.04e65c2750ea99e871fe968f21a4b579fcfd162b949dbcb6c7513b93c9b23b0f" host="172-237-145-97" May 8 00:40:52.839373 containerd[1484]: 2025-05-08 00:40:52.796 [INFO][4584] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.87.130/26] handle="k8s-pod-network.04e65c2750ea99e871fe968f21a4b579fcfd162b949dbcb6c7513b93c9b23b0f" host="172-237-145-97" May 8 00:40:52.839373 containerd[1484]: 2025-05-08 00:40:52.798 [INFO][4584] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:52.839373 containerd[1484]: 2025-05-08 00:40:52.798 [INFO][4584] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.130/26] IPv6=[] ContainerID="04e65c2750ea99e871fe968f21a4b579fcfd162b949dbcb6c7513b93c9b23b0f" HandleID="k8s-pod-network.04e65c2750ea99e871fe968f21a4b579fcfd162b949dbcb6c7513b93c9b23b0f" Workload="172--237--145--97-k8s-coredns--7db6d8ff4d--ndt8s-eth0" May 8 00:40:52.839814 containerd[1484]: 2025-05-08 00:40:52.811 [INFO][4529] cni-plugin/k8s.go 386: Populated endpoint ContainerID="04e65c2750ea99e871fe968f21a4b579fcfd162b949dbcb6c7513b93c9b23b0f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ndt8s" WorkloadEndpoint="172--237--145--97-k8s-coredns--7db6d8ff4d--ndt8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--97-k8s-coredns--7db6d8ff4d--ndt8s-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b968d45f-0186-4bf1-af0b-3789d578367b", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-97", ContainerID:"", Pod:"coredns-7db6d8ff4d-ndt8s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali45ac490a2d2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:52.839814 containerd[1484]: 2025-05-08 00:40:52.811 [INFO][4529] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.87.130/32] ContainerID="04e65c2750ea99e871fe968f21a4b579fcfd162b949dbcb6c7513b93c9b23b0f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ndt8s" WorkloadEndpoint="172--237--145--97-k8s-coredns--7db6d8ff4d--ndt8s-eth0" May 8 00:40:52.839814 containerd[1484]: 2025-05-08 00:40:52.811 [INFO][4529] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali45ac490a2d2 
ContainerID="04e65c2750ea99e871fe968f21a4b579fcfd162b949dbcb6c7513b93c9b23b0f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ndt8s" WorkloadEndpoint="172--237--145--97-k8s-coredns--7db6d8ff4d--ndt8s-eth0" May 8 00:40:52.839814 containerd[1484]: 2025-05-08 00:40:52.815 [INFO][4529] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="04e65c2750ea99e871fe968f21a4b579fcfd162b949dbcb6c7513b93c9b23b0f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ndt8s" WorkloadEndpoint="172--237--145--97-k8s-coredns--7db6d8ff4d--ndt8s-eth0" May 8 00:40:52.839814 containerd[1484]: 2025-05-08 00:40:52.819 [INFO][4529] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="04e65c2750ea99e871fe968f21a4b579fcfd162b949dbcb6c7513b93c9b23b0f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ndt8s" WorkloadEndpoint="172--237--145--97-k8s-coredns--7db6d8ff4d--ndt8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--97-k8s-coredns--7db6d8ff4d--ndt8s-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b968d45f-0186-4bf1-af0b-3789d578367b", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-97", ContainerID:"04e65c2750ea99e871fe968f21a4b579fcfd162b949dbcb6c7513b93c9b23b0f", Pod:"coredns-7db6d8ff4d-ndt8s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali45ac490a2d2", MAC:"fe:d0:d6:03:7c:24", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:52.839814 containerd[1484]: 2025-05-08 00:40:52.835 [INFO][4529] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="04e65c2750ea99e871fe968f21a4b579fcfd162b949dbcb6c7513b93c9b23b0f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ndt8s" WorkloadEndpoint="172--237--145--97-k8s-coredns--7db6d8ff4d--ndt8s-eth0" May 8 00:40:52.863729 systemd-networkd[1384]: cali39d6ca9206a: Link UP May 8 00:40:52.865042 systemd-networkd[1384]: cali39d6ca9206a: Gained carrier May 8 00:40:52.883920 containerd[1484]: time="2025-05-08T00:40:52.883566585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:52.883920 containerd[1484]: time="2025-05-08T00:40:52.883625796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:52.883920 containerd[1484]: time="2025-05-08T00:40:52.883639416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:52.883920 containerd[1484]: time="2025-05-08T00:40:52.883717496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:52.894331 containerd[1484]: 2025-05-08 00:40:52.539 [INFO][4520] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 8 00:40:52.894331 containerd[1484]: 2025-05-08 00:40:52.591 [INFO][4520] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--145--97-k8s-coredns--7db6d8ff4d--t5sjv-eth0 coredns-7db6d8ff4d- kube-system 1d395b40-74ec-4d21-9505-050a6c6424b9 685 0 2025-05-08 00:40:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-237-145-97 coredns-7db6d8ff4d-t5sjv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali39d6ca9206a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8311ba83d274118f2c20226d7554079e2bba2a111508989bf44a11b49f1f7ccb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t5sjv" WorkloadEndpoint="172--237--145--97-k8s-coredns--7db6d8ff4d--t5sjv-" May 8 00:40:52.894331 containerd[1484]: 2025-05-08 00:40:52.591 [INFO][4520] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8311ba83d274118f2c20226d7554079e2bba2a111508989bf44a11b49f1f7ccb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t5sjv" WorkloadEndpoint="172--237--145--97-k8s-coredns--7db6d8ff4d--t5sjv-eth0" May 8 00:40:52.894331 containerd[1484]: 2025-05-08 00:40:52.724 [INFO][4579] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8311ba83d274118f2c20226d7554079e2bba2a111508989bf44a11b49f1f7ccb" HandleID="k8s-pod-network.8311ba83d274118f2c20226d7554079e2bba2a111508989bf44a11b49f1f7ccb" Workload="172--237--145--97-k8s-coredns--7db6d8ff4d--t5sjv-eth0" May 8 00:40:52.894331 containerd[1484]: 2025-05-08 00:40:52.748 [INFO][4579] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8311ba83d274118f2c20226d7554079e2bba2a111508989bf44a11b49f1f7ccb" HandleID="k8s-pod-network.8311ba83d274118f2c20226d7554079e2bba2a111508989bf44a11b49f1f7ccb" Workload="172--237--145--97-k8s-coredns--7db6d8ff4d--t5sjv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00041e640), Attrs:map[string]string{"namespace":"kube-system", "node":"172-237-145-97", "pod":"coredns-7db6d8ff4d-t5sjv", "timestamp":"2025-05-08 00:40:52.72490472 +0000 UTC"}, Hostname:"172-237-145-97", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:40:52.894331 containerd[1484]: 2025-05-08 00:40:52.748 [INFO][4579] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 8 00:40:52.894331 containerd[1484]: 2025-05-08 00:40:52.797 [INFO][4579] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:52.894331 containerd[1484]: 2025-05-08 00:40:52.797 [INFO][4579] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-145-97' May 8 00:40:52.894331 containerd[1484]: 2025-05-08 00:40:52.800 [INFO][4579] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8311ba83d274118f2c20226d7554079e2bba2a111508989bf44a11b49f1f7ccb" host="172-237-145-97" May 8 00:40:52.894331 containerd[1484]: 2025-05-08 00:40:52.815 [INFO][4579] ipam/ipam.go 372: Looking up existing affinities for host host="172-237-145-97" May 8 00:40:52.894331 containerd[1484]: 2025-05-08 00:40:52.836 [INFO][4579] ipam/ipam.go 489: Trying affinity for 192.168.87.128/26 host="172-237-145-97" May 8 00:40:52.894331 containerd[1484]: 2025-05-08 00:40:52.838 [INFO][4579] ipam/ipam.go 155: Attempting to load block cidr=192.168.87.128/26 host="172-237-145-97" May 8 00:40:52.894331 containerd[1484]: 2025-05-08 00:40:52.842 [INFO][4579] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.87.128/26 host="172-237-145-97" May 8 00:40:52.894331 containerd[1484]: 2025-05-08 00:40:52.842 [INFO][4579] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.87.128/26 handle="k8s-pod-network.8311ba83d274118f2c20226d7554079e2bba2a111508989bf44a11b49f1f7ccb" host="172-237-145-97" May 8 00:40:52.894331 containerd[1484]: 2025-05-08 00:40:52.843 [INFO][4579] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8311ba83d274118f2c20226d7554079e2bba2a111508989bf44a11b49f1f7ccb May 8 00:40:52.894331 containerd[1484]: 2025-05-08 00:40:52.848 [INFO][4579] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.87.128/26 handle="k8s-pod-network.8311ba83d274118f2c20226d7554079e2bba2a111508989bf44a11b49f1f7ccb" host="172-237-145-97" May 8 00:40:52.894331 containerd[1484]: 2025-05-08 00:40:52.853 [INFO][4579] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.87.131/26] block=192.168.87.128/26 handle="k8s-pod-network.8311ba83d274118f2c20226d7554079e2bba2a111508989bf44a11b49f1f7ccb" host="172-237-145-97" May 8 00:40:52.894331 containerd[1484]: 2025-05-08 00:40:52.853 [INFO][4579] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.87.131/26] handle="k8s-pod-network.8311ba83d274118f2c20226d7554079e2bba2a111508989bf44a11b49f1f7ccb" host="172-237-145-97" May 8 00:40:52.894331 containerd[1484]: 2025-05-08 00:40:52.853 [INFO][4579] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
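The same acquire/assign/release pattern repeats for each pod, and because the host-wide IPAM lock serializes the allocations, the pods come out with consecutive addresses from the block: .129 (calico-kube-controllers), .130 (coredns-ndt8s), .131 (coredns-t5sjv). The toy allocator below only illustrates that serialization with an in-process mutex; Calico's real lock is datastore-backed and block-oriented, not a Go mutex.

```go
// ipam_lock.go - toy illustration of the acquire/assign/release pattern seen
// in the ipam_plugin.go entries above. Not Calico's implementation.
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

type blockAllocator struct {
	mu   sync.Mutex // stands in for the host-wide IPAM lock
	next netip.Addr
}

func (a *blockAllocator) assign(pod string) netip.Addr {
	a.mu.Lock() // "About to acquire host-wide IPAM lock."
	defer a.mu.Unlock()
	ip := a.next
	a.next = a.next.Next()
	fmt.Printf("assigned %s to %s\n", ip, pod)
	return ip // lock released on return ("Released host-wide IPAM lock.")
}

func main() {
	alloc := &blockAllocator{next: netip.MustParseAddr("192.168.87.129")}
	for _, pod := range []string{
		"calico-kube-controllers-66457cb4b-4cpwk",
		"coredns-7db6d8ff4d-ndt8s",
		"coredns-7db6d8ff4d-t5sjv",
	} {
		alloc.assign(pod)
	}
}
```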
May 8 00:40:52.894331 containerd[1484]: 2025-05-08 00:40:52.853 [INFO][4579] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.131/26] IPv6=[] ContainerID="8311ba83d274118f2c20226d7554079e2bba2a111508989bf44a11b49f1f7ccb" HandleID="k8s-pod-network.8311ba83d274118f2c20226d7554079e2bba2a111508989bf44a11b49f1f7ccb" Workload="172--237--145--97-k8s-coredns--7db6d8ff4d--t5sjv-eth0" May 8 00:40:52.894760 containerd[1484]: 2025-05-08 00:40:52.861 [INFO][4520] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8311ba83d274118f2c20226d7554079e2bba2a111508989bf44a11b49f1f7ccb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t5sjv" WorkloadEndpoint="172--237--145--97-k8s-coredns--7db6d8ff4d--t5sjv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--97-k8s-coredns--7db6d8ff4d--t5sjv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1d395b40-74ec-4d21-9505-050a6c6424b9", ResourceVersion:"685", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-97", ContainerID:"", Pod:"coredns-7db6d8ff4d-t5sjv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali39d6ca9206a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:52.894760 containerd[1484]: 2025-05-08 00:40:52.861 [INFO][4520] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.87.131/32] ContainerID="8311ba83d274118f2c20226d7554079e2bba2a111508989bf44a11b49f1f7ccb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t5sjv" WorkloadEndpoint="172--237--145--97-k8s-coredns--7db6d8ff4d--t5sjv-eth0" May 8 00:40:52.894760 containerd[1484]: 2025-05-08 00:40:52.861 [INFO][4520] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali39d6ca9206a ContainerID="8311ba83d274118f2c20226d7554079e2bba2a111508989bf44a11b49f1f7ccb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t5sjv" WorkloadEndpoint="172--237--145--97-k8s-coredns--7db6d8ff4d--t5sjv-eth0" May 8 00:40:52.894760 containerd[1484]: 2025-05-08 00:40:52.863 [INFO][4520] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8311ba83d274118f2c20226d7554079e2bba2a111508989bf44a11b49f1f7ccb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t5sjv" WorkloadEndpoint="172--237--145--97-k8s-coredns--7db6d8ff4d--t5sjv-eth0" May 8 00:40:52.894760 
containerd[1484]: 2025-05-08 00:40:52.867 [INFO][4520] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8311ba83d274118f2c20226d7554079e2bba2a111508989bf44a11b49f1f7ccb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t5sjv" WorkloadEndpoint="172--237--145--97-k8s-coredns--7db6d8ff4d--t5sjv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--97-k8s-coredns--7db6d8ff4d--t5sjv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1d395b40-74ec-4d21-9505-050a6c6424b9", ResourceVersion:"685", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-97", ContainerID:"8311ba83d274118f2c20226d7554079e2bba2a111508989bf44a11b49f1f7ccb", Pod:"coredns-7db6d8ff4d-t5sjv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali39d6ca9206a", MAC:"9e:d3:76:12:57:f4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:52.894760 containerd[1484]: 2025-05-08 00:40:52.877 [INFO][4520] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8311ba83d274118f2c20226d7554079e2bba2a111508989bf44a11b49f1f7ccb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t5sjv" WorkloadEndpoint="172--237--145--97-k8s-coredns--7db6d8ff4d--t5sjv-eth0" May 8 00:40:52.917733 systemd[1]: Started cri-containerd-89e248803626d8ea9452a226f781fae39132906ca46d4e93ecf40d7a118e6613.scope - libcontainer container 89e248803626d8ea9452a226f781fae39132906ca46d4e93ecf40d7a118e6613. 
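In the serialized WorkloadEndpoint structures above, the coredns ports are printed as hex numorstring values: Port:0x35 is 53 (the dns and dns-tcp ports) and Port:0x23c1 is 9153 (the coredns metrics port), consistent with the named port list earlier in the same entries. A trivial decode:

```go
// ports_hex.go - confirms the hex port values printed in the WorkloadEndpoint
// dumps above decode to the expected coredns ports.
package main

import "fmt"

func main() {
	ports := map[string]uint16{
		"dns":     0x35,   // 53/UDP
		"dns-tcp": 0x35,   // 53/TCP
		"metrics": 0x23c1, // 9153/TCP
	}
	for name, p := range ports {
		fmt.Printf("%-8s -> %d\n", name, p)
	}
}
```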
May 8 00:40:52.924652 systemd-networkd[1384]: cali1d8cba37c27: Link UP May 8 00:40:52.924866 systemd-networkd[1384]: cali1d8cba37c27: Gained carrier May 8 00:40:52.946414 containerd[1484]: 2025-05-08 00:40:52.603 [INFO][4549] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 8 00:40:52.946414 containerd[1484]: 2025-05-08 00:40:52.623 [INFO][4549] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--145--97-k8s-calico--apiserver--95f5468f8--vgm4z-eth0 calico-apiserver-95f5468f8- calico-apiserver e43f4851-92c0-4238-8905-f3f57d62dc20 688 0 2025-05-08 00:40:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:95f5468f8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-237-145-97 calico-apiserver-95f5468f8-vgm4z eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1d8cba37c27 [] []}} ContainerID="ee823bec2cbe8076c64ac6c4c4f4bca15df008e121a8c18ff0d51c186be542b8" Namespace="calico-apiserver" Pod="calico-apiserver-95f5468f8-vgm4z" WorkloadEndpoint="172--237--145--97-k8s-calico--apiserver--95f5468f8--vgm4z-" May 8 00:40:52.946414 containerd[1484]: 2025-05-08 00:40:52.623 [INFO][4549] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ee823bec2cbe8076c64ac6c4c4f4bca15df008e121a8c18ff0d51c186be542b8" Namespace="calico-apiserver" Pod="calico-apiserver-95f5468f8-vgm4z" WorkloadEndpoint="172--237--145--97-k8s-calico--apiserver--95f5468f8--vgm4z-eth0" May 8 00:40:52.946414 containerd[1484]: 2025-05-08 00:40:52.754 [INFO][4595] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee823bec2cbe8076c64ac6c4c4f4bca15df008e121a8c18ff0d51c186be542b8" HandleID="k8s-pod-network.ee823bec2cbe8076c64ac6c4c4f4bca15df008e121a8c18ff0d51c186be542b8" Workload="172--237--145--97-k8s-calico--apiserver--95f5468f8--vgm4z-eth0" May 8 00:40:52.946414 containerd[1484]: 2025-05-08 00:40:52.771 [INFO][4595] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ee823bec2cbe8076c64ac6c4c4f4bca15df008e121a8c18ff0d51c186be542b8" HandleID="k8s-pod-network.ee823bec2cbe8076c64ac6c4c4f4bca15df008e121a8c18ff0d51c186be542b8" Workload="172--237--145--97-k8s-calico--apiserver--95f5468f8--vgm4z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000374d80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-237-145-97", "pod":"calico-apiserver-95f5468f8-vgm4z", "timestamp":"2025-05-08 00:40:52.753672346 +0000 UTC"}, Hostname:"172-237-145-97", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:40:52.946414 containerd[1484]: 2025-05-08 00:40:52.773 [INFO][4595] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:52.946414 containerd[1484]: 2025-05-08 00:40:52.853 [INFO][4595] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
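The "Setting the host side veth name to cali..." entries and the systemd-networkd "Link UP" / "Gained carrier" messages that follow them reflect the standard CNI veth pattern: the plugin creates a veth pair, moves one end into the pod's network namespace as eth0, and leaves the host end (cali39d6ca9206a, cali1d8cba37c27, and so on) in the root namespace, where systemd-networkd sees it come up. A minimal sketch of that pattern with the vishvananda/netlink package; the interface names are copied from the log only for illustration and the namespace move is omitted, so this is not Calico's actual code:

    package main

    import (
        "log"

        "github.com/vishvananda/netlink"
    )

    // createVethPair creates the host/container veth pair a CNI plugin sets up.
    // hostName is the generated cali... name; peerName is what later gets moved
    // into the pod's network namespace and renamed eth0 (omitted here).
    func createVethPair(hostName, peerName string) error {
        veth := &netlink.Veth{
            LinkAttrs: netlink.LinkAttrs{Name: hostName},
            PeerName:  peerName,
        }
        if err := netlink.LinkAdd(veth); err != nil {
            return err
        }
        // Bringing the host side up is what produces the
        // "cali...: Link UP" / "Gained carrier" entries from systemd-networkd.
        return netlink.LinkSetUp(veth)
    }

    func main() {
        if err := createVethPair("cali39d6ca9206a", "veth-pod"); err != nil {
            log.Fatal(err) // requires CAP_NET_ADMIN; run as root
        }
    }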
May 8 00:40:52.946414 containerd[1484]: 2025-05-08 00:40:52.853 [INFO][4595] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-145-97' May 8 00:40:52.946414 containerd[1484]: 2025-05-08 00:40:52.856 [INFO][4595] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ee823bec2cbe8076c64ac6c4c4f4bca15df008e121a8c18ff0d51c186be542b8" host="172-237-145-97" May 8 00:40:52.946414 containerd[1484]: 2025-05-08 00:40:52.865 [INFO][4595] ipam/ipam.go 372: Looking up existing affinities for host host="172-237-145-97" May 8 00:40:52.946414 containerd[1484]: 2025-05-08 00:40:52.882 [INFO][4595] ipam/ipam.go 489: Trying affinity for 192.168.87.128/26 host="172-237-145-97" May 8 00:40:52.946414 containerd[1484]: 2025-05-08 00:40:52.885 [INFO][4595] ipam/ipam.go 155: Attempting to load block cidr=192.168.87.128/26 host="172-237-145-97" May 8 00:40:52.946414 containerd[1484]: 2025-05-08 00:40:52.889 [INFO][4595] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.87.128/26 host="172-237-145-97" May 8 00:40:52.946414 containerd[1484]: 2025-05-08 00:40:52.889 [INFO][4595] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.87.128/26 handle="k8s-pod-network.ee823bec2cbe8076c64ac6c4c4f4bca15df008e121a8c18ff0d51c186be542b8" host="172-237-145-97" May 8 00:40:52.946414 containerd[1484]: 2025-05-08 00:40:52.891 [INFO][4595] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ee823bec2cbe8076c64ac6c4c4f4bca15df008e121a8c18ff0d51c186be542b8 May 8 00:40:52.946414 containerd[1484]: 2025-05-08 00:40:52.896 [INFO][4595] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.87.128/26 handle="k8s-pod-network.ee823bec2cbe8076c64ac6c4c4f4bca15df008e121a8c18ff0d51c186be542b8" host="172-237-145-97" May 8 00:40:52.946414 containerd[1484]: 2025-05-08 00:40:52.906 [INFO][4595] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.87.132/26] block=192.168.87.128/26 handle="k8s-pod-network.ee823bec2cbe8076c64ac6c4c4f4bca15df008e121a8c18ff0d51c186be542b8" host="172-237-145-97" May 8 00:40:52.946414 containerd[1484]: 2025-05-08 00:40:52.906 [INFO][4595] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.87.132/26] handle="k8s-pod-network.ee823bec2cbe8076c64ac6c4c4f4bca15df008e121a8c18ff0d51c186be542b8" host="172-237-145-97" May 8 00:40:52.946414 containerd[1484]: 2025-05-08 00:40:52.906 [INFO][4595] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
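The ipam/ipam.go entries above walk through Calico's block-based IPAM: this node holds an affinity for the /26 block 192.168.87.128/26, the plugin loads that block while holding the host-wide lock, picks the next free ordinal, writes the block back to claim the address, and returns 192.168.87.132/26. A minimal sketch of the assignment step under those assumptions (Calico's real implementation persists the block in the datastore and handles retries; this is only an illustration):

    package main

    import (
        "errors"
        "fmt"
        "net"
    )

    // assignFromBlock picks the first unused ordinal in an IPv4 block, mirroring
    // the "Attempting to assign 1 addresses from block" step in the log. The
    // used map stands in for the allocation state Calico keeps in its datastore.
    func assignFromBlock(block *net.IPNet, used map[int]bool) (net.IP, error) {
        ones, bits := block.Mask.Size()
        size := 1 << (bits - ones) // a /26 holds 64 addresses
        base := block.IP.To4()
        for ord := 0; ord < size; ord++ {
            if used[ord] {
                continue
            }
            used[ord] = true
            ip := make(net.IP, len(base))
            copy(ip, base)
            ip[3] += byte(ord)
            return ip, nil
        }
        return nil, errors.New("block is full")
    }

    func main() {
        _, block, _ := net.ParseCIDR("192.168.87.128/26")
        used := map[int]bool{0: true, 1: true, 2: true, 3: true} // .128-.131 already handed out
        ip, _ := assignFromBlock(block, used)
        fmt.Println(ip) // 192.168.87.132, the address claimed above
    }

Note how the three concurrent CNI ADD requests in this stretch ([4595], [4599] and [4612]) are serialized by that host-wide lock: each "Acquired host-wide IPAM lock" timestamp lines up with the previous request's "Released" entry, which is why the addresses come out strictly in order as .132, .133 and .134.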
May 8 00:40:52.946414 containerd[1484]: 2025-05-08 00:40:52.906 [INFO][4595] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.132/26] IPv6=[] ContainerID="ee823bec2cbe8076c64ac6c4c4f4bca15df008e121a8c18ff0d51c186be542b8" HandleID="k8s-pod-network.ee823bec2cbe8076c64ac6c4c4f4bca15df008e121a8c18ff0d51c186be542b8" Workload="172--237--145--97-k8s-calico--apiserver--95f5468f8--vgm4z-eth0" May 8 00:40:52.946934 containerd[1484]: 2025-05-08 00:40:52.920 [INFO][4549] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ee823bec2cbe8076c64ac6c4c4f4bca15df008e121a8c18ff0d51c186be542b8" Namespace="calico-apiserver" Pod="calico-apiserver-95f5468f8-vgm4z" WorkloadEndpoint="172--237--145--97-k8s-calico--apiserver--95f5468f8--vgm4z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--97-k8s-calico--apiserver--95f5468f8--vgm4z-eth0", GenerateName:"calico-apiserver-95f5468f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"e43f4851-92c0-4238-8905-f3f57d62dc20", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"95f5468f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-97", ContainerID:"", Pod:"calico-apiserver-95f5468f8-vgm4z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1d8cba37c27", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:52.946934 containerd[1484]: 2025-05-08 00:40:52.920 [INFO][4549] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.87.132/32] ContainerID="ee823bec2cbe8076c64ac6c4c4f4bca15df008e121a8c18ff0d51c186be542b8" Namespace="calico-apiserver" Pod="calico-apiserver-95f5468f8-vgm4z" WorkloadEndpoint="172--237--145--97-k8s-calico--apiserver--95f5468f8--vgm4z-eth0" May 8 00:40:52.946934 containerd[1484]: 2025-05-08 00:40:52.920 [INFO][4549] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d8cba37c27 ContainerID="ee823bec2cbe8076c64ac6c4c4f4bca15df008e121a8c18ff0d51c186be542b8" Namespace="calico-apiserver" Pod="calico-apiserver-95f5468f8-vgm4z" WorkloadEndpoint="172--237--145--97-k8s-calico--apiserver--95f5468f8--vgm4z-eth0" May 8 00:40:52.946934 containerd[1484]: 2025-05-08 00:40:52.924 [INFO][4549] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee823bec2cbe8076c64ac6c4c4f4bca15df008e121a8c18ff0d51c186be542b8" Namespace="calico-apiserver" Pod="calico-apiserver-95f5468f8-vgm4z" WorkloadEndpoint="172--237--145--97-k8s-calico--apiserver--95f5468f8--vgm4z-eth0" May 8 00:40:52.946934 containerd[1484]: 2025-05-08 00:40:52.924 [INFO][4549] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ee823bec2cbe8076c64ac6c4c4f4bca15df008e121a8c18ff0d51c186be542b8" Namespace="calico-apiserver" Pod="calico-apiserver-95f5468f8-vgm4z" WorkloadEndpoint="172--237--145--97-k8s-calico--apiserver--95f5468f8--vgm4z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--97-k8s-calico--apiserver--95f5468f8--vgm4z-eth0", GenerateName:"calico-apiserver-95f5468f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"e43f4851-92c0-4238-8905-f3f57d62dc20", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"95f5468f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-97", ContainerID:"ee823bec2cbe8076c64ac6c4c4f4bca15df008e121a8c18ff0d51c186be542b8", Pod:"calico-apiserver-95f5468f8-vgm4z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1d8cba37c27", MAC:"12:64:3b:86:e7:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:52.946934 containerd[1484]: 2025-05-08 00:40:52.939 [INFO][4549] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ee823bec2cbe8076c64ac6c4c4f4bca15df008e121a8c18ff0d51c186be542b8" Namespace="calico-apiserver" Pod="calico-apiserver-95f5468f8-vgm4z" WorkloadEndpoint="172--237--145--97-k8s-calico--apiserver--95f5468f8--vgm4z-eth0" May 8 00:40:52.949568 containerd[1484]: time="2025-05-08T00:40:52.949114540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:52.949568 containerd[1484]: time="2025-05-08T00:40:52.949247611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:52.949568 containerd[1484]: time="2025-05-08T00:40:52.949262721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:52.949568 containerd[1484]: time="2025-05-08T00:40:52.949343322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:52.979085 containerd[1484]: time="2025-05-08T00:40:52.977869596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:52.979085 containerd[1484]: time="2025-05-08T00:40:52.977920206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:52.979085 containerd[1484]: time="2025-05-08T00:40:52.977931787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:52.979085 containerd[1484]: time="2025-05-08T00:40:52.977993906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:53.005233 systemd[1]: Started cri-containerd-8311ba83d274118f2c20226d7554079e2bba2a111508989bf44a11b49f1f7ccb.scope - libcontainer container 8311ba83d274118f2c20226d7554079e2bba2a111508989bf44a11b49f1f7ccb. May 8 00:40:53.016685 systemd-networkd[1384]: cali1bf1f7a8566: Link UP May 8 00:40:53.018465 systemd-networkd[1384]: cali1bf1f7a8566: Gained carrier May 8 00:40:53.031374 systemd[1]: Started cri-containerd-04e65c2750ea99e871fe968f21a4b579fcfd162b949dbcb6c7513b93c9b23b0f.scope - libcontainer container 04e65c2750ea99e871fe968f21a4b579fcfd162b949dbcb6c7513b93c9b23b0f. May 8 00:40:53.062282 systemd-networkd[1384]: cali974829f8577: Link UP May 8 00:40:53.064190 systemd-networkd[1384]: cali974829f8577: Gained carrier May 8 00:40:53.068416 containerd[1484]: 2025-05-08 00:40:52.618 [INFO][4539] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 8 00:40:53.068416 containerd[1484]: 2025-05-08 00:40:52.653 [INFO][4539] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--145--97-k8s-calico--apiserver--95f5468f8--zknsp-eth0 calico-apiserver-95f5468f8- calico-apiserver a2974be7-7581-4fce-a16e-15f650ba010f 689 0 2025-05-08 00:40:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:95f5468f8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-237-145-97 calico-apiserver-95f5468f8-zknsp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1bf1f7a8566 [] []}} ContainerID="c89a322fa4397c313d7e0f64d05294ef891fbf4e5405248108499c5a3aae02de" Namespace="calico-apiserver" Pod="calico-apiserver-95f5468f8-zknsp" WorkloadEndpoint="172--237--145--97-k8s-calico--apiserver--95f5468f8--zknsp-" May 8 00:40:53.068416 containerd[1484]: 2025-05-08 00:40:52.653 [INFO][4539] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c89a322fa4397c313d7e0f64d05294ef891fbf4e5405248108499c5a3aae02de" Namespace="calico-apiserver" Pod="calico-apiserver-95f5468f8-zknsp" WorkloadEndpoint="172--237--145--97-k8s-calico--apiserver--95f5468f8--zknsp-eth0" May 8 00:40:53.068416 containerd[1484]: 2025-05-08 00:40:52.757 [INFO][4599] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c89a322fa4397c313d7e0f64d05294ef891fbf4e5405248108499c5a3aae02de" HandleID="k8s-pod-network.c89a322fa4397c313d7e0f64d05294ef891fbf4e5405248108499c5a3aae02de" Workload="172--237--145--97-k8s-calico--apiserver--95f5468f8--zknsp-eth0" May 8 00:40:53.068416 containerd[1484]: 2025-05-08 00:40:52.776 [INFO][4599] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c89a322fa4397c313d7e0f64d05294ef891fbf4e5405248108499c5a3aae02de" HandleID="k8s-pod-network.c89a322fa4397c313d7e0f64d05294ef891fbf4e5405248108499c5a3aae02de" Workload="172--237--145--97-k8s-calico--apiserver--95f5468f8--zknsp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003dd630), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-237-145-97", "pod":"calico-apiserver-95f5468f8-zknsp", "timestamp":"2025-05-08 00:40:52.757140827 +0000 UTC"}, 
Hostname:"172-237-145-97", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:40:53.068416 containerd[1484]: 2025-05-08 00:40:52.776 [INFO][4599] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:53.068416 containerd[1484]: 2025-05-08 00:40:52.906 [INFO][4599] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:53.068416 containerd[1484]: 2025-05-08 00:40:52.906 [INFO][4599] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-145-97' May 8 00:40:53.068416 containerd[1484]: 2025-05-08 00:40:52.909 [INFO][4599] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c89a322fa4397c313d7e0f64d05294ef891fbf4e5405248108499c5a3aae02de" host="172-237-145-97" May 8 00:40:53.068416 containerd[1484]: 2025-05-08 00:40:52.916 [INFO][4599] ipam/ipam.go 372: Looking up existing affinities for host host="172-237-145-97" May 8 00:40:53.068416 containerd[1484]: 2025-05-08 00:40:52.942 [INFO][4599] ipam/ipam.go 489: Trying affinity for 192.168.87.128/26 host="172-237-145-97" May 8 00:40:53.068416 containerd[1484]: 2025-05-08 00:40:52.944 [INFO][4599] ipam/ipam.go 155: Attempting to load block cidr=192.168.87.128/26 host="172-237-145-97" May 8 00:40:53.068416 containerd[1484]: 2025-05-08 00:40:52.947 [INFO][4599] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.87.128/26 host="172-237-145-97" May 8 00:40:53.068416 containerd[1484]: 2025-05-08 00:40:52.950 [INFO][4599] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.87.128/26 handle="k8s-pod-network.c89a322fa4397c313d7e0f64d05294ef891fbf4e5405248108499c5a3aae02de" host="172-237-145-97" May 8 00:40:53.068416 containerd[1484]: 2025-05-08 00:40:52.955 [INFO][4599] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c89a322fa4397c313d7e0f64d05294ef891fbf4e5405248108499c5a3aae02de May 8 00:40:53.068416 containerd[1484]: 2025-05-08 00:40:52.966 [INFO][4599] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.87.128/26 handle="k8s-pod-network.c89a322fa4397c313d7e0f64d05294ef891fbf4e5405248108499c5a3aae02de" host="172-237-145-97" May 8 00:40:53.068416 containerd[1484]: 2025-05-08 00:40:52.975 [INFO][4599] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.87.133/26] block=192.168.87.128/26 handle="k8s-pod-network.c89a322fa4397c313d7e0f64d05294ef891fbf4e5405248108499c5a3aae02de" host="172-237-145-97" May 8 00:40:53.068416 containerd[1484]: 2025-05-08 00:40:52.975 [INFO][4599] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.87.133/26] handle="k8s-pod-network.c89a322fa4397c313d7e0f64d05294ef891fbf4e5405248108499c5a3aae02de" host="172-237-145-97" May 8 00:40:53.068416 containerd[1484]: 2025-05-08 00:40:52.976 [INFO][4599] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:40:53.068416 containerd[1484]: 2025-05-08 00:40:52.976 [INFO][4599] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.133/26] IPv6=[] ContainerID="c89a322fa4397c313d7e0f64d05294ef891fbf4e5405248108499c5a3aae02de" HandleID="k8s-pod-network.c89a322fa4397c313d7e0f64d05294ef891fbf4e5405248108499c5a3aae02de" Workload="172--237--145--97-k8s-calico--apiserver--95f5468f8--zknsp-eth0" May 8 00:40:53.068915 containerd[1484]: 2025-05-08 00:40:53.002 [INFO][4539] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c89a322fa4397c313d7e0f64d05294ef891fbf4e5405248108499c5a3aae02de" Namespace="calico-apiserver" Pod="calico-apiserver-95f5468f8-zknsp" WorkloadEndpoint="172--237--145--97-k8s-calico--apiserver--95f5468f8--zknsp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--97-k8s-calico--apiserver--95f5468f8--zknsp-eth0", GenerateName:"calico-apiserver-95f5468f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"a2974be7-7581-4fce-a16e-15f650ba010f", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"95f5468f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-97", ContainerID:"", Pod:"calico-apiserver-95f5468f8-zknsp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1bf1f7a8566", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:53.068915 containerd[1484]: 2025-05-08 00:40:53.003 [INFO][4539] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.87.133/32] ContainerID="c89a322fa4397c313d7e0f64d05294ef891fbf4e5405248108499c5a3aae02de" Namespace="calico-apiserver" Pod="calico-apiserver-95f5468f8-zknsp" WorkloadEndpoint="172--237--145--97-k8s-calico--apiserver--95f5468f8--zknsp-eth0" May 8 00:40:53.068915 containerd[1484]: 2025-05-08 00:40:53.004 [INFO][4539] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1bf1f7a8566 ContainerID="c89a322fa4397c313d7e0f64d05294ef891fbf4e5405248108499c5a3aae02de" Namespace="calico-apiserver" Pod="calico-apiserver-95f5468f8-zknsp" WorkloadEndpoint="172--237--145--97-k8s-calico--apiserver--95f5468f8--zknsp-eth0" May 8 00:40:53.068915 containerd[1484]: 2025-05-08 00:40:53.016 [INFO][4539] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c89a322fa4397c313d7e0f64d05294ef891fbf4e5405248108499c5a3aae02de" Namespace="calico-apiserver" Pod="calico-apiserver-95f5468f8-zknsp" WorkloadEndpoint="172--237--145--97-k8s-calico--apiserver--95f5468f8--zknsp-eth0" May 8 00:40:53.068915 containerd[1484]: 2025-05-08 00:40:53.018 [INFO][4539] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c89a322fa4397c313d7e0f64d05294ef891fbf4e5405248108499c5a3aae02de" Namespace="calico-apiserver" Pod="calico-apiserver-95f5468f8-zknsp" WorkloadEndpoint="172--237--145--97-k8s-calico--apiserver--95f5468f8--zknsp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--97-k8s-calico--apiserver--95f5468f8--zknsp-eth0", GenerateName:"calico-apiserver-95f5468f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"a2974be7-7581-4fce-a16e-15f650ba010f", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"95f5468f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-97", ContainerID:"c89a322fa4397c313d7e0f64d05294ef891fbf4e5405248108499c5a3aae02de", Pod:"calico-apiserver-95f5468f8-zknsp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1bf1f7a8566", MAC:"2e:07:2c:93:88:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:53.068915 containerd[1484]: 2025-05-08 00:40:53.043 [INFO][4539] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c89a322fa4397c313d7e0f64d05294ef891fbf4e5405248108499c5a3aae02de" Namespace="calico-apiserver" Pod="calico-apiserver-95f5468f8-zknsp" WorkloadEndpoint="172--237--145--97-k8s-calico--apiserver--95f5468f8--zknsp-eth0" May 8 00:40:53.081367 containerd[1484]: time="2025-05-08T00:40:53.080696634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:53.081367 containerd[1484]: time="2025-05-08T00:40:53.080758165Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:53.081367 containerd[1484]: time="2025-05-08T00:40:53.080771805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:53.081367 containerd[1484]: time="2025-05-08T00:40:53.080857876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:53.101885 containerd[1484]: time="2025-05-08T00:40:53.101850596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66457cb4b-4cpwk,Uid:210d2f6d-8bde-4f98-93d8-48808afe079f,Namespace:calico-system,Attempt:6,} returns sandbox id \"89e248803626d8ea9452a226f781fae39132906ca46d4e93ecf40d7a118e6613\"" May 8 00:40:53.104738 containerd[1484]: 2025-05-08 00:40:52.660 [INFO][4560] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 8 00:40:53.104738 containerd[1484]: 2025-05-08 00:40:52.714 [INFO][4560] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--145--97-k8s-csi--node--driver--q8q6q-eth0 csi-node-driver- calico-system ae949d8a-9850-4b3f-b127-0cc79fb660b3 610 0 2025-05-08 00:40:41 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-237-145-97 csi-node-driver-q8q6q eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali974829f8577 [] []}} ContainerID="85daf8a7224bea327daa2df5405a9b40b862363b7afa1271e413c6a94a2f8096" Namespace="calico-system" Pod="csi-node-driver-q8q6q" WorkloadEndpoint="172--237--145--97-k8s-csi--node--driver--q8q6q-" May 8 00:40:53.104738 containerd[1484]: 2025-05-08 00:40:52.717 [INFO][4560] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="85daf8a7224bea327daa2df5405a9b40b862363b7afa1271e413c6a94a2f8096" Namespace="calico-system" Pod="csi-node-driver-q8q6q" WorkloadEndpoint="172--237--145--97-k8s-csi--node--driver--q8q6q-eth0" May 8 00:40:53.104738 containerd[1484]: 2025-05-08 00:40:52.778 [INFO][4612] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="85daf8a7224bea327daa2df5405a9b40b862363b7afa1271e413c6a94a2f8096" HandleID="k8s-pod-network.85daf8a7224bea327daa2df5405a9b40b862363b7afa1271e413c6a94a2f8096" Workload="172--237--145--97-k8s-csi--node--driver--q8q6q-eth0" May 8 00:40:53.104738 containerd[1484]: 2025-05-08 00:40:52.793 [INFO][4612] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="85daf8a7224bea327daa2df5405a9b40b862363b7afa1271e413c6a94a2f8096" HandleID="k8s-pod-network.85daf8a7224bea327daa2df5405a9b40b862363b7afa1271e413c6a94a2f8096" Workload="172--237--145--97-k8s-csi--node--driver--q8q6q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005a6090), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-145-97", "pod":"csi-node-driver-q8q6q", "timestamp":"2025-05-08 00:40:52.778461504 +0000 UTC"}, Hostname:"172-237-145-97", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:40:53.104738 containerd[1484]: 2025-05-08 00:40:52.794 [INFO][4612] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:53.104738 containerd[1484]: 2025-05-08 00:40:52.976 [INFO][4612] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:40:53.104738 containerd[1484]: 2025-05-08 00:40:52.976 [INFO][4612] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-145-97' May 8 00:40:53.104738 containerd[1484]: 2025-05-08 00:40:52.979 [INFO][4612] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.85daf8a7224bea327daa2df5405a9b40b862363b7afa1271e413c6a94a2f8096" host="172-237-145-97" May 8 00:40:53.104738 containerd[1484]: 2025-05-08 00:40:52.996 [INFO][4612] ipam/ipam.go 372: Looking up existing affinities for host host="172-237-145-97" May 8 00:40:53.104738 containerd[1484]: 2025-05-08 00:40:53.013 [INFO][4612] ipam/ipam.go 489: Trying affinity for 192.168.87.128/26 host="172-237-145-97" May 8 00:40:53.104738 containerd[1484]: 2025-05-08 00:40:53.021 [INFO][4612] ipam/ipam.go 155: Attempting to load block cidr=192.168.87.128/26 host="172-237-145-97" May 8 00:40:53.104738 containerd[1484]: 2025-05-08 00:40:53.027 [INFO][4612] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.87.128/26 host="172-237-145-97" May 8 00:40:53.104738 containerd[1484]: 2025-05-08 00:40:53.027 [INFO][4612] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.87.128/26 handle="k8s-pod-network.85daf8a7224bea327daa2df5405a9b40b862363b7afa1271e413c6a94a2f8096" host="172-237-145-97" May 8 00:40:53.104738 containerd[1484]: 2025-05-08 00:40:53.029 [INFO][4612] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.85daf8a7224bea327daa2df5405a9b40b862363b7afa1271e413c6a94a2f8096 May 8 00:40:53.104738 containerd[1484]: 2025-05-08 00:40:53.036 [INFO][4612] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.87.128/26 handle="k8s-pod-network.85daf8a7224bea327daa2df5405a9b40b862363b7afa1271e413c6a94a2f8096" host="172-237-145-97" May 8 00:40:53.104738 containerd[1484]: 2025-05-08 00:40:53.043 [INFO][4612] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.87.134/26] block=192.168.87.128/26 handle="k8s-pod-network.85daf8a7224bea327daa2df5405a9b40b862363b7afa1271e413c6a94a2f8096" host="172-237-145-97" May 8 00:40:53.104738 containerd[1484]: 2025-05-08 00:40:53.043 [INFO][4612] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.87.134/26] handle="k8s-pod-network.85daf8a7224bea327daa2df5405a9b40b862363b7afa1271e413c6a94a2f8096" host="172-237-145-97" May 8 00:40:53.104738 containerd[1484]: 2025-05-08 00:40:53.043 [INFO][4612] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
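Each CNI invocation above begins with "File /var/lib/calico/mtu does not exist". That file is normally written by the Calico node agent with the MTU it detected for the underlying network so the CNI plugin can size new veth interfaces; when it is absent, the plugin falls back to the MTU from the CNI network configuration. A hedged sketch of that fallback, with the path reused from the log and the 1450 default as an illustrative assumption rather than authoritative Calico behaviour:

    package main

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    // interfaceMTU returns the MTU published by the node agent if the file
    // exists, otherwise a fallback taken from the CNI config. The fallback
    // value here is an assumption for illustration only.
    func interfaceMTU(path string, fallback int) int {
        data, err := os.ReadFile(path)
        if err != nil {
            // This branch corresponds to the "does not exist" log message.
            return fallback
        }
        mtu, err := strconv.Atoi(strings.TrimSpace(string(data)))
        if err != nil {
            return fallback
        }
        return mtu
    }

    func main() {
        fmt.Println(interfaceMTU("/var/lib/calico/mtu", 1450))
    }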
May 8 00:40:53.104738 containerd[1484]: 2025-05-08 00:40:53.043 [INFO][4612] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.134/26] IPv6=[] ContainerID="85daf8a7224bea327daa2df5405a9b40b862363b7afa1271e413c6a94a2f8096" HandleID="k8s-pod-network.85daf8a7224bea327daa2df5405a9b40b862363b7afa1271e413c6a94a2f8096" Workload="172--237--145--97-k8s-csi--node--driver--q8q6q-eth0" May 8 00:40:53.105190 containerd[1484]: 2025-05-08 00:40:53.053 [INFO][4560] cni-plugin/k8s.go 386: Populated endpoint ContainerID="85daf8a7224bea327daa2df5405a9b40b862363b7afa1271e413c6a94a2f8096" Namespace="calico-system" Pod="csi-node-driver-q8q6q" WorkloadEndpoint="172--237--145--97-k8s-csi--node--driver--q8q6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--97-k8s-csi--node--driver--q8q6q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ae949d8a-9850-4b3f-b127-0cc79fb660b3", ResourceVersion:"610", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-97", ContainerID:"", Pod:"csi-node-driver-q8q6q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali974829f8577", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:53.105190 containerd[1484]: 2025-05-08 00:40:53.054 [INFO][4560] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.87.134/32] ContainerID="85daf8a7224bea327daa2df5405a9b40b862363b7afa1271e413c6a94a2f8096" Namespace="calico-system" Pod="csi-node-driver-q8q6q" WorkloadEndpoint="172--237--145--97-k8s-csi--node--driver--q8q6q-eth0" May 8 00:40:53.105190 containerd[1484]: 2025-05-08 00:40:53.054 [INFO][4560] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali974829f8577 ContainerID="85daf8a7224bea327daa2df5405a9b40b862363b7afa1271e413c6a94a2f8096" Namespace="calico-system" Pod="csi-node-driver-q8q6q" WorkloadEndpoint="172--237--145--97-k8s-csi--node--driver--q8q6q-eth0" May 8 00:40:53.105190 containerd[1484]: 2025-05-08 00:40:53.066 [INFO][4560] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="85daf8a7224bea327daa2df5405a9b40b862363b7afa1271e413c6a94a2f8096" Namespace="calico-system" Pod="csi-node-driver-q8q6q" WorkloadEndpoint="172--237--145--97-k8s-csi--node--driver--q8q6q-eth0" May 8 00:40:53.105190 containerd[1484]: 2025-05-08 00:40:53.067 [INFO][4560] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="85daf8a7224bea327daa2df5405a9b40b862363b7afa1271e413c6a94a2f8096" Namespace="calico-system" Pod="csi-node-driver-q8q6q" 
WorkloadEndpoint="172--237--145--97-k8s-csi--node--driver--q8q6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--97-k8s-csi--node--driver--q8q6q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ae949d8a-9850-4b3f-b127-0cc79fb660b3", ResourceVersion:"610", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-97", ContainerID:"85daf8a7224bea327daa2df5405a9b40b862363b7afa1271e413c6a94a2f8096", Pod:"csi-node-driver-q8q6q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali974829f8577", MAC:"ea:22:47:f7:43:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:53.105190 containerd[1484]: 2025-05-08 00:40:53.089 [INFO][4560] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="85daf8a7224bea327daa2df5405a9b40b862363b7afa1271e413c6a94a2f8096" Namespace="calico-system" Pod="csi-node-driver-q8q6q" WorkloadEndpoint="172--237--145--97-k8s-csi--node--driver--q8q6q-eth0" May 8 00:40:53.113674 containerd[1484]: time="2025-05-08T00:40:53.113338855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 8 00:40:53.128848 systemd[1]: Started cri-containerd-ee823bec2cbe8076c64ac6c4c4f4bca15df008e121a8c18ff0d51c186be542b8.scope - libcontainer container ee823bec2cbe8076c64ac6c4c4f4bca15df008e121a8c18ff0d51c186be542b8. May 8 00:40:53.172072 containerd[1484]: time="2025-05-08T00:40:53.170663397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:53.172072 containerd[1484]: time="2025-05-08T00:40:53.170714968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:53.172072 containerd[1484]: time="2025-05-08T00:40:53.170725218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:53.172072 containerd[1484]: time="2025-05-08T00:40:53.170786209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:53.172469 containerd[1484]: time="2025-05-08T00:40:53.172445613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t5sjv,Uid:1d395b40-74ec-4d21-9505-050a6c6424b9,Namespace:kube-system,Attempt:4,} returns sandbox id \"8311ba83d274118f2c20226d7554079e2bba2a111508989bf44a11b49f1f7ccb\"" May 8 00:40:53.173138 kubelet[2686]: E0508 00:40:53.173123 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:53.176416 containerd[1484]: time="2025-05-08T00:40:53.176373916Z" level=info msg="CreateContainer within sandbox \"8311ba83d274118f2c20226d7554079e2bba2a111508989bf44a11b49f1f7ccb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:40:53.186491 containerd[1484]: time="2025-05-08T00:40:53.186038549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:53.186491 containerd[1484]: time="2025-05-08T00:40:53.186100950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:53.186491 containerd[1484]: time="2025-05-08T00:40:53.186114820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:53.186491 containerd[1484]: time="2025-05-08T00:40:53.186195421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:53.247346 containerd[1484]: time="2025-05-08T00:40:53.215732115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ndt8s,Uid:b968d45f-0186-4bf1-af0b-3789d578367b,Namespace:kube-system,Attempt:4,} returns sandbox id \"04e65c2750ea99e871fe968f21a4b579fcfd162b949dbcb6c7513b93c9b23b0f\"" May 8 00:40:53.222258 systemd[1]: Started cri-containerd-c89a322fa4397c313d7e0f64d05294ef891fbf4e5405248108499c5a3aae02de.scope - libcontainer container c89a322fa4397c313d7e0f64d05294ef891fbf4e5405248108499c5a3aae02de. May 8 00:40:53.247536 kubelet[2686]: E0508 00:40:53.218583 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:53.250307 containerd[1484]: time="2025-05-08T00:40:53.249387644Z" level=info msg="CreateContainer within sandbox \"04e65c2750ea99e871fe968f21a4b579fcfd162b949dbcb6c7513b93c9b23b0f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:40:53.275084 containerd[1484]: time="2025-05-08T00:40:53.274976854Z" level=info msg="CreateContainer within sandbox \"8311ba83d274118f2c20226d7554079e2bba2a111508989bf44a11b49f1f7ccb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1de71abcc123defd1729607b60446219f85df22c2753944151d361f3c04ca6ef\"" May 8 00:40:53.278579 containerd[1484]: time="2025-05-08T00:40:53.278529704Z" level=info msg="StartContainer for \"1de71abcc123defd1729607b60446219f85df22c2753944151d361f3c04ca6ef\"" May 8 00:40:53.287194 systemd[1]: Started cri-containerd-85daf8a7224bea327daa2df5405a9b40b862363b7afa1271e413c6a94a2f8096.scope - libcontainer container 85daf8a7224bea327daa2df5405a9b40b862363b7afa1271e413c6a94a2f8096. 
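The kubelet "Nameserver limits exceeded" errors from dns.go around these entries are expected on this node: with the default pod DNS policy the pod's resolv.conf is derived from the node's, the resolver only honours the first three nameserver lines, and kubelet therefore drops the rest and logs the three it applied (172.232.0.20, 172.232.0.15, 172.232.0.18). A small sketch of that truncation; the fourth address in the example input is hypothetical, and this is an illustration rather than kubelet's actual code:

    package main

    import (
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // resolv.conf nameserver limit that kubelet enforces

    // clampNameservers keeps at most maxNameservers "nameserver" entries,
    // mirroring the behaviour behind the "Nameserver limits exceeded" message.
    func clampNameservers(resolvConf string) []string {
        var servers []string
        for _, line := range strings.Split(resolvConf, "\n") {
            fields := strings.Fields(line)
            if len(fields) == 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            servers = servers[:maxNameservers]
        }
        return servers
    }

    func main() {
        conf := "nameserver 172.232.0.20\nnameserver 172.232.0.15\nnameserver 172.232.0.18\nnameserver 192.0.2.1\n"
        fmt.Println(clampNameservers(conf)) // [172.232.0.20 172.232.0.15 172.232.0.18]
    }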
May 8 00:40:53.299503 containerd[1484]: time="2025-05-08T00:40:53.299470724Z" level=info msg="CreateContainer within sandbox \"04e65c2750ea99e871fe968f21a4b579fcfd162b949dbcb6c7513b93c9b23b0f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5ae708af0a0f49dcad83de73a3450214a80be31b8e6c6313e87edbcca74fa29a\"" May 8 00:40:53.300679 containerd[1484]: time="2025-05-08T00:40:53.300660265Z" level=info msg="StartContainer for \"5ae708af0a0f49dcad83de73a3450214a80be31b8e6c6313e87edbcca74fa29a\"" May 8 00:40:53.344139 systemd[1]: Started cri-containerd-5ae708af0a0f49dcad83de73a3450214a80be31b8e6c6313e87edbcca74fa29a.scope - libcontainer container 5ae708af0a0f49dcad83de73a3450214a80be31b8e6c6313e87edbcca74fa29a. May 8 00:40:53.369349 systemd[1]: Started cri-containerd-1de71abcc123defd1729607b60446219f85df22c2753944151d361f3c04ca6ef.scope - libcontainer container 1de71abcc123defd1729607b60446219f85df22c2753944151d361f3c04ca6ef. May 8 00:40:53.397127 containerd[1484]: time="2025-05-08T00:40:53.396693340Z" level=info msg="StartContainer for \"5ae708af0a0f49dcad83de73a3450214a80be31b8e6c6313e87edbcca74fa29a\" returns successfully" May 8 00:40:53.431240 containerd[1484]: time="2025-05-08T00:40:53.430860854Z" level=info msg="StartContainer for \"1de71abcc123defd1729607b60446219f85df22c2753944151d361f3c04ca6ef\" returns successfully" May 8 00:40:53.536889 containerd[1484]: time="2025-05-08T00:40:53.536337060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95f5468f8-zknsp,Uid:a2974be7-7581-4fce-a16e-15f650ba010f,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"c89a322fa4397c313d7e0f64d05294ef891fbf4e5405248108499c5a3aae02de\"" May 8 00:40:53.554380 containerd[1484]: time="2025-05-08T00:40:53.554358245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q8q6q,Uid:ae949d8a-9850-4b3f-b127-0cc79fb660b3,Namespace:calico-system,Attempt:6,} returns sandbox id \"85daf8a7224bea327daa2df5405a9b40b862363b7afa1271e413c6a94a2f8096\"" May 8 00:40:53.558181 kubelet[2686]: E0508 00:40:53.557706 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:53.584485 kubelet[2686]: I0508 00:40:53.582848 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-ndt8s" podStartSLOduration=17.582834599999998 podStartE2EDuration="17.5828346s" podCreationTimestamp="2025-05-08 00:40:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:40:53.567021123 +0000 UTC m=+31.536414822" watchObservedRunningTime="2025-05-08 00:40:53.5828346 +0000 UTC m=+31.552228299" May 8 00:40:53.599902 kubelet[2686]: I0508 00:40:53.599886 2686 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:40:53.600160 kubelet[2686]: E0508 00:40:53.600131 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:53.602636 kubelet[2686]: E0508 00:40:53.602592 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:53.608543 containerd[1484]: 
time="2025-05-08T00:40:53.608503180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95f5468f8-vgm4z,Uid:e43f4851-92c0-4238-8905-f3f57d62dc20,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"ee823bec2cbe8076c64ac6c4c4f4bca15df008e121a8c18ff0d51c186be542b8\"" May 8 00:40:53.614483 kubelet[2686]: I0508 00:40:53.614380 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-t5sjv" podStartSLOduration=17.61437021 podStartE2EDuration="17.61437021s" podCreationTimestamp="2025-05-08 00:40:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:40:53.612682096 +0000 UTC m=+31.582075795" watchObservedRunningTime="2025-05-08 00:40:53.61437021 +0000 UTC m=+31.583763919" May 8 00:40:53.830402 kernel: bpftool[5131]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 8 00:40:54.047383 systemd-networkd[1384]: cali1d8cba37c27: Gained IPv6LL May 8 00:40:54.050250 systemd-networkd[1384]: cali3b9248e6221: Gained IPv6LL May 8 00:40:54.069908 systemd-networkd[1384]: vxlan.calico: Link UP May 8 00:40:54.069923 systemd-networkd[1384]: vxlan.calico: Gained carrier May 8 00:40:54.608845 kubelet[2686]: E0508 00:40:54.608816 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:54.609402 kubelet[2686]: E0508 00:40:54.609343 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:54.623337 systemd-networkd[1384]: cali45ac490a2d2: Gained IPv6LL May 8 00:40:54.751343 systemd-networkd[1384]: cali1bf1f7a8566: Gained IPv6LL May 8 00:40:54.880981 systemd-networkd[1384]: cali39d6ca9206a: Gained IPv6LL May 8 00:40:54.881380 systemd-networkd[1384]: cali974829f8577: Gained IPv6LL May 8 00:40:55.327317 systemd-networkd[1384]: vxlan.calico: Gained IPv6LL May 8 00:40:55.611792 kubelet[2686]: E0508 00:40:55.611340 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:55.611792 kubelet[2686]: E0508 00:40:55.611373 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:56.386542 containerd[1484]: time="2025-05-08T00:40:56.386498931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:56.387647 containerd[1484]: time="2025-05-08T00:40:56.387560548Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 8 00:40:56.387849 containerd[1484]: time="2025-05-08T00:40:56.387827720Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:56.389704 containerd[1484]: time="2025-05-08T00:40:56.389670763Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:56.390708 containerd[1484]: time="2025-05-08T00:40:56.390683580Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 3.268138916s" May 8 00:40:56.390755 containerd[1484]: time="2025-05-08T00:40:56.390711600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 8 00:40:56.392113 containerd[1484]: time="2025-05-08T00:40:56.392032129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 00:40:56.407425 containerd[1484]: time="2025-05-08T00:40:56.407062222Z" level=info msg="CreateContainer within sandbox \"89e248803626d8ea9452a226f781fae39132906ca46d4e93ecf40d7a118e6613\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 8 00:40:56.429145 containerd[1484]: time="2025-05-08T00:40:56.429118565Z" level=info msg="CreateContainer within sandbox \"89e248803626d8ea9452a226f781fae39132906ca46d4e93ecf40d7a118e6613\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8dba6d46ddd6ec695817da835a2ca7c0b20ff38191ff394ad06b220f70ee3015\"" May 8 00:40:56.429688 containerd[1484]: time="2025-05-08T00:40:56.429646949Z" level=info msg="StartContainer for \"8dba6d46ddd6ec695817da835a2ca7c0b20ff38191ff394ad06b220f70ee3015\"" May 8 00:40:56.465352 systemd[1]: Started cri-containerd-8dba6d46ddd6ec695817da835a2ca7c0b20ff38191ff394ad06b220f70ee3015.scope - libcontainer container 8dba6d46ddd6ec695817da835a2ca7c0b20ff38191ff394ad06b220f70ee3015. 
May 8 00:40:56.507079 containerd[1484]: time="2025-05-08T00:40:56.507030923Z" level=info msg="StartContainer for \"8dba6d46ddd6ec695817da835a2ca7c0b20ff38191ff394ad06b220f70ee3015\" returns successfully" May 8 00:40:56.670730 kubelet[2686]: I0508 00:40:56.670677 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-66457cb4b-4cpwk" podStartSLOduration=12.383951249 podStartE2EDuration="15.670661603s" podCreationTimestamp="2025-05-08 00:40:41 +0000 UTC" firstStartedPulling="2025-05-08 00:40:53.10466669 +0000 UTC m=+31.074060389" lastFinishedPulling="2025-05-08 00:40:56.391377034 +0000 UTC m=+34.360770743" observedRunningTime="2025-05-08 00:40:56.627288753 +0000 UTC m=+34.596682472" watchObservedRunningTime="2025-05-08 00:40:56.670661603 +0000 UTC m=+34.640055302" May 8 00:40:58.004324 containerd[1484]: time="2025-05-08T00:40:58.004267975Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:58.005292 containerd[1484]: time="2025-05-08T00:40:58.005109469Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 8 00:40:58.005979 containerd[1484]: time="2025-05-08T00:40:58.005923265Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:58.007783 containerd[1484]: time="2025-05-08T00:40:58.007760676Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:58.009003 containerd[1484]: time="2025-05-08T00:40:58.008512820Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 1.616446391s" May 8 00:40:58.009003 containerd[1484]: time="2025-05-08T00:40:58.008556740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 8 00:40:58.009613 containerd[1484]: time="2025-05-08T00:40:58.009592437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 8 00:40:58.011248 containerd[1484]: time="2025-05-08T00:40:58.011035715Z" level=info msg="CreateContainer within sandbox \"c89a322fa4397c313d7e0f64d05294ef891fbf4e5405248108499c5a3aae02de\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:40:58.027703 containerd[1484]: time="2025-05-08T00:40:58.027586884Z" level=info msg="CreateContainer within sandbox \"c89a322fa4397c313d7e0f64d05294ef891fbf4e5405248108499c5a3aae02de\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8eb19f02504d21365f65d137f0251a3162d100872484ce567a3caa485be56490\"" May 8 00:40:58.028318 containerd[1484]: time="2025-05-08T00:40:58.028262987Z" level=info msg="StartContainer for \"8eb19f02504d21365f65d137f0251a3162d100872484ce567a3caa485be56490\"" May 8 00:40:58.069991 systemd[1]: Started cri-containerd-8eb19f02504d21365f65d137f0251a3162d100872484ce567a3caa485be56490.scope 
- libcontainer container 8eb19f02504d21365f65d137f0251a3162d100872484ce567a3caa485be56490. May 8 00:40:58.121771 containerd[1484]: time="2025-05-08T00:40:58.121506542Z" level=info msg="StartContainer for \"8eb19f02504d21365f65d137f0251a3162d100872484ce567a3caa485be56490\" returns successfully" May 8 00:40:58.639657 kubelet[2686]: I0508 00:40:58.639411 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-95f5468f8-zknsp" podStartSLOduration=13.174599818 podStartE2EDuration="17.639397072s" podCreationTimestamp="2025-05-08 00:40:41 +0000 UTC" firstStartedPulling="2025-05-08 00:40:53.544628461 +0000 UTC m=+31.514022160" lastFinishedPulling="2025-05-08 00:40:58.009425715 +0000 UTC m=+35.978819414" observedRunningTime="2025-05-08 00:40:58.638351336 +0000 UTC m=+36.607745035" watchObservedRunningTime="2025-05-08 00:40:58.639397072 +0000 UTC m=+36.608790761" May 8 00:40:59.188136 containerd[1484]: time="2025-05-08T00:40:59.188075364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:59.189317 containerd[1484]: time="2025-05-08T00:40:59.189123890Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 8 00:40:59.190115 containerd[1484]: time="2025-05-08T00:40:59.189825344Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:59.191815 containerd[1484]: time="2025-05-08T00:40:59.191783545Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:59.192527 containerd[1484]: time="2025-05-08T00:40:59.192489259Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.182810882s" May 8 00:40:59.192572 containerd[1484]: time="2025-05-08T00:40:59.192529359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 8 00:40:59.203798 containerd[1484]: time="2025-05-08T00:40:59.203769181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 00:40:59.211759 containerd[1484]: time="2025-05-08T00:40:59.211599784Z" level=info msg="CreateContainer within sandbox \"85daf8a7224bea327daa2df5405a9b40b862363b7afa1271e413c6a94a2f8096\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 8 00:40:59.229173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount106792638.mount: Deactivated successfully. 
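The kubelet "Observed pod startup duration" record for calico-kube-controllers a little further up can be reconstructed from its own timestamps: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (firstStartedPulling to lastFinishedPulling). That is also why the two coredns records earlier show matching SLO and E2E durations: their images were already on the node, so their pull timestamps are the zero time. A small self-checking sketch of the arithmetic, using the timestamps copied from that record:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Go parses the fractional seconds even though the layout omits them.
        const layout = "2006-01-02 15:04:05 -0700 MST"

        created, _ := time.Parse(layout, "2025-05-08 00:40:41 +0000 UTC")
        running, _ := time.Parse(layout, "2025-05-08 00:40:56.670661603 +0000 UTC")
        pullStart, _ := time.Parse(layout, "2025-05-08 00:40:53.10466669 +0000 UTC")
        pullEnd, _ := time.Parse(layout, "2025-05-08 00:40:56.391377034 +0000 UTC")

        e2e := running.Sub(created)    // 15.670661603s, the logged podStartE2EDuration
        pull := pullEnd.Sub(pullStart) // ~3.287s spent pulling calico/kube-controllers
        slo := e2e - pull              // ~12.384s, matching podStartSLOduration up to display rounding

        fmt.Println(e2e, pull, slo)
    }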
May 8 00:40:59.229770 containerd[1484]: time="2025-05-08T00:40:59.229724104Z" level=info msg="CreateContainer within sandbox \"85daf8a7224bea327daa2df5405a9b40b862363b7afa1271e413c6a94a2f8096\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a8d6db90eb36cbf67ea5a6ad20aca08b2f02d943692e93d5482e628c1e495bb8\"" May 8 00:40:59.233715 containerd[1484]: time="2025-05-08T00:40:59.233669306Z" level=info msg="StartContainer for \"a8d6db90eb36cbf67ea5a6ad20aca08b2f02d943692e93d5482e628c1e495bb8\"" May 8 00:40:59.275043 systemd[1]: Started cri-containerd-a8d6db90eb36cbf67ea5a6ad20aca08b2f02d943692e93d5482e628c1e495bb8.scope - libcontainer container a8d6db90eb36cbf67ea5a6ad20aca08b2f02d943692e93d5482e628c1e495bb8. May 8 00:40:59.317337 containerd[1484]: time="2025-05-08T00:40:59.317285217Z" level=info msg="StartContainer for \"a8d6db90eb36cbf67ea5a6ad20aca08b2f02d943692e93d5482e628c1e495bb8\" returns successfully" May 8 00:40:59.356591 containerd[1484]: time="2025-05-08T00:40:59.356535193Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:59.357028 containerd[1484]: time="2025-05-08T00:40:59.356985586Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 8 00:40:59.359351 containerd[1484]: time="2025-05-08T00:40:59.359321228Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 155.518667ms" May 8 00:40:59.359413 containerd[1484]: time="2025-05-08T00:40:59.359358349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 8 00:40:59.361949 containerd[1484]: time="2025-05-08T00:40:59.361881893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 8 00:40:59.364916 containerd[1484]: time="2025-05-08T00:40:59.364875190Z" level=info msg="CreateContainer within sandbox \"ee823bec2cbe8076c64ac6c4c4f4bca15df008e121a8c18ff0d51c186be542b8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:40:59.380699 containerd[1484]: time="2025-05-08T00:40:59.380082903Z" level=info msg="CreateContainer within sandbox \"ee823bec2cbe8076c64ac6c4c4f4bca15df008e121a8c18ff0d51c186be542b8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ba560b8cec482273e307b3dba6fa3ce5e99efdbceae5f32fc2e15bdec557205a\"" May 8 00:40:59.383124 containerd[1484]: time="2025-05-08T00:40:59.383082949Z" level=info msg="StartContainer for \"ba560b8cec482273e307b3dba6fa3ce5e99efdbceae5f32fc2e15bdec557205a\"" May 8 00:40:59.435354 systemd[1]: Started cri-containerd-ba560b8cec482273e307b3dba6fa3ce5e99efdbceae5f32fc2e15bdec557205a.scope - libcontainer container ba560b8cec482273e307b3dba6fa3ce5e99efdbceae5f32fc2e15bdec557205a. 
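Both "Pulled image ... apiserver:v3.29.3" entries refer to the same image: the first pull actually downloaded the layers (about 43 MB read in roughly 1.6 s), while the second, logged as an ImageUpdate rather than an ImageCreate, read only 77 bytes and finished in about 156 ms, which suggests the content was already present locally and only the manifest was re-read. A small comparison using the figures from the log (a sketch, not containerd code):

    # Figures copied from the two apiserver pull entries above.
    first  = {"bytes_read": 43_021_437, "duration_s": 1.616446391}   # layers downloaded
    second = {"bytes_read": 77,         "duration_s": 0.155518667}   # content apparently cached
    speedup = first["duration_s"] / second["duration_s"]
    print(f"second pull read {second['bytes_read']} bytes and finished {speedup:.0f}x faster")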
May 8 00:40:59.482326 containerd[1484]: time="2025-05-08T00:40:59.481510532Z" level=info msg="StartContainer for \"ba560b8cec482273e307b3dba6fa3ce5e99efdbceae5f32fc2e15bdec557205a\" returns successfully" May 8 00:40:59.639464 kubelet[2686]: I0508 00:40:59.639342 2686 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:41:00.152859 containerd[1484]: time="2025-05-08T00:41:00.152260439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:41:00.153151 containerd[1484]: time="2025-05-08T00:41:00.153105863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 8 00:41:00.153535 containerd[1484]: time="2025-05-08T00:41:00.153505845Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:41:00.155089 containerd[1484]: time="2025-05-08T00:41:00.155051753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:41:00.155568 containerd[1484]: time="2025-05-08T00:41:00.155541506Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 793.621043ms" May 8 00:41:00.155623 containerd[1484]: time="2025-05-08T00:41:00.155568866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 8 00:41:00.158028 containerd[1484]: time="2025-05-08T00:41:00.158005708Z" level=info msg="CreateContainer within sandbox \"85daf8a7224bea327daa2df5405a9b40b862363b7afa1271e413c6a94a2f8096\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 8 00:41:00.176276 containerd[1484]: time="2025-05-08T00:41:00.176254071Z" level=info msg="CreateContainer within sandbox \"85daf8a7224bea327daa2df5405a9b40b862363b7afa1271e413c6a94a2f8096\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a9926f6d35a134e3705f55ee3f7811a46f5168d7e36cc1527857d5aadb1ea85a\"" May 8 00:41:00.177288 containerd[1484]: time="2025-05-08T00:41:00.176498043Z" level=info msg="StartContainer for \"a9926f6d35a134e3705f55ee3f7811a46f5168d7e36cc1527857d5aadb1ea85a\"" May 8 00:41:00.205335 systemd[1]: Started cri-containerd-a9926f6d35a134e3705f55ee3f7811a46f5168d7e36cc1527857d5aadb1ea85a.scope - libcontainer container a9926f6d35a134e3705f55ee3f7811a46f5168d7e36cc1527857d5aadb1ea85a. 
May 8 00:41:00.247082 containerd[1484]: time="2025-05-08T00:41:00.247055033Z" level=info msg="StartContainer for \"a9926f6d35a134e3705f55ee3f7811a46f5168d7e36cc1527857d5aadb1ea85a\" returns successfully" May 8 00:41:00.643890 kubelet[2686]: I0508 00:41:00.643866 2686 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:41:00.657045 kubelet[2686]: I0508 00:41:00.656561 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-q8q6q" podStartSLOduration=13.069068494 podStartE2EDuration="19.656549435s" podCreationTimestamp="2025-05-08 00:40:41 +0000 UTC" firstStartedPulling="2025-05-08 00:40:53.568690918 +0000 UTC m=+31.538084617" lastFinishedPulling="2025-05-08 00:41:00.156171859 +0000 UTC m=+38.125565558" observedRunningTime="2025-05-08 00:41:00.655232198 +0000 UTC m=+38.624625897" watchObservedRunningTime="2025-05-08 00:41:00.656549435 +0000 UTC m=+38.625943144" May 8 00:41:00.657045 kubelet[2686]: I0508 00:41:00.656807 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-95f5468f8-vgm4z" podStartSLOduration=13.907298995 podStartE2EDuration="19.656803256s" podCreationTimestamp="2025-05-08 00:40:41 +0000 UTC" firstStartedPulling="2025-05-08 00:40:53.611587177 +0000 UTC m=+31.580980876" lastFinishedPulling="2025-05-08 00:40:59.361091428 +0000 UTC m=+37.330485137" observedRunningTime="2025-05-08 00:40:59.653780132 +0000 UTC m=+37.623173831" watchObservedRunningTime="2025-05-08 00:41:00.656803256 +0000 UTC m=+38.626196955" May 8 00:41:01.207737 kubelet[2686]: I0508 00:41:01.207680 2686 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 8 00:41:01.207737 kubelet[2686]: I0508 00:41:01.207738 2686 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 8 00:41:09.247778 kubelet[2686]: I0508 00:41:09.246326 2686 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:41:09.247778 kubelet[2686]: E0508 00:41:09.247192 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:41:09.276051 systemd[1]: run-containerd-runc-k8s.io-7635c9802b82a333acf7e1b30f9dc3597259e7b405905bd4ca8c670c562f4b42-runc.w5Yyjq.mount: Deactivated successfully. 
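The recurring "Nameserver limits exceeded" events come from the kubelet's DNS configuration: the resolver honours at most three nameserver entries, so when more are supplied the kubelet truncates the list written into a pod's resolv.conf and logs the line it actually applied. A minimal illustration of that truncation (the fourth address below is hypothetical, added only to trigger the cap; this is not kubelet code):

    # Illustrative only -- not kubelet code.
    MAX_NAMESERVERS = 3
    configured = ["172.232.0.20", "172.232.0.15", "172.232.0.18", "10.0.0.53"]  # last entry hypothetical
    applied = configured[:MAX_NAMESERVERS]
    print("the applied nameserver line is:", " ".join(applied))
    # -> the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18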
May 8 00:41:09.665450 kubelet[2686]: E0508 00:41:09.665420 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:41:15.627355 kubelet[2686]: I0508 00:41:15.626669 2686 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:41:22.127958 containerd[1484]: time="2025-05-08T00:41:22.127442442Z" level=info msg="StopPodSandbox for \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\"" May 8 00:41:22.127958 containerd[1484]: time="2025-05-08T00:41:22.127582653Z" level=info msg="TearDown network for sandbox \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\" successfully" May 8 00:41:22.127958 containerd[1484]: time="2025-05-08T00:41:22.127594373Z" level=info msg="StopPodSandbox for \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\" returns successfully" May 8 00:41:22.132018 containerd[1484]: time="2025-05-08T00:41:22.129747604Z" level=info msg="RemovePodSandbox for \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\"" May 8 00:41:22.132018 containerd[1484]: time="2025-05-08T00:41:22.129790434Z" level=info msg="Forcibly stopping sandbox \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\"" May 8 00:41:22.132018 containerd[1484]: time="2025-05-08T00:41:22.129889533Z" level=info msg="TearDown network for sandbox \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\" successfully" May 8 00:41:22.145495 containerd[1484]: time="2025-05-08T00:41:22.145438311Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:22.145691 containerd[1484]: time="2025-05-08T00:41:22.145548561Z" level=info msg="RemovePodSandbox \"a4401a380cd314ac7ae51dcccbf06bc70b8921839d69ac8b55ad3d10409fc512\" returns successfully" May 8 00:41:22.150952 containerd[1484]: time="2025-05-08T00:41:22.150927193Z" level=info msg="StopPodSandbox for \"0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8\"" May 8 00:41:22.151526 containerd[1484]: time="2025-05-08T00:41:22.151507744Z" level=info msg="TearDown network for sandbox \"0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8\" successfully" May 8 00:41:22.151589 containerd[1484]: time="2025-05-08T00:41:22.151576474Z" level=info msg="StopPodSandbox for \"0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8\" returns successfully" May 8 00:41:22.151993 containerd[1484]: time="2025-05-08T00:41:22.151962764Z" level=info msg="RemovePodSandbox for \"0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8\"" May 8 00:41:22.152042 containerd[1484]: time="2025-05-08T00:41:22.151997724Z" level=info msg="Forcibly stopping sandbox \"0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8\"" May 8 00:41:22.152158 containerd[1484]: time="2025-05-08T00:41:22.152099424Z" level=info msg="TearDown network for sandbox \"0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8\" successfully" May 8 00:41:22.155887 containerd[1484]: time="2025-05-08T00:41:22.155622806Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:41:22.155887 containerd[1484]: time="2025-05-08T00:41:22.155671276Z" level=info msg="RemovePodSandbox \"0928142767072a19d67aa758a18a5e66e9b6cd0beb20092eeb83c96aa6bec8b8\" returns successfully" May 8 00:41:22.156359 containerd[1484]: time="2025-05-08T00:41:22.156168966Z" level=info msg="StopPodSandbox for \"b24804b59f7550a33579c0cf26310cbe31aa450307391cfd683dc6aeace768b1\"" May 8 00:41:22.156359 containerd[1484]: time="2025-05-08T00:41:22.156291076Z" level=info msg="TearDown network for sandbox \"b24804b59f7550a33579c0cf26310cbe31aa450307391cfd683dc6aeace768b1\" successfully" May 8 00:41:22.156359 containerd[1484]: time="2025-05-08T00:41:22.156302306Z" level=info msg="StopPodSandbox for \"b24804b59f7550a33579c0cf26310cbe31aa450307391cfd683dc6aeace768b1\" returns successfully" May 8 00:41:22.156920 containerd[1484]: time="2025-05-08T00:41:22.156878606Z" level=info msg="RemovePodSandbox for \"b24804b59f7550a33579c0cf26310cbe31aa450307391cfd683dc6aeace768b1\"" May 8 00:41:22.158225 containerd[1484]: time="2025-05-08T00:41:22.156993466Z" level=info msg="Forcibly stopping sandbox \"b24804b59f7550a33579c0cf26310cbe31aa450307391cfd683dc6aeace768b1\"" May 8 00:41:22.158225 containerd[1484]: time="2025-05-08T00:41:22.157071526Z" level=info msg="TearDown network for sandbox \"b24804b59f7550a33579c0cf26310cbe31aa450307391cfd683dc6aeace768b1\" successfully" May 8 00:41:22.160195 containerd[1484]: time="2025-05-08T00:41:22.160150998Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b24804b59f7550a33579c0cf26310cbe31aa450307391cfd683dc6aeace768b1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:22.160287 containerd[1484]: time="2025-05-08T00:41:22.160235108Z" level=info msg="RemovePodSandbox \"b24804b59f7550a33579c0cf26310cbe31aa450307391cfd683dc6aeace768b1\" returns successfully" May 8 00:41:22.160571 containerd[1484]: time="2025-05-08T00:41:22.160545458Z" level=info msg="StopPodSandbox for \"f0296a803419be9d1d07c49fe0f38383fa7b4a6358bd79cd42ebf747d905f976\"" May 8 00:41:22.160659 containerd[1484]: time="2025-05-08T00:41:22.160637628Z" level=info msg="TearDown network for sandbox \"f0296a803419be9d1d07c49fe0f38383fa7b4a6358bd79cd42ebf747d905f976\" successfully" May 8 00:41:22.160659 containerd[1484]: time="2025-05-08T00:41:22.160655598Z" level=info msg="StopPodSandbox for \"f0296a803419be9d1d07c49fe0f38383fa7b4a6358bd79cd42ebf747d905f976\" returns successfully" May 8 00:41:22.160999 containerd[1484]: time="2025-05-08T00:41:22.160933738Z" level=info msg="RemovePodSandbox for \"f0296a803419be9d1d07c49fe0f38383fa7b4a6358bd79cd42ebf747d905f976\"" May 8 00:41:22.160999 containerd[1484]: time="2025-05-08T00:41:22.160960568Z" level=info msg="Forcibly stopping sandbox \"f0296a803419be9d1d07c49fe0f38383fa7b4a6358bd79cd42ebf747d905f976\"" May 8 00:41:22.161252 containerd[1484]: time="2025-05-08T00:41:22.161026638Z" level=info msg="TearDown network for sandbox \"f0296a803419be9d1d07c49fe0f38383fa7b4a6358bd79cd42ebf747d905f976\" successfully" May 8 00:41:22.163860 containerd[1484]: time="2025-05-08T00:41:22.163738570Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f0296a803419be9d1d07c49fe0f38383fa7b4a6358bd79cd42ebf747d905f976\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:41:22.163860 containerd[1484]: time="2025-05-08T00:41:22.163798100Z" level=info msg="RemovePodSandbox \"f0296a803419be9d1d07c49fe0f38383fa7b4a6358bd79cd42ebf747d905f976\" returns successfully" May 8 00:41:22.164158 containerd[1484]: time="2025-05-08T00:41:22.164138540Z" level=info msg="StopPodSandbox for \"c5ad96dfab1a745068eb1a3a66b475a1725feba97337b2c33ade11c962c3a174\"" May 8 00:41:22.164431 containerd[1484]: time="2025-05-08T00:41:22.164389500Z" level=info msg="TearDown network for sandbox \"c5ad96dfab1a745068eb1a3a66b475a1725feba97337b2c33ade11c962c3a174\" successfully" May 8 00:41:22.164431 containerd[1484]: time="2025-05-08T00:41:22.164406890Z" level=info msg="StopPodSandbox for \"c5ad96dfab1a745068eb1a3a66b475a1725feba97337b2c33ade11c962c3a174\" returns successfully" May 8 00:41:22.166311 containerd[1484]: time="2025-05-08T00:41:22.164835451Z" level=info msg="RemovePodSandbox for \"c5ad96dfab1a745068eb1a3a66b475a1725feba97337b2c33ade11c962c3a174\"" May 8 00:41:22.166311 containerd[1484]: time="2025-05-08T00:41:22.164867411Z" level=info msg="Forcibly stopping sandbox \"c5ad96dfab1a745068eb1a3a66b475a1725feba97337b2c33ade11c962c3a174\"" May 8 00:41:22.166311 containerd[1484]: time="2025-05-08T00:41:22.164942730Z" level=info msg="TearDown network for sandbox \"c5ad96dfab1a745068eb1a3a66b475a1725feba97337b2c33ade11c962c3a174\" successfully" May 8 00:41:22.167866 containerd[1484]: time="2025-05-08T00:41:22.167839132Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c5ad96dfab1a745068eb1a3a66b475a1725feba97337b2c33ade11c962c3a174\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:22.167929 containerd[1484]: time="2025-05-08T00:41:22.167878861Z" level=info msg="RemovePodSandbox \"c5ad96dfab1a745068eb1a3a66b475a1725feba97337b2c33ade11c962c3a174\" returns successfully" May 8 00:41:22.168336 containerd[1484]: time="2025-05-08T00:41:22.168294082Z" level=info msg="StopPodSandbox for \"491563f13a9bd95b4394d1b2dd8037bc2e59325d908479d3a7435078526c88fe\"" May 8 00:41:22.168566 containerd[1484]: time="2025-05-08T00:41:22.168537672Z" level=info msg="TearDown network for sandbox \"491563f13a9bd95b4394d1b2dd8037bc2e59325d908479d3a7435078526c88fe\" successfully" May 8 00:41:22.168566 containerd[1484]: time="2025-05-08T00:41:22.168556012Z" level=info msg="StopPodSandbox for \"491563f13a9bd95b4394d1b2dd8037bc2e59325d908479d3a7435078526c88fe\" returns successfully" May 8 00:41:22.168868 containerd[1484]: time="2025-05-08T00:41:22.168827983Z" level=info msg="RemovePodSandbox for \"491563f13a9bd95b4394d1b2dd8037bc2e59325d908479d3a7435078526c88fe\"" May 8 00:41:22.168868 containerd[1484]: time="2025-05-08T00:41:22.168853473Z" level=info msg="Forcibly stopping sandbox \"491563f13a9bd95b4394d1b2dd8037bc2e59325d908479d3a7435078526c88fe\"" May 8 00:41:22.168978 containerd[1484]: time="2025-05-08T00:41:22.168917962Z" level=info msg="TearDown network for sandbox \"491563f13a9bd95b4394d1b2dd8037bc2e59325d908479d3a7435078526c88fe\" successfully" May 8 00:41:22.172770 containerd[1484]: time="2025-05-08T00:41:22.171660394Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"491563f13a9bd95b4394d1b2dd8037bc2e59325d908479d3a7435078526c88fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:41:22.172770 containerd[1484]: time="2025-05-08T00:41:22.171706964Z" level=info msg="RemovePodSandbox \"491563f13a9bd95b4394d1b2dd8037bc2e59325d908479d3a7435078526c88fe\" returns successfully" May 8 00:41:22.174573 containerd[1484]: time="2025-05-08T00:41:22.174549675Z" level=info msg="StopPodSandbox for \"1a5ec6334fbdcfbfe62e19da00bca8aef58ac57622a4388aeed2a44d18bc94a8\"" May 8 00:41:22.174737 containerd[1484]: time="2025-05-08T00:41:22.174720115Z" level=info msg="TearDown network for sandbox \"1a5ec6334fbdcfbfe62e19da00bca8aef58ac57622a4388aeed2a44d18bc94a8\" successfully" May 8 00:41:22.174847 containerd[1484]: time="2025-05-08T00:41:22.174773445Z" level=info msg="StopPodSandbox for \"1a5ec6334fbdcfbfe62e19da00bca8aef58ac57622a4388aeed2a44d18bc94a8\" returns successfully" May 8 00:41:22.175107 containerd[1484]: time="2025-05-08T00:41:22.175086735Z" level=info msg="RemovePodSandbox for \"1a5ec6334fbdcfbfe62e19da00bca8aef58ac57622a4388aeed2a44d18bc94a8\"" May 8 00:41:22.175509 containerd[1484]: time="2025-05-08T00:41:22.175490625Z" level=info msg="Forcibly stopping sandbox \"1a5ec6334fbdcfbfe62e19da00bca8aef58ac57622a4388aeed2a44d18bc94a8\"" May 8 00:41:22.176345 containerd[1484]: time="2025-05-08T00:41:22.176308356Z" level=info msg="TearDown network for sandbox \"1a5ec6334fbdcfbfe62e19da00bca8aef58ac57622a4388aeed2a44d18bc94a8\" successfully" May 8 00:41:22.180060 containerd[1484]: time="2025-05-08T00:41:22.180009887Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1a5ec6334fbdcfbfe62e19da00bca8aef58ac57622a4388aeed2a44d18bc94a8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:22.180111 containerd[1484]: time="2025-05-08T00:41:22.180078277Z" level=info msg="RemovePodSandbox \"1a5ec6334fbdcfbfe62e19da00bca8aef58ac57622a4388aeed2a44d18bc94a8\" returns successfully" May 8 00:41:22.180432 containerd[1484]: time="2025-05-08T00:41:22.180396818Z" level=info msg="StopPodSandbox for \"254038beb8ae5d079f77bb4cc30649336b0ab3f4ded1b86b5b65e35d4a85d7c4\"" May 8 00:41:22.180534 containerd[1484]: time="2025-05-08T00:41:22.180500718Z" level=info msg="TearDown network for sandbox \"254038beb8ae5d079f77bb4cc30649336b0ab3f4ded1b86b5b65e35d4a85d7c4\" successfully" May 8 00:41:22.180534 containerd[1484]: time="2025-05-08T00:41:22.180522728Z" level=info msg="StopPodSandbox for \"254038beb8ae5d079f77bb4cc30649336b0ab3f4ded1b86b5b65e35d4a85d7c4\" returns successfully" May 8 00:41:22.180819 containerd[1484]: time="2025-05-08T00:41:22.180797938Z" level=info msg="RemovePodSandbox for \"254038beb8ae5d079f77bb4cc30649336b0ab3f4ded1b86b5b65e35d4a85d7c4\"" May 8 00:41:22.180911 containerd[1484]: time="2025-05-08T00:41:22.180885047Z" level=info msg="Forcibly stopping sandbox \"254038beb8ae5d079f77bb4cc30649336b0ab3f4ded1b86b5b65e35d4a85d7c4\"" May 8 00:41:22.180996 containerd[1484]: time="2025-05-08T00:41:22.180962767Z" level=info msg="TearDown network for sandbox \"254038beb8ae5d079f77bb4cc30649336b0ab3f4ded1b86b5b65e35d4a85d7c4\" successfully" May 8 00:41:22.183781 containerd[1484]: time="2025-05-08T00:41:22.183668849Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"254038beb8ae5d079f77bb4cc30649336b0ab3f4ded1b86b5b65e35d4a85d7c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:41:22.183781 containerd[1484]: time="2025-05-08T00:41:22.183718770Z" level=info msg="RemovePodSandbox \"254038beb8ae5d079f77bb4cc30649336b0ab3f4ded1b86b5b65e35d4a85d7c4\" returns successfully" May 8 00:41:22.184241 containerd[1484]: time="2025-05-08T00:41:22.184189809Z" level=info msg="StopPodSandbox for \"ec6ff8a70cbba35fa6e95e096678c2bdd0d7d5f80a4f60615af90b7503e5e0cc\"" May 8 00:41:22.184328 containerd[1484]: time="2025-05-08T00:41:22.184306029Z" level=info msg="TearDown network for sandbox \"ec6ff8a70cbba35fa6e95e096678c2bdd0d7d5f80a4f60615af90b7503e5e0cc\" successfully" May 8 00:41:22.184328 containerd[1484]: time="2025-05-08T00:41:22.184323759Z" level=info msg="StopPodSandbox for \"ec6ff8a70cbba35fa6e95e096678c2bdd0d7d5f80a4f60615af90b7503e5e0cc\" returns successfully" May 8 00:41:22.184596 containerd[1484]: time="2025-05-08T00:41:22.184576120Z" level=info msg="RemovePodSandbox for \"ec6ff8a70cbba35fa6e95e096678c2bdd0d7d5f80a4f60615af90b7503e5e0cc\"" May 8 00:41:22.184635 containerd[1484]: time="2025-05-08T00:41:22.184597830Z" level=info msg="Forcibly stopping sandbox \"ec6ff8a70cbba35fa6e95e096678c2bdd0d7d5f80a4f60615af90b7503e5e0cc\"" May 8 00:41:22.184707 containerd[1484]: time="2025-05-08T00:41:22.184674010Z" level=info msg="TearDown network for sandbox \"ec6ff8a70cbba35fa6e95e096678c2bdd0d7d5f80a4f60615af90b7503e5e0cc\" successfully" May 8 00:41:22.188038 containerd[1484]: time="2025-05-08T00:41:22.187920331Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec6ff8a70cbba35fa6e95e096678c2bdd0d7d5f80a4f60615af90b7503e5e0cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:22.188038 containerd[1484]: time="2025-05-08T00:41:22.187976961Z" level=info msg="RemovePodSandbox \"ec6ff8a70cbba35fa6e95e096678c2bdd0d7d5f80a4f60615af90b7503e5e0cc\" returns successfully" May 8 00:41:22.188605 containerd[1484]: time="2025-05-08T00:41:22.188559882Z" level=info msg="StopPodSandbox for \"0f2de833dfefbd5720866b0bc3d59de40dcc8bf98d512cffff4abe4f136357df\"" May 8 00:41:22.188657 containerd[1484]: time="2025-05-08T00:41:22.188646202Z" level=info msg="TearDown network for sandbox \"0f2de833dfefbd5720866b0bc3d59de40dcc8bf98d512cffff4abe4f136357df\" successfully" May 8 00:41:22.188712 containerd[1484]: time="2025-05-08T00:41:22.188656852Z" level=info msg="StopPodSandbox for \"0f2de833dfefbd5720866b0bc3d59de40dcc8bf98d512cffff4abe4f136357df\" returns successfully" May 8 00:41:22.189256 containerd[1484]: time="2025-05-08T00:41:22.189231502Z" level=info msg="RemovePodSandbox for \"0f2de833dfefbd5720866b0bc3d59de40dcc8bf98d512cffff4abe4f136357df\"" May 8 00:41:22.189304 containerd[1484]: time="2025-05-08T00:41:22.189256572Z" level=info msg="Forcibly stopping sandbox \"0f2de833dfefbd5720866b0bc3d59de40dcc8bf98d512cffff4abe4f136357df\"" May 8 00:41:22.189374 containerd[1484]: time="2025-05-08T00:41:22.189339782Z" level=info msg="TearDown network for sandbox \"0f2de833dfefbd5720866b0bc3d59de40dcc8bf98d512cffff4abe4f136357df\" successfully" May 8 00:41:22.194370 containerd[1484]: time="2025-05-08T00:41:22.194253944Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0f2de833dfefbd5720866b0bc3d59de40dcc8bf98d512cffff4abe4f136357df\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:41:22.194370 containerd[1484]: time="2025-05-08T00:41:22.194301664Z" level=info msg="RemovePodSandbox \"0f2de833dfefbd5720866b0bc3d59de40dcc8bf98d512cffff4abe4f136357df\" returns successfully" May 8 00:41:22.194578 containerd[1484]: time="2025-05-08T00:41:22.194529304Z" level=info msg="StopPodSandbox for \"7cea920b458187df49fda92091eea31ad48f4aca2c9ceb6a5427ed042164e7e5\"" May 8 00:41:22.194654 containerd[1484]: time="2025-05-08T00:41:22.194617865Z" level=info msg="TearDown network for sandbox \"7cea920b458187df49fda92091eea31ad48f4aca2c9ceb6a5427ed042164e7e5\" successfully" May 8 00:41:22.194654 containerd[1484]: time="2025-05-08T00:41:22.194633875Z" level=info msg="StopPodSandbox for \"7cea920b458187df49fda92091eea31ad48f4aca2c9ceb6a5427ed042164e7e5\" returns successfully" May 8 00:41:22.194986 containerd[1484]: time="2025-05-08T00:41:22.194964924Z" level=info msg="RemovePodSandbox for \"7cea920b458187df49fda92091eea31ad48f4aca2c9ceb6a5427ed042164e7e5\"" May 8 00:41:22.194986 containerd[1484]: time="2025-05-08T00:41:22.194986104Z" level=info msg="Forcibly stopping sandbox \"7cea920b458187df49fda92091eea31ad48f4aca2c9ceb6a5427ed042164e7e5\"" May 8 00:41:22.195082 containerd[1484]: time="2025-05-08T00:41:22.195049874Z" level=info msg="TearDown network for sandbox \"7cea920b458187df49fda92091eea31ad48f4aca2c9ceb6a5427ed042164e7e5\" successfully" May 8 00:41:22.198067 containerd[1484]: time="2025-05-08T00:41:22.198036146Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7cea920b458187df49fda92091eea31ad48f4aca2c9ceb6a5427ed042164e7e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:22.198127 containerd[1484]: time="2025-05-08T00:41:22.198076386Z" level=info msg="RemovePodSandbox \"7cea920b458187df49fda92091eea31ad48f4aca2c9ceb6a5427ed042164e7e5\" returns successfully" May 8 00:41:22.198349 containerd[1484]: time="2025-05-08T00:41:22.198307066Z" level=info msg="StopPodSandbox for \"0363d114f0850147b7c02766128fd397cd3e23f0fff675e538955447bd7de052\"" May 8 00:41:22.198410 containerd[1484]: time="2025-05-08T00:41:22.198389836Z" level=info msg="TearDown network for sandbox \"0363d114f0850147b7c02766128fd397cd3e23f0fff675e538955447bd7de052\" successfully" May 8 00:41:22.198410 containerd[1484]: time="2025-05-08T00:41:22.198405886Z" level=info msg="StopPodSandbox for \"0363d114f0850147b7c02766128fd397cd3e23f0fff675e538955447bd7de052\" returns successfully" May 8 00:41:22.198676 containerd[1484]: time="2025-05-08T00:41:22.198653297Z" level=info msg="RemovePodSandbox for \"0363d114f0850147b7c02766128fd397cd3e23f0fff675e538955447bd7de052\"" May 8 00:41:22.198711 containerd[1484]: time="2025-05-08T00:41:22.198675627Z" level=info msg="Forcibly stopping sandbox \"0363d114f0850147b7c02766128fd397cd3e23f0fff675e538955447bd7de052\"" May 8 00:41:22.198765 containerd[1484]: time="2025-05-08T00:41:22.198736207Z" level=info msg="TearDown network for sandbox \"0363d114f0850147b7c02766128fd397cd3e23f0fff675e538955447bd7de052\" successfully" May 8 00:41:22.201382 containerd[1484]: time="2025-05-08T00:41:22.201347078Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0363d114f0850147b7c02766128fd397cd3e23f0fff675e538955447bd7de052\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:41:22.201440 containerd[1484]: time="2025-05-08T00:41:22.201384158Z" level=info msg="RemovePodSandbox \"0363d114f0850147b7c02766128fd397cd3e23f0fff675e538955447bd7de052\" returns successfully" May 8 00:41:22.201781 containerd[1484]: time="2025-05-08T00:41:22.201631048Z" level=info msg="StopPodSandbox for \"bde47cd3e6359058f3595ac8dfbfadfccd92ce7798780782030abd85842d656a\"" May 8 00:41:22.201781 containerd[1484]: time="2025-05-08T00:41:22.201721338Z" level=info msg="TearDown network for sandbox \"bde47cd3e6359058f3595ac8dfbfadfccd92ce7798780782030abd85842d656a\" successfully" May 8 00:41:22.201781 containerd[1484]: time="2025-05-08T00:41:22.201732668Z" level=info msg="StopPodSandbox for \"bde47cd3e6359058f3595ac8dfbfadfccd92ce7798780782030abd85842d656a\" returns successfully" May 8 00:41:22.201991 containerd[1484]: time="2025-05-08T00:41:22.201966527Z" level=info msg="RemovePodSandbox for \"bde47cd3e6359058f3595ac8dfbfadfccd92ce7798780782030abd85842d656a\"" May 8 00:41:22.202037 containerd[1484]: time="2025-05-08T00:41:22.201993418Z" level=info msg="Forcibly stopping sandbox \"bde47cd3e6359058f3595ac8dfbfadfccd92ce7798780782030abd85842d656a\"" May 8 00:41:22.202096 containerd[1484]: time="2025-05-08T00:41:22.202063018Z" level=info msg="TearDown network for sandbox \"bde47cd3e6359058f3595ac8dfbfadfccd92ce7798780782030abd85842d656a\" successfully" May 8 00:41:22.204774 containerd[1484]: time="2025-05-08T00:41:22.204740290Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bde47cd3e6359058f3595ac8dfbfadfccd92ce7798780782030abd85842d656a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:22.204873 containerd[1484]: time="2025-05-08T00:41:22.204779490Z" level=info msg="RemovePodSandbox \"bde47cd3e6359058f3595ac8dfbfadfccd92ce7798780782030abd85842d656a\" returns successfully" May 8 00:41:22.205032 containerd[1484]: time="2025-05-08T00:41:22.204998909Z" level=info msg="StopPodSandbox for \"16f7dd20a2911fa3a110c8ca4f086482370f91dd219076c74240a3773db1996c\"" May 8 00:41:22.205107 containerd[1484]: time="2025-05-08T00:41:22.205086279Z" level=info msg="TearDown network for sandbox \"16f7dd20a2911fa3a110c8ca4f086482370f91dd219076c74240a3773db1996c\" successfully" May 8 00:41:22.205107 containerd[1484]: time="2025-05-08T00:41:22.205102399Z" level=info msg="StopPodSandbox for \"16f7dd20a2911fa3a110c8ca4f086482370f91dd219076c74240a3773db1996c\" returns successfully" May 8 00:41:22.205357 containerd[1484]: time="2025-05-08T00:41:22.205333839Z" level=info msg="RemovePodSandbox for \"16f7dd20a2911fa3a110c8ca4f086482370f91dd219076c74240a3773db1996c\"" May 8 00:41:22.205357 containerd[1484]: time="2025-05-08T00:41:22.205357519Z" level=info msg="Forcibly stopping sandbox \"16f7dd20a2911fa3a110c8ca4f086482370f91dd219076c74240a3773db1996c\"" May 8 00:41:22.205440 containerd[1484]: time="2025-05-08T00:41:22.205416070Z" level=info msg="TearDown network for sandbox \"16f7dd20a2911fa3a110c8ca4f086482370f91dd219076c74240a3773db1996c\" successfully" May 8 00:41:22.207995 containerd[1484]: time="2025-05-08T00:41:22.207961650Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"16f7dd20a2911fa3a110c8ca4f086482370f91dd219076c74240a3773db1996c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:41:22.208058 containerd[1484]: time="2025-05-08T00:41:22.208001500Z" level=info msg="RemovePodSandbox \"16f7dd20a2911fa3a110c8ca4f086482370f91dd219076c74240a3773db1996c\" returns successfully" May 8 00:41:22.208305 containerd[1484]: time="2025-05-08T00:41:22.208272041Z" level=info msg="StopPodSandbox for \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\"" May 8 00:41:22.208369 containerd[1484]: time="2025-05-08T00:41:22.208349481Z" level=info msg="TearDown network for sandbox \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\" successfully" May 8 00:41:22.208369 containerd[1484]: time="2025-05-08T00:41:22.208364721Z" level=info msg="StopPodSandbox for \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\" returns successfully" May 8 00:41:22.208579 containerd[1484]: time="2025-05-08T00:41:22.208552501Z" level=info msg="RemovePodSandbox for \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\"" May 8 00:41:22.208579 containerd[1484]: time="2025-05-08T00:41:22.208574391Z" level=info msg="Forcibly stopping sandbox \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\"" May 8 00:41:22.208669 containerd[1484]: time="2025-05-08T00:41:22.208636091Z" level=info msg="TearDown network for sandbox \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\" successfully" May 8 00:41:22.211171 containerd[1484]: time="2025-05-08T00:41:22.211136762Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:22.211266 containerd[1484]: time="2025-05-08T00:41:22.211172202Z" level=info msg="RemovePodSandbox \"c64bfccbe124a798c02287753e3b5e5121bd1f1345ee028f9df18b3a54ec3af5\" returns successfully" May 8 00:41:22.211639 containerd[1484]: time="2025-05-08T00:41:22.211458712Z" level=info msg="StopPodSandbox for \"4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2\"" May 8 00:41:22.211639 containerd[1484]: time="2025-05-08T00:41:22.211562203Z" level=info msg="TearDown network for sandbox \"4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2\" successfully" May 8 00:41:22.211639 containerd[1484]: time="2025-05-08T00:41:22.211573773Z" level=info msg="StopPodSandbox for \"4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2\" returns successfully" May 8 00:41:22.211833 containerd[1484]: time="2025-05-08T00:41:22.211806193Z" level=info msg="RemovePodSandbox for \"4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2\"" May 8 00:41:22.211861 containerd[1484]: time="2025-05-08T00:41:22.211834103Z" level=info msg="Forcibly stopping sandbox \"4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2\"" May 8 00:41:22.211943 containerd[1484]: time="2025-05-08T00:41:22.211901522Z" level=info msg="TearDown network for sandbox \"4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2\" successfully" May 8 00:41:22.214591 containerd[1484]: time="2025-05-08T00:41:22.214536354Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:41:22.214591 containerd[1484]: time="2025-05-08T00:41:22.214591714Z" level=info msg="RemovePodSandbox \"4a292d438a391d4d76516ee5741b306a42ee6852da94d7b2787ec9ad2c8d3ae2\" returns successfully" May 8 00:41:22.214904 containerd[1484]: time="2025-05-08T00:41:22.214872874Z" level=info msg="StopPodSandbox for \"7215bbf85862b33af582b682e67e348d877731a8d7124c168f8bf683231e2b75\"" May 8 00:41:22.215123 containerd[1484]: time="2025-05-08T00:41:22.215071684Z" level=info msg="TearDown network for sandbox \"7215bbf85862b33af582b682e67e348d877731a8d7124c168f8bf683231e2b75\" successfully" May 8 00:41:22.215123 containerd[1484]: time="2025-05-08T00:41:22.215092044Z" level=info msg="StopPodSandbox for \"7215bbf85862b33af582b682e67e348d877731a8d7124c168f8bf683231e2b75\" returns successfully" May 8 00:41:22.215493 containerd[1484]: time="2025-05-08T00:41:22.215456534Z" level=info msg="RemovePodSandbox for \"7215bbf85862b33af582b682e67e348d877731a8d7124c168f8bf683231e2b75\"" May 8 00:41:22.215493 containerd[1484]: time="2025-05-08T00:41:22.215486574Z" level=info msg="Forcibly stopping sandbox \"7215bbf85862b33af582b682e67e348d877731a8d7124c168f8bf683231e2b75\"" May 8 00:41:22.215576 containerd[1484]: time="2025-05-08T00:41:22.215554905Z" level=info msg="TearDown network for sandbox \"7215bbf85862b33af582b682e67e348d877731a8d7124c168f8bf683231e2b75\" successfully" May 8 00:41:22.218261 containerd[1484]: time="2025-05-08T00:41:22.218180525Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7215bbf85862b33af582b682e67e348d877731a8d7124c168f8bf683231e2b75\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:22.218261 containerd[1484]: time="2025-05-08T00:41:22.218249475Z" level=info msg="RemovePodSandbox \"7215bbf85862b33af582b682e67e348d877731a8d7124c168f8bf683231e2b75\" returns successfully" May 8 00:41:22.218652 containerd[1484]: time="2025-05-08T00:41:22.218516336Z" level=info msg="StopPodSandbox for \"630007c92a4d1ebcdf28586d606b3e4f216eec5ec510cadc3cc9d562df67cf0a\"" May 8 00:41:22.218652 containerd[1484]: time="2025-05-08T00:41:22.218593076Z" level=info msg="TearDown network for sandbox \"630007c92a4d1ebcdf28586d606b3e4f216eec5ec510cadc3cc9d562df67cf0a\" successfully" May 8 00:41:22.218652 containerd[1484]: time="2025-05-08T00:41:22.218603066Z" level=info msg="StopPodSandbox for \"630007c92a4d1ebcdf28586d606b3e4f216eec5ec510cadc3cc9d562df67cf0a\" returns successfully" May 8 00:41:22.218845 containerd[1484]: time="2025-05-08T00:41:22.218819736Z" level=info msg="RemovePodSandbox for \"630007c92a4d1ebcdf28586d606b3e4f216eec5ec510cadc3cc9d562df67cf0a\"" May 8 00:41:22.218884 containerd[1484]: time="2025-05-08T00:41:22.218848686Z" level=info msg="Forcibly stopping sandbox \"630007c92a4d1ebcdf28586d606b3e4f216eec5ec510cadc3cc9d562df67cf0a\"" May 8 00:41:22.218956 containerd[1484]: time="2025-05-08T00:41:22.218922225Z" level=info msg="TearDown network for sandbox \"630007c92a4d1ebcdf28586d606b3e4f216eec5ec510cadc3cc9d562df67cf0a\" successfully" May 8 00:41:22.223594 containerd[1484]: time="2025-05-08T00:41:22.223477098Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"630007c92a4d1ebcdf28586d606b3e4f216eec5ec510cadc3cc9d562df67cf0a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:41:22.223594 containerd[1484]: time="2025-05-08T00:41:22.223519968Z" level=info msg="RemovePodSandbox \"630007c92a4d1ebcdf28586d606b3e4f216eec5ec510cadc3cc9d562df67cf0a\" returns successfully" May 8 00:41:22.224403 containerd[1484]: time="2025-05-08T00:41:22.224331318Z" level=info msg="StopPodSandbox for \"6dfad420ea9956b0a4b0082dddb85016cd6d1d8d863452fcd9fd2431dcb13dbd\"" May 8 00:41:22.224700 containerd[1484]: time="2025-05-08T00:41:22.224420109Z" level=info msg="TearDown network for sandbox \"6dfad420ea9956b0a4b0082dddb85016cd6d1d8d863452fcd9fd2431dcb13dbd\" successfully" May 8 00:41:22.224700 containerd[1484]: time="2025-05-08T00:41:22.224431319Z" level=info msg="StopPodSandbox for \"6dfad420ea9956b0a4b0082dddb85016cd6d1d8d863452fcd9fd2431dcb13dbd\" returns successfully" May 8 00:41:22.225154 containerd[1484]: time="2025-05-08T00:41:22.225128009Z" level=info msg="RemovePodSandbox for \"6dfad420ea9956b0a4b0082dddb85016cd6d1d8d863452fcd9fd2431dcb13dbd\"" May 8 00:41:22.225154 containerd[1484]: time="2025-05-08T00:41:22.225153109Z" level=info msg="Forcibly stopping sandbox \"6dfad420ea9956b0a4b0082dddb85016cd6d1d8d863452fcd9fd2431dcb13dbd\"" May 8 00:41:22.225349 containerd[1484]: time="2025-05-08T00:41:22.225235339Z" level=info msg="TearDown network for sandbox \"6dfad420ea9956b0a4b0082dddb85016cd6d1d8d863452fcd9fd2431dcb13dbd\" successfully" May 8 00:41:22.237317 containerd[1484]: time="2025-05-08T00:41:22.237280125Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6dfad420ea9956b0a4b0082dddb85016cd6d1d8d863452fcd9fd2431dcb13dbd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:22.237364 containerd[1484]: time="2025-05-08T00:41:22.237330065Z" level=info msg="RemovePodSandbox \"6dfad420ea9956b0a4b0082dddb85016cd6d1d8d863452fcd9fd2431dcb13dbd\" returns successfully" May 8 00:41:22.237703 containerd[1484]: time="2025-05-08T00:41:22.237669315Z" level=info msg="StopPodSandbox for \"d7068f82f3e53ecbf49d46a9f7e05e37c404556af594a559eafcca79e77513f2\"" May 8 00:41:22.237782 containerd[1484]: time="2025-05-08T00:41:22.237754125Z" level=info msg="TearDown network for sandbox \"d7068f82f3e53ecbf49d46a9f7e05e37c404556af594a559eafcca79e77513f2\" successfully" May 8 00:41:22.237782 containerd[1484]: time="2025-05-08T00:41:22.237773605Z" level=info msg="StopPodSandbox for \"d7068f82f3e53ecbf49d46a9f7e05e37c404556af594a559eafcca79e77513f2\" returns successfully" May 8 00:41:22.238488 containerd[1484]: time="2025-05-08T00:41:22.238454145Z" level=info msg="RemovePodSandbox for \"d7068f82f3e53ecbf49d46a9f7e05e37c404556af594a559eafcca79e77513f2\"" May 8 00:41:22.238488 containerd[1484]: time="2025-05-08T00:41:22.238481335Z" level=info msg="Forcibly stopping sandbox \"d7068f82f3e53ecbf49d46a9f7e05e37c404556af594a559eafcca79e77513f2\"" May 8 00:41:22.238598 containerd[1484]: time="2025-05-08T00:41:22.238557346Z" level=info msg="TearDown network for sandbox \"d7068f82f3e53ecbf49d46a9f7e05e37c404556af594a559eafcca79e77513f2\" successfully" May 8 00:41:22.242869 containerd[1484]: time="2025-05-08T00:41:22.242827548Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7068f82f3e53ecbf49d46a9f7e05e37c404556af594a559eafcca79e77513f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:41:22.242925 containerd[1484]: time="2025-05-08T00:41:22.242900157Z" level=info msg="RemovePodSandbox \"d7068f82f3e53ecbf49d46a9f7e05e37c404556af594a559eafcca79e77513f2\" returns successfully" May 8 00:41:22.243303 containerd[1484]: time="2025-05-08T00:41:22.243271868Z" level=info msg="StopPodSandbox for \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\"" May 8 00:41:22.243425 containerd[1484]: time="2025-05-08T00:41:22.243395348Z" level=info msg="TearDown network for sandbox \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\" successfully" May 8 00:41:22.243425 containerd[1484]: time="2025-05-08T00:41:22.243415168Z" level=info msg="StopPodSandbox for \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\" returns successfully" May 8 00:41:22.243770 containerd[1484]: time="2025-05-08T00:41:22.243729248Z" level=info msg="RemovePodSandbox for \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\"" May 8 00:41:22.243770 containerd[1484]: time="2025-05-08T00:41:22.243762648Z" level=info msg="Forcibly stopping sandbox \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\"" May 8 00:41:22.244335 containerd[1484]: time="2025-05-08T00:41:22.243843888Z" level=info msg="TearDown network for sandbox \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\" successfully" May 8 00:41:22.246378 containerd[1484]: time="2025-05-08T00:41:22.246340289Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:22.246423 containerd[1484]: time="2025-05-08T00:41:22.246381769Z" level=info msg="RemovePodSandbox \"49f80a8a8f2af02bfb716a5a090128d20bf2a53022e72925bb6f8903b9b92159\" returns successfully" May 8 00:41:22.246646 containerd[1484]: time="2025-05-08T00:41:22.246612829Z" level=info msg="StopPodSandbox for \"0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b\"" May 8 00:41:22.246716 containerd[1484]: time="2025-05-08T00:41:22.246703030Z" level=info msg="TearDown network for sandbox \"0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b\" successfully" May 8 00:41:22.246753 containerd[1484]: time="2025-05-08T00:41:22.246714680Z" level=info msg="StopPodSandbox for \"0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b\" returns successfully" May 8 00:41:22.247235 containerd[1484]: time="2025-05-08T00:41:22.246978029Z" level=info msg="RemovePodSandbox for \"0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b\"" May 8 00:41:22.247235 containerd[1484]: time="2025-05-08T00:41:22.247001669Z" level=info msg="Forcibly stopping sandbox \"0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b\"" May 8 00:41:22.247235 containerd[1484]: time="2025-05-08T00:41:22.247061879Z" level=info msg="TearDown network for sandbox \"0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b\" successfully" May 8 00:41:22.249672 containerd[1484]: time="2025-05-08T00:41:22.249633541Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:41:22.249749 containerd[1484]: time="2025-05-08T00:41:22.249691891Z" level=info msg="RemovePodSandbox \"0595de19b9e7a5f6156d9ac9bf8b10cf51a1b8e40507c3de1b99803aef4e4c6b\" returns successfully" May 8 00:41:22.250261 containerd[1484]: time="2025-05-08T00:41:22.249997980Z" level=info msg="StopPodSandbox for \"9937260851b563393977e3e75f5403e25c4987dfae8dbf9b01dbb4ee076a77d5\"" May 8 00:41:22.250261 containerd[1484]: time="2025-05-08T00:41:22.250105011Z" level=info msg="TearDown network for sandbox \"9937260851b563393977e3e75f5403e25c4987dfae8dbf9b01dbb4ee076a77d5\" successfully" May 8 00:41:22.250261 containerd[1484]: time="2025-05-08T00:41:22.250133951Z" level=info msg="StopPodSandbox for \"9937260851b563393977e3e75f5403e25c4987dfae8dbf9b01dbb4ee076a77d5\" returns successfully" May 8 00:41:22.250508 containerd[1484]: time="2025-05-08T00:41:22.250468761Z" level=info msg="RemovePodSandbox for \"9937260851b563393977e3e75f5403e25c4987dfae8dbf9b01dbb4ee076a77d5\"" May 8 00:41:22.250667 containerd[1484]: time="2025-05-08T00:41:22.250605251Z" level=info msg="Forcibly stopping sandbox \"9937260851b563393977e3e75f5403e25c4987dfae8dbf9b01dbb4ee076a77d5\"" May 8 00:41:22.250701 containerd[1484]: time="2025-05-08T00:41:22.250676751Z" level=info msg="TearDown network for sandbox \"9937260851b563393977e3e75f5403e25c4987dfae8dbf9b01dbb4ee076a77d5\" successfully" May 8 00:41:22.253439 containerd[1484]: time="2025-05-08T00:41:22.253402252Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9937260851b563393977e3e75f5403e25c4987dfae8dbf9b01dbb4ee076a77d5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:22.253511 containerd[1484]: time="2025-05-08T00:41:22.253468133Z" level=info msg="RemovePodSandbox \"9937260851b563393977e3e75f5403e25c4987dfae8dbf9b01dbb4ee076a77d5\" returns successfully" May 8 00:41:22.253806 containerd[1484]: time="2025-05-08T00:41:22.253749433Z" level=info msg="StopPodSandbox for \"39c46626994dee89b35579716b77678b6012b531c8188c994c37595bb6d31d93\"" May 8 00:41:22.253844 containerd[1484]: time="2025-05-08T00:41:22.253826703Z" level=info msg="TearDown network for sandbox \"39c46626994dee89b35579716b77678b6012b531c8188c994c37595bb6d31d93\" successfully" May 8 00:41:22.253844 containerd[1484]: time="2025-05-08T00:41:22.253837763Z" level=info msg="StopPodSandbox for \"39c46626994dee89b35579716b77678b6012b531c8188c994c37595bb6d31d93\" returns successfully" May 8 00:41:22.254174 containerd[1484]: time="2025-05-08T00:41:22.254066402Z" level=info msg="RemovePodSandbox for \"39c46626994dee89b35579716b77678b6012b531c8188c994c37595bb6d31d93\"" May 8 00:41:22.254174 containerd[1484]: time="2025-05-08T00:41:22.254122323Z" level=info msg="Forcibly stopping sandbox \"39c46626994dee89b35579716b77678b6012b531c8188c994c37595bb6d31d93\"" May 8 00:41:22.254300 containerd[1484]: time="2025-05-08T00:41:22.254263033Z" level=info msg="TearDown network for sandbox \"39c46626994dee89b35579716b77678b6012b531c8188c994c37595bb6d31d93\" successfully" May 8 00:41:22.256621 containerd[1484]: time="2025-05-08T00:41:22.256584924Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"39c46626994dee89b35579716b77678b6012b531c8188c994c37595bb6d31d93\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:41:22.256654 containerd[1484]: time="2025-05-08T00:41:22.256622844Z" level=info msg="RemovePodSandbox \"39c46626994dee89b35579716b77678b6012b531c8188c994c37595bb6d31d93\" returns successfully" May 8 00:41:22.256914 containerd[1484]: time="2025-05-08T00:41:22.256881154Z" level=info msg="StopPodSandbox for \"e33b4a4dc6d80c3a1fde4dc5c0d78c0176524e85e8d04cad28e813b7e768ec5b\"" May 8 00:41:22.256983 containerd[1484]: time="2025-05-08T00:41:22.256960274Z" level=info msg="TearDown network for sandbox \"e33b4a4dc6d80c3a1fde4dc5c0d78c0176524e85e8d04cad28e813b7e768ec5b\" successfully" May 8 00:41:22.257018 containerd[1484]: time="2025-05-08T00:41:22.256985164Z" level=info msg="StopPodSandbox for \"e33b4a4dc6d80c3a1fde4dc5c0d78c0176524e85e8d04cad28e813b7e768ec5b\" returns successfully" May 8 00:41:22.257404 containerd[1484]: time="2025-05-08T00:41:22.257347044Z" level=info msg="RemovePodSandbox for \"e33b4a4dc6d80c3a1fde4dc5c0d78c0176524e85e8d04cad28e813b7e768ec5b\"" May 8 00:41:22.257404 containerd[1484]: time="2025-05-08T00:41:22.257385354Z" level=info msg="Forcibly stopping sandbox \"e33b4a4dc6d80c3a1fde4dc5c0d78c0176524e85e8d04cad28e813b7e768ec5b\"" May 8 00:41:22.257558 containerd[1484]: time="2025-05-08T00:41:22.257500884Z" level=info msg="TearDown network for sandbox \"e33b4a4dc6d80c3a1fde4dc5c0d78c0176524e85e8d04cad28e813b7e768ec5b\" successfully" May 8 00:41:22.259956 containerd[1484]: time="2025-05-08T00:41:22.259917895Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e33b4a4dc6d80c3a1fde4dc5c0d78c0176524e85e8d04cad28e813b7e768ec5b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:22.259991 containerd[1484]: time="2025-05-08T00:41:22.259976395Z" level=info msg="RemovePodSandbox \"e33b4a4dc6d80c3a1fde4dc5c0d78c0176524e85e8d04cad28e813b7e768ec5b\" returns successfully" May 8 00:41:22.260381 containerd[1484]: time="2025-05-08T00:41:22.260357306Z" level=info msg="StopPodSandbox for \"1ca2b85783fd6bd4f652f2f90f40057742a83347d2d35c8abcb30eaf7f454804\"" May 8 00:41:22.260621 containerd[1484]: time="2025-05-08T00:41:22.260520776Z" level=info msg="TearDown network for sandbox \"1ca2b85783fd6bd4f652f2f90f40057742a83347d2d35c8abcb30eaf7f454804\" successfully" May 8 00:41:22.260621 containerd[1484]: time="2025-05-08T00:41:22.260536156Z" level=info msg="StopPodSandbox for \"1ca2b85783fd6bd4f652f2f90f40057742a83347d2d35c8abcb30eaf7f454804\" returns successfully" May 8 00:41:22.260839 containerd[1484]: time="2025-05-08T00:41:22.260813606Z" level=info msg="RemovePodSandbox for \"1ca2b85783fd6bd4f652f2f90f40057742a83347d2d35c8abcb30eaf7f454804\"" May 8 00:41:22.260870 containerd[1484]: time="2025-05-08T00:41:22.260842186Z" level=info msg="Forcibly stopping sandbox \"1ca2b85783fd6bd4f652f2f90f40057742a83347d2d35c8abcb30eaf7f454804\"" May 8 00:41:22.260951 containerd[1484]: time="2025-05-08T00:41:22.260911896Z" level=info msg="TearDown network for sandbox \"1ca2b85783fd6bd4f652f2f90f40057742a83347d2d35c8abcb30eaf7f454804\" successfully" May 8 00:41:22.263518 containerd[1484]: time="2025-05-08T00:41:22.263490937Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1ca2b85783fd6bd4f652f2f90f40057742a83347d2d35c8abcb30eaf7f454804\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:41:22.263594 containerd[1484]: time="2025-05-08T00:41:22.263526527Z" level=info msg="RemovePodSandbox \"1ca2b85783fd6bd4f652f2f90f40057742a83347d2d35c8abcb30eaf7f454804\" returns successfully" May 8 00:41:22.263855 containerd[1484]: time="2025-05-08T00:41:22.263831508Z" level=info msg="StopPodSandbox for \"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\"" May 8 00:41:22.263937 containerd[1484]: time="2025-05-08T00:41:22.263915907Z" level=info msg="TearDown network for sandbox \"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\" successfully" May 8 00:41:22.263937 containerd[1484]: time="2025-05-08T00:41:22.263933817Z" level=info msg="StopPodSandbox for \"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\" returns successfully" May 8 00:41:22.264254 containerd[1484]: time="2025-05-08T00:41:22.264151587Z" level=info msg="RemovePodSandbox for \"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\"" May 8 00:41:22.264254 containerd[1484]: time="2025-05-08T00:41:22.264222757Z" level=info msg="Forcibly stopping sandbox \"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\"" May 8 00:41:22.264526 containerd[1484]: time="2025-05-08T00:41:22.264307778Z" level=info msg="TearDown network for sandbox \"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\" successfully" May 8 00:41:22.266902 containerd[1484]: time="2025-05-08T00:41:22.266875748Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:22.266963 containerd[1484]: time="2025-05-08T00:41:22.266907848Z" level=info msg="RemovePodSandbox \"175ac8f74c1681576b47b0138ee41ecea26c62242492d12e705012e6dee1fc51\" returns successfully" May 8 00:41:22.267191 containerd[1484]: time="2025-05-08T00:41:22.267170679Z" level=info msg="StopPodSandbox for \"fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617\"" May 8 00:41:22.267411 containerd[1484]: time="2025-05-08T00:41:22.267328599Z" level=info msg="TearDown network for sandbox \"fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617\" successfully" May 8 00:41:22.267411 containerd[1484]: time="2025-05-08T00:41:22.267343199Z" level=info msg="StopPodSandbox for \"fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617\" returns successfully" May 8 00:41:22.267615 containerd[1484]: time="2025-05-08T00:41:22.267555599Z" level=info msg="RemovePodSandbox for \"fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617\"" May 8 00:41:22.267615 containerd[1484]: time="2025-05-08T00:41:22.267580049Z" level=info msg="Forcibly stopping sandbox \"fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617\"" May 8 00:41:22.267688 containerd[1484]: time="2025-05-08T00:41:22.267645599Z" level=info msg="TearDown network for sandbox \"fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617\" successfully" May 8 00:41:22.272094 containerd[1484]: time="2025-05-08T00:41:22.271317661Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:41:22.272094 containerd[1484]: time="2025-05-08T00:41:22.271407621Z" level=info msg="RemovePodSandbox \"fa2bb172dd330b6df51064d4646e3b96a91042c9670541607149f58654eb1617\" returns successfully" May 8 00:41:22.272094 containerd[1484]: time="2025-05-08T00:41:22.271674051Z" level=info msg="StopPodSandbox for \"8cae29a8a2767ed20610c206868847aaaf1eeec42ad83cc21a9adb569e1822b1\"" May 8 00:41:22.272094 containerd[1484]: time="2025-05-08T00:41:22.271763482Z" level=info msg="TearDown network for sandbox \"8cae29a8a2767ed20610c206868847aaaf1eeec42ad83cc21a9adb569e1822b1\" successfully" May 8 00:41:22.272094 containerd[1484]: time="2025-05-08T00:41:22.271780302Z" level=info msg="StopPodSandbox for \"8cae29a8a2767ed20610c206868847aaaf1eeec42ad83cc21a9adb569e1822b1\" returns successfully" May 8 00:41:22.272531 containerd[1484]: time="2025-05-08T00:41:22.272509842Z" level=info msg="RemovePodSandbox for \"8cae29a8a2767ed20610c206868847aaaf1eeec42ad83cc21a9adb569e1822b1\"" May 8 00:41:22.272598 containerd[1484]: time="2025-05-08T00:41:22.272579392Z" level=info msg="Forcibly stopping sandbox \"8cae29a8a2767ed20610c206868847aaaf1eeec42ad83cc21a9adb569e1822b1\"" May 8 00:41:22.272774 containerd[1484]: time="2025-05-08T00:41:22.272751602Z" level=info msg="TearDown network for sandbox \"8cae29a8a2767ed20610c206868847aaaf1eeec42ad83cc21a9adb569e1822b1\" successfully" May 8 00:41:22.277372 containerd[1484]: time="2025-05-08T00:41:22.277333354Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8cae29a8a2767ed20610c206868847aaaf1eeec42ad83cc21a9adb569e1822b1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:22.277490 containerd[1484]: time="2025-05-08T00:41:22.277473744Z" level=info msg="RemovePodSandbox \"8cae29a8a2767ed20610c206868847aaaf1eeec42ad83cc21a9adb569e1822b1\" returns successfully" May 8 00:41:22.277858 containerd[1484]: time="2025-05-08T00:41:22.277840975Z" level=info msg="StopPodSandbox for \"0eefbbe621b98481a6bdbdf5f700a1b6a475fd2b2417eb76c79d6968bd1144ac\"" May 8 00:41:22.278010 containerd[1484]: time="2025-05-08T00:41:22.277994434Z" level=info msg="TearDown network for sandbox \"0eefbbe621b98481a6bdbdf5f700a1b6a475fd2b2417eb76c79d6968bd1144ac\" successfully" May 8 00:41:22.278070 containerd[1484]: time="2025-05-08T00:41:22.278052864Z" level=info msg="StopPodSandbox for \"0eefbbe621b98481a6bdbdf5f700a1b6a475fd2b2417eb76c79d6968bd1144ac\" returns successfully" May 8 00:41:22.278439 containerd[1484]: time="2025-05-08T00:41:22.278409524Z" level=info msg="RemovePodSandbox for \"0eefbbe621b98481a6bdbdf5f700a1b6a475fd2b2417eb76c79d6968bd1144ac\"" May 8 00:41:22.280223 containerd[1484]: time="2025-05-08T00:41:22.278699495Z" level=info msg="Forcibly stopping sandbox \"0eefbbe621b98481a6bdbdf5f700a1b6a475fd2b2417eb76c79d6968bd1144ac\"" May 8 00:41:22.280223 containerd[1484]: time="2025-05-08T00:41:22.278813735Z" level=info msg="TearDown network for sandbox \"0eefbbe621b98481a6bdbdf5f700a1b6a475fd2b2417eb76c79d6968bd1144ac\" successfully" May 8 00:41:22.281416 containerd[1484]: time="2025-05-08T00:41:22.281384986Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0eefbbe621b98481a6bdbdf5f700a1b6a475fd2b2417eb76c79d6968bd1144ac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:41:22.281457 containerd[1484]: time="2025-05-08T00:41:22.281442536Z" level=info msg="RemovePodSandbox \"0eefbbe621b98481a6bdbdf5f700a1b6a475fd2b2417eb76c79d6968bd1144ac\" returns successfully" May 8 00:41:22.281799 containerd[1484]: time="2025-05-08T00:41:22.281774906Z" level=info msg="StopPodSandbox for \"9a513d4cdea6bdcabf13080377258c160ae6431d59e81315315ae40bcd292f7c\"" May 8 00:41:22.281906 containerd[1484]: time="2025-05-08T00:41:22.281860786Z" level=info msg="TearDown network for sandbox \"9a513d4cdea6bdcabf13080377258c160ae6431d59e81315315ae40bcd292f7c\" successfully" May 8 00:41:22.281906 containerd[1484]: time="2025-05-08T00:41:22.281902626Z" level=info msg="StopPodSandbox for \"9a513d4cdea6bdcabf13080377258c160ae6431d59e81315315ae40bcd292f7c\" returns successfully" May 8 00:41:22.282255 containerd[1484]: time="2025-05-08T00:41:22.282195246Z" level=info msg="RemovePodSandbox for \"9a513d4cdea6bdcabf13080377258c160ae6431d59e81315315ae40bcd292f7c\"" May 8 00:41:22.282299 containerd[1484]: time="2025-05-08T00:41:22.282255966Z" level=info msg="Forcibly stopping sandbox \"9a513d4cdea6bdcabf13080377258c160ae6431d59e81315315ae40bcd292f7c\"" May 8 00:41:22.282346 containerd[1484]: time="2025-05-08T00:41:22.282325776Z" level=info msg="TearDown network for sandbox \"9a513d4cdea6bdcabf13080377258c160ae6431d59e81315315ae40bcd292f7c\" successfully" May 8 00:41:22.284885 containerd[1484]: time="2025-05-08T00:41:22.284850628Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a513d4cdea6bdcabf13080377258c160ae6431d59e81315315ae40bcd292f7c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:22.284949 containerd[1484]: time="2025-05-08T00:41:22.284903907Z" level=info msg="RemovePodSandbox \"9a513d4cdea6bdcabf13080377258c160ae6431d59e81315315ae40bcd292f7c\" returns successfully" May 8 00:41:22.285203 containerd[1484]: time="2025-05-08T00:41:22.285158307Z" level=info msg="StopPodSandbox for \"e563b99f9ec18a1cafc69f7bbfea2fceaee1fc6dae33dc5d3f738178148ec414\"" May 8 00:41:22.285360 containerd[1484]: time="2025-05-08T00:41:22.285282208Z" level=info msg="TearDown network for sandbox \"e563b99f9ec18a1cafc69f7bbfea2fceaee1fc6dae33dc5d3f738178148ec414\" successfully" May 8 00:41:22.285360 containerd[1484]: time="2025-05-08T00:41:22.285300488Z" level=info msg="StopPodSandbox for \"e563b99f9ec18a1cafc69f7bbfea2fceaee1fc6dae33dc5d3f738178148ec414\" returns successfully" May 8 00:41:22.286049 containerd[1484]: time="2025-05-08T00:41:22.285657288Z" level=info msg="RemovePodSandbox for \"e563b99f9ec18a1cafc69f7bbfea2fceaee1fc6dae33dc5d3f738178148ec414\"" May 8 00:41:22.286049 containerd[1484]: time="2025-05-08T00:41:22.285675168Z" level=info msg="Forcibly stopping sandbox \"e563b99f9ec18a1cafc69f7bbfea2fceaee1fc6dae33dc5d3f738178148ec414\"" May 8 00:41:22.286049 containerd[1484]: time="2025-05-08T00:41:22.285734908Z" level=info msg="TearDown network for sandbox \"e563b99f9ec18a1cafc69f7bbfea2fceaee1fc6dae33dc5d3f738178148ec414\" successfully" May 8 00:41:22.288367 containerd[1484]: time="2025-05-08T00:41:22.288345839Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e563b99f9ec18a1cafc69f7bbfea2fceaee1fc6dae33dc5d3f738178148ec414\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:41:22.288461 containerd[1484]: time="2025-05-08T00:41:22.288446339Z" level=info msg="RemovePodSandbox \"e563b99f9ec18a1cafc69f7bbfea2fceaee1fc6dae33dc5d3f738178148ec414\" returns successfully" May 8 00:41:23.487291 kubelet[2686]: I0508 00:41:23.486648 2686 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:41:32.136743 kubelet[2686]: E0508 00:41:32.136152 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:41:36.136328 kubelet[2686]: E0508 00:41:36.135746 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:41:39.266912 systemd[1]: run-containerd-runc-k8s.io-7635c9802b82a333acf7e1b30f9dc3597259e7b405905bd4ca8c670c562f4b42-runc.QM7DHo.mount: Deactivated successfully. 
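The containerd entries above are kubelet's routine clean-up of old pod sandboxes over the CRI: StopPodSandbox, a network TearDown, then RemovePodSandbox, with a forced stop when the sandbox has already disappeared (the "Failed to get podSandbox status ... not found" warnings are benign in that case). For reference, the same operations can be driven by hand with crictl; this is only an illustrative sketch, assuming crictl is installed on the node and configured against this containerd socket, and it reuses one sandbox ID from the log above purely as an example:

    # list pod sandboxes known to containerd, including exited ones
    crictl pods
    # stop, then remove, a single sandbox by ID (the CRI calls logged above)
    crictl stopp e33b4a4dc6d80c3a1fde4dc5c0d78c0176524e85e8d04cad28e813b7e768ec5b
    crictl rmp e33b4a4dc6d80c3a1fde4dc5c0d78c0176524e85e8d04cad28e813b7e768ec5b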
May 8 00:41:46.137675 kubelet[2686]: E0508 00:41:46.136226 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:41:56.135875 kubelet[2686]: E0508 00:41:56.135527 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:41:58.573682 update_engine[1462]: I20250508 00:41:58.573618 1462 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 8 00:41:58.573682 update_engine[1462]: I20250508 00:41:58.573668 1462 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 8 00:41:58.574097 update_engine[1462]: I20250508 00:41:58.573910 1462 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 8 00:41:58.574940 update_engine[1462]: I20250508 00:41:58.574913 1462 omaha_request_params.cc:62] Current group set to beta May 8 00:41:58.576664 update_engine[1462]: I20250508 00:41:58.575579 1462 update_attempter.cc:499] Already updated boot flags. Skipping. May 8 00:41:58.576664 update_engine[1462]: I20250508 00:41:58.575598 1462 update_attempter.cc:643] Scheduling an action processor start. May 8 00:41:58.576664 update_engine[1462]: I20250508 00:41:58.575615 1462 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 8 00:41:58.576664 update_engine[1462]: I20250508 00:41:58.575644 1462 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 8 00:41:58.576664 update_engine[1462]: I20250508 00:41:58.575706 1462 omaha_request_action.cc:271] Posting an Omaha request to disabled May 8 00:41:58.576664 update_engine[1462]: I20250508 00:41:58.575714 1462 omaha_request_action.cc:272] Request: May 8 00:41:58.576664 update_engine[1462]: [multi-line request body not preserved in this capture] May 8 00:41:58.576664 update_engine[1462]: I20250508 00:41:58.575722 1462 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 8 00:41:58.577265 locksmithd[1503]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 8 00:41:58.578606 update_engine[1462]: I20250508 00:41:58.578577 1462 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 8 00:41:58.578952 update_engine[1462]: I20250508 00:41:58.578922 1462 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 8 00:41:58.590695 update_engine[1462]: E20250508 00:41:58.590662 1462 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 8 00:41:58.590746 update_engine[1462]: I20250508 00:41:58.590729 1462 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 8 00:42:08.527605 update_engine[1462]: I20250508 00:42:08.526710 1462 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 8 00:42:08.528085 update_engine[1462]: I20250508 00:42:08.527750 1462 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 8 00:42:08.528085 update_engine[1462]: I20250508 00:42:08.528012 1462 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 8 00:42:08.529068 update_engine[1462]: E20250508 00:42:08.529027 1462 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 8 00:42:08.529127 update_engine[1462]: I20250508 00:42:08.529078 1462 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 8 00:42:16.137032 kubelet[2686]: E0508 00:42:16.136330 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:42:17.135720 kubelet[2686]: E0508 00:42:17.135580 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:42:17.135720 kubelet[2686]: E0508 00:42:17.135663 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:42:18.524282 update_engine[1462]: I20250508 00:42:18.524191 1462 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 8 00:42:18.524686 update_engine[1462]: I20250508 00:42:18.524544 1462 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 8 00:42:18.524790 update_engine[1462]: I20250508 00:42:18.524755 1462 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 8 00:42:18.525564 update_engine[1462]: E20250508 00:42:18.525534 1462 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 8 00:42:18.525605 update_engine[1462]: I20250508 00:42:18.525585 1462 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 8 00:42:23.008703 systemd[1]: run-containerd-runc-k8s.io-8dba6d46ddd6ec695817da835a2ca7c0b20ff38191ff394ad06b220f70ee3015-runc.UT5n7N.mount: Deactivated successfully. May 8 00:42:26.136687 kubelet[2686]: E0508 00:42:26.136343 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:42:27.368284 systemd[1]: Started sshd@7-172.237.145.97:22-139.178.89.65:49018.service - OpenSSH per-connection server daemon (139.178.89.65:49018). May 8 00:42:27.708854 sshd[5695]: Accepted publickey for core from 139.178.89.65 port 49018 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA May 8 00:42:27.711442 sshd-session[5695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:42:27.717616 systemd-logind[1461]: New session 8 of user core. May 8 00:42:27.723349 systemd[1]: Started session-8.scope - Session 8 of User core. May 8 00:42:28.036535 sshd[5697]: Connection closed by 139.178.89.65 port 49018 May 8 00:42:28.038318 sshd-session[5695]: pam_unix(sshd:session): session closed for user core May 8 00:42:28.043933 systemd[1]: sshd@7-172.237.145.97:22-139.178.89.65:49018.service: Deactivated successfully. May 8 00:42:28.044421 systemd-logind[1461]: Session 8 logged out. Waiting for processes to exit. May 8 00:42:28.047700 systemd[1]: session-8.scope: Deactivated successfully. May 8 00:42:28.049521 systemd-logind[1461]: Removed session 8. 
May 8 00:42:28.517447 update_engine[1462]: I20250508 00:42:28.517340 1462 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 8 00:42:28.517916 update_engine[1462]: I20250508 00:42:28.517858 1462 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 8 00:42:28.518251 update_engine[1462]: I20250508 00:42:28.518177 1462 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 8 00:42:28.519234 update_engine[1462]: E20250508 00:42:28.519177 1462 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 8 00:42:28.519315 update_engine[1462]: I20250508 00:42:28.519284 1462 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 8 00:42:28.519315 update_engine[1462]: I20250508 00:42:28.519302 1462 omaha_request_action.cc:617] Omaha request response: May 8 00:42:28.519444 update_engine[1462]: E20250508 00:42:28.519414 1462 omaha_request_action.cc:636] Omaha request network transfer failed. May 8 00:42:28.519474 update_engine[1462]: I20250508 00:42:28.519446 1462 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 8 00:42:28.519474 update_engine[1462]: I20250508 00:42:28.519455 1462 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 8 00:42:28.519474 update_engine[1462]: I20250508 00:42:28.519462 1462 update_attempter.cc:306] Processing Done. May 8 00:42:28.519606 update_engine[1462]: E20250508 00:42:28.519484 1462 update_attempter.cc:619] Update failed. May 8 00:42:28.519606 update_engine[1462]: I20250508 00:42:28.519492 1462 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 8 00:42:28.519606 update_engine[1462]: I20250508 00:42:28.519498 1462 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 8 00:42:28.519606 update_engine[1462]: I20250508 00:42:28.519505 1462 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. May 8 00:42:28.519606 update_engine[1462]: I20250508 00:42:28.519582 1462 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 8 00:42:28.519710 update_engine[1462]: I20250508 00:42:28.519613 1462 omaha_request_action.cc:271] Posting an Omaha request to disabled May 8 00:42:28.519710 update_engine[1462]: I20250508 00:42:28.519620 1462 omaha_request_action.cc:272] Request: May 8 00:42:28.519710 update_engine[1462]: [multi-line request body not preserved in this capture] May 8 00:42:28.519710 update_engine[1462]: I20250508 00:42:28.519627 1462 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 8 00:42:28.519871 update_engine[1462]: I20250508 00:42:28.519810 1462 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 8 00:42:28.520090 update_engine[1462]: I20250508 00:42:28.520040 1462 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 8 00:42:28.520617 locksmithd[1503]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 8 00:42:28.520865 update_engine[1462]: E20250508 00:42:28.520848 1462 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 8 00:42:28.520921 update_engine[1462]: I20250508 00:42:28.520898 1462 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 8 00:42:28.520921 update_engine[1462]: I20250508 00:42:28.520915 1462 omaha_request_action.cc:617] Omaha request response: May 8 00:42:28.520973 update_engine[1462]: I20250508 00:42:28.520922 1462 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 8 00:42:28.520973 update_engine[1462]: I20250508 00:42:28.520931 1462 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 8 00:42:28.520973 update_engine[1462]: I20250508 00:42:28.520936 1462 update_attempter.cc:306] Processing Done. May 8 00:42:28.520973 update_engine[1462]: I20250508 00:42:28.520942 1462 update_attempter.cc:310] Error event sent. May 8 00:42:28.520973 update_engine[1462]: I20250508 00:42:28.520952 1462 update_check_scheduler.cc:74] Next update check in 46m2s May 8 00:42:28.521250 locksmithd[1503]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 8 00:42:33.105471 systemd[1]: Started sshd@8-172.237.145.97:22-139.178.89.65:49034.service - OpenSSH per-connection server daemon (139.178.89.65:49034). May 8 00:42:33.439247 sshd[5710]: Accepted publickey for core from 139.178.89.65 port 49034 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA May 8 00:42:33.441253 sshd-session[5710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:42:33.446264 systemd-logind[1461]: New session 9 of user core. May 8 00:42:33.450333 systemd[1]: Started session-9.scope - Session 9 of User core. May 8 00:42:33.756075 sshd[5712]: Connection closed by 139.178.89.65 port 49034 May 8 00:42:33.757805 sshd-session[5710]: pam_unix(sshd:session): session closed for user core May 8 00:42:33.762979 systemd-logind[1461]: Session 9 logged out. Waiting for processes to exit. May 8 00:42:33.763790 systemd[1]: sshd@8-172.237.145.97:22-139.178.89.65:49034.service: Deactivated successfully. May 8 00:42:33.766475 systemd[1]: session-9.scope: Deactivated successfully. May 8 00:42:33.767987 systemd-logind[1461]: Removed session 9. May 8 00:42:38.826550 systemd[1]: Started sshd@9-172.237.145.97:22-139.178.89.65:50858.service - OpenSSH per-connection server daemon (139.178.89.65:50858). May 8 00:42:39.169037 sshd[5728]: Accepted publickey for core from 139.178.89.65 port 50858 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA May 8 00:42:39.170684 sshd-session[5728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:42:39.175947 systemd-logind[1461]: New session 10 of user core. May 8 00:42:39.180359 systemd[1]: Started session-10.scope - Session 10 of User core. May 8 00:42:39.269484 systemd[1]: run-containerd-runc-k8s.io-7635c9802b82a333acf7e1b30f9dc3597259e7b405905bd4ca8c670c562f4b42-runc.vEk6oR.mount: Deactivated successfully. 
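The update_engine sequence above — "Current group set to beta", repeated "Posting an Omaha request to disabled" attempts that fail with "Could not resolve host: disabled", and a final error code 37 (kActionCodeOmahaErrorInHTTPResponse) before "Next update check in 46m2s" — is what Flatcar's update client logs when its update server is set to the literal, non-resolvable name "disabled". A minimal sketch of a configuration consistent with these messages, assuming the usual /etc/flatcar/update.conf mechanism (illustrative only, not taken from this host):

    # /etc/flatcar/update.conf (assumed path) -- illustrative sketch
    GROUP=beta
    # a non-resolvable SERVER value is a common way to switch update checks off;
    # update_engine still wakes periodically, fails DNS resolution, and reschedules
    SERVER=disabled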
May 8 00:42:39.486673 sshd[5730]: Connection closed by 139.178.89.65 port 50858 May 8 00:42:39.488151 sshd-session[5728]: pam_unix(sshd:session): session closed for user core May 8 00:42:39.493021 systemd[1]: sshd@9-172.237.145.97:22-139.178.89.65:50858.service: Deactivated successfully. May 8 00:42:39.495550 systemd[1]: session-10.scope: Deactivated successfully. May 8 00:42:39.496406 systemd-logind[1461]: Session 10 logged out. Waiting for processes to exit. May 8 00:42:39.497661 systemd-logind[1461]: Removed session 10. May 8 00:42:39.548430 systemd[1]: Started sshd@10-172.237.145.97:22-139.178.89.65:50872.service - OpenSSH per-connection server daemon (139.178.89.65:50872). May 8 00:42:39.874416 sshd[5765]: Accepted publickey for core from 139.178.89.65 port 50872 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA May 8 00:42:39.874916 sshd-session[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:42:39.880066 systemd-logind[1461]: New session 11 of user core. May 8 00:42:39.883343 systemd[1]: Started session-11.scope - Session 11 of User core. May 8 00:42:40.217802 sshd[5771]: Connection closed by 139.178.89.65 port 50872 May 8 00:42:40.218446 sshd-session[5765]: pam_unix(sshd:session): session closed for user core May 8 00:42:40.225954 systemd[1]: sshd@10-172.237.145.97:22-139.178.89.65:50872.service: Deactivated successfully. May 8 00:42:40.227933 systemd[1]: session-11.scope: Deactivated successfully. May 8 00:42:40.229522 systemd-logind[1461]: Session 11 logged out. Waiting for processes to exit. May 8 00:42:40.230756 systemd-logind[1461]: Removed session 11. May 8 00:42:40.285412 systemd[1]: Started sshd@11-172.237.145.97:22-139.178.89.65:50878.service - OpenSSH per-connection server daemon (139.178.89.65:50878). May 8 00:42:40.619324 sshd[5781]: Accepted publickey for core from 139.178.89.65 port 50878 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA May 8 00:42:40.620552 sshd-session[5781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:42:40.629570 systemd-logind[1461]: New session 12 of user core. May 8 00:42:40.633403 systemd[1]: Started session-12.scope - Session 12 of User core. May 8 00:42:40.946746 sshd[5783]: Connection closed by 139.178.89.65 port 50878 May 8 00:42:40.947405 sshd-session[5781]: pam_unix(sshd:session): session closed for user core May 8 00:42:40.951348 systemd-logind[1461]: Session 12 logged out. Waiting for processes to exit. May 8 00:42:40.953610 systemd[1]: sshd@11-172.237.145.97:22-139.178.89.65:50878.service: Deactivated successfully. May 8 00:42:40.956559 systemd[1]: session-12.scope: Deactivated successfully. May 8 00:42:40.958619 systemd-logind[1461]: Removed session 12. May 8 00:42:46.012441 systemd[1]: Started sshd@12-172.237.145.97:22-139.178.89.65:50892.service - OpenSSH per-connection server daemon (139.178.89.65:50892). May 8 00:42:46.136170 kubelet[2686]: E0508 00:42:46.135773 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:42:46.345562 sshd[5795]: Accepted publickey for core from 139.178.89.65 port 50892 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA May 8 00:42:46.347028 sshd-session[5795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:42:46.352499 systemd-logind[1461]: New session 13 of user core. 
May 8 00:42:46.358334 systemd[1]: Started session-13.scope - Session 13 of User core. May 8 00:42:46.667772 sshd[5797]: Connection closed by 139.178.89.65 port 50892 May 8 00:42:46.668389 sshd-session[5795]: pam_unix(sshd:session): session closed for user core May 8 00:42:46.671968 systemd[1]: sshd@12-172.237.145.97:22-139.178.89.65:50892.service: Deactivated successfully. May 8 00:42:46.674437 systemd[1]: session-13.scope: Deactivated successfully. May 8 00:42:46.676579 systemd-logind[1461]: Session 13 logged out. Waiting for processes to exit. May 8 00:42:46.678086 systemd-logind[1461]: Removed session 13. May 8 00:42:46.736426 systemd[1]: Started sshd@13-172.237.145.97:22-139.178.89.65:51984.service - OpenSSH per-connection server daemon (139.178.89.65:51984). May 8 00:42:47.078414 sshd[5809]: Accepted publickey for core from 139.178.89.65 port 51984 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA May 8 00:42:47.080339 sshd-session[5809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:42:47.085449 systemd-logind[1461]: New session 14 of user core. May 8 00:42:47.089340 systemd[1]: Started session-14.scope - Session 14 of User core. May 8 00:42:47.516645 sshd[5811]: Connection closed by 139.178.89.65 port 51984 May 8 00:42:47.517602 sshd-session[5809]: pam_unix(sshd:session): session closed for user core May 8 00:42:47.521983 systemd[1]: sshd@13-172.237.145.97:22-139.178.89.65:51984.service: Deactivated successfully. May 8 00:42:47.524492 systemd[1]: session-14.scope: Deactivated successfully. May 8 00:42:47.525352 systemd-logind[1461]: Session 14 logged out. Waiting for processes to exit. May 8 00:42:47.526765 systemd-logind[1461]: Removed session 14. May 8 00:42:47.580411 systemd[1]: Started sshd@14-172.237.145.97:22-139.178.89.65:51994.service - OpenSSH per-connection server daemon (139.178.89.65:51994). May 8 00:42:47.908642 sshd[5821]: Accepted publickey for core from 139.178.89.65 port 51994 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA May 8 00:42:47.910374 sshd-session[5821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:42:47.915105 systemd-logind[1461]: New session 15 of user core. May 8 00:42:47.920357 systemd[1]: Started session-15.scope - Session 15 of User core. May 8 00:42:48.137699 kubelet[2686]: E0508 00:42:48.137670 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:42:49.136075 kubelet[2686]: E0508 00:42:49.135997 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:42:49.561075 sshd[5823]: Connection closed by 139.178.89.65 port 51994 May 8 00:42:49.561521 sshd-session[5821]: pam_unix(sshd:session): session closed for user core May 8 00:42:49.566228 systemd[1]: sshd@14-172.237.145.97:22-139.178.89.65:51994.service: Deactivated successfully. May 8 00:42:49.569632 systemd[1]: session-15.scope: Deactivated successfully. May 8 00:42:49.569912 systemd[1]: session-15.scope: Consumed 533ms CPU time, 73.1M memory peak. May 8 00:42:49.570536 systemd-logind[1461]: Session 15 logged out. Waiting for processes to exit. May 8 00:42:49.571691 systemd-logind[1461]: Removed session 15. 
May 8 00:42:49.624424 systemd[1]: Started sshd@15-172.237.145.97:22-139.178.89.65:52010.service - OpenSSH per-connection server daemon (139.178.89.65:52010). May 8 00:42:49.951363 sshd[5840]: Accepted publickey for core from 139.178.89.65 port 52010 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA May 8 00:42:49.955767 sshd-session[5840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:42:49.960954 systemd-logind[1461]: New session 16 of user core. May 8 00:42:49.968350 systemd[1]: Started session-16.scope - Session 16 of User core. May 8 00:42:50.351020 sshd[5842]: Connection closed by 139.178.89.65 port 52010 May 8 00:42:50.351836 sshd-session[5840]: pam_unix(sshd:session): session closed for user core May 8 00:42:50.356478 systemd[1]: sshd@15-172.237.145.97:22-139.178.89.65:52010.service: Deactivated successfully. May 8 00:42:50.359158 systemd[1]: session-16.scope: Deactivated successfully. May 8 00:42:50.359950 systemd-logind[1461]: Session 16 logged out. Waiting for processes to exit. May 8 00:42:50.360934 systemd-logind[1461]: Removed session 16. May 8 00:42:50.410916 systemd[1]: Started sshd@16-172.237.145.97:22-139.178.89.65:52024.service - OpenSSH per-connection server daemon (139.178.89.65:52024). May 8 00:42:50.735047 sshd[5852]: Accepted publickey for core from 139.178.89.65 port 52024 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA May 8 00:42:50.736606 sshd-session[5852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:42:50.741112 systemd-logind[1461]: New session 17 of user core. May 8 00:42:50.750322 systemd[1]: Started session-17.scope - Session 17 of User core. May 8 00:42:51.036784 sshd[5854]: Connection closed by 139.178.89.65 port 52024 May 8 00:42:51.037667 sshd-session[5852]: pam_unix(sshd:session): session closed for user core May 8 00:42:51.042090 systemd[1]: sshd@16-172.237.145.97:22-139.178.89.65:52024.service: Deactivated successfully. May 8 00:42:51.044973 systemd[1]: session-17.scope: Deactivated successfully. May 8 00:42:51.045741 systemd-logind[1461]: Session 17 logged out. Waiting for processes to exit. May 8 00:42:51.046734 systemd-logind[1461]: Removed session 17. May 8 00:42:56.107444 systemd[1]: Started sshd@17-172.237.145.97:22-139.178.89.65:52038.service - OpenSSH per-connection server daemon (139.178.89.65:52038). May 8 00:42:56.438973 sshd[5907]: Accepted publickey for core from 139.178.89.65 port 52038 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA May 8 00:42:56.440897 sshd-session[5907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:42:56.445952 systemd-logind[1461]: New session 18 of user core. May 8 00:42:56.449328 systemd[1]: Started session-18.scope - Session 18 of User core. May 8 00:42:56.735336 sshd[5909]: Connection closed by 139.178.89.65 port 52038 May 8 00:42:56.735794 sshd-session[5907]: pam_unix(sshd:session): session closed for user core May 8 00:42:56.741313 systemd[1]: sshd@17-172.237.145.97:22-139.178.89.65:52038.service: Deactivated successfully. May 8 00:42:56.745966 systemd[1]: session-18.scope: Deactivated successfully. May 8 00:42:56.747092 systemd-logind[1461]: Session 18 logged out. Waiting for processes to exit. May 8 00:42:56.749051 systemd-logind[1461]: Removed session 18. 
May 8 00:43:01.136374 kubelet[2686]: E0508 00:43:01.136003 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:43:01.804403 systemd[1]: Started sshd@18-172.237.145.97:22-139.178.89.65:58414.service - OpenSSH per-connection server daemon (139.178.89.65:58414). May 8 00:43:02.137892 sshd[5921]: Accepted publickey for core from 139.178.89.65 port 58414 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA May 8 00:43:02.139638 sshd-session[5921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:43:02.143781 systemd-logind[1461]: New session 19 of user core. May 8 00:43:02.149335 systemd[1]: Started session-19.scope - Session 19 of User core. May 8 00:43:02.442887 sshd[5923]: Connection closed by 139.178.89.65 port 58414 May 8 00:43:02.443448 sshd-session[5921]: pam_unix(sshd:session): session closed for user core May 8 00:43:02.448183 systemd[1]: sshd@18-172.237.145.97:22-139.178.89.65:58414.service: Deactivated successfully. May 8 00:43:02.450615 systemd[1]: session-19.scope: Deactivated successfully. May 8 00:43:02.451393 systemd-logind[1461]: Session 19 logged out. Waiting for processes to exit. May 8 00:43:02.452575 systemd-logind[1461]: Removed session 19. May 8 00:43:07.510453 systemd[1]: Started sshd@19-172.237.145.97:22-139.178.89.65:56150.service - OpenSSH per-connection server daemon (139.178.89.65:56150). May 8 00:43:07.834484 sshd[5937]: Accepted publickey for core from 139.178.89.65 port 56150 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA May 8 00:43:07.836356 sshd-session[5937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:43:07.841113 systemd-logind[1461]: New session 20 of user core. May 8 00:43:07.846348 systemd[1]: Started session-20.scope - Session 20 of User core. May 8 00:43:08.131502 sshd[5939]: Connection closed by 139.178.89.65 port 56150 May 8 00:43:08.132082 sshd-session[5937]: pam_unix(sshd:session): session closed for user core May 8 00:43:08.136004 systemd-logind[1461]: Session 20 logged out. Waiting for processes to exit. May 8 00:43:08.138616 systemd[1]: sshd@19-172.237.145.97:22-139.178.89.65:56150.service: Deactivated successfully. May 8 00:43:08.140620 systemd[1]: session-20.scope: Deactivated successfully. May 8 00:43:08.141974 systemd-logind[1461]: Removed session 20. May 8 00:43:13.200508 systemd[1]: Started sshd@20-172.237.145.97:22-139.178.89.65:56154.service - OpenSSH per-connection server daemon (139.178.89.65:56154). May 8 00:43:13.538008 sshd[5973]: Accepted publickey for core from 139.178.89.65 port 56154 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA May 8 00:43:13.539456 sshd-session[5973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:43:13.544533 systemd-logind[1461]: New session 21 of user core. May 8 00:43:13.550346 systemd[1]: Started session-21.scope - Session 21 of User core. May 8 00:43:13.843553 sshd[5975]: Connection closed by 139.178.89.65 port 56154 May 8 00:43:13.844476 sshd-session[5973]: pam_unix(sshd:session): session closed for user core May 8 00:43:13.847707 systemd[1]: sshd@20-172.237.145.97:22-139.178.89.65:56154.service: Deactivated successfully. May 8 00:43:13.850093 systemd[1]: session-21.scope: Deactivated successfully. May 8 00:43:13.852394 systemd-logind[1461]: Session 21 logged out. Waiting for processes to exit. 
May 8 00:43:13.853944 systemd-logind[1461]: Removed session 21. May 8 00:43:18.912423 systemd[1]: Started sshd@21-172.237.145.97:22-139.178.89.65:35456.service - OpenSSH per-connection server daemon (139.178.89.65:35456). May 8 00:43:19.248550 sshd[5987]: Accepted publickey for core from 139.178.89.65 port 35456 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA May 8 00:43:19.250015 sshd-session[5987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:43:19.255094 systemd-logind[1461]: New session 22 of user core. May 8 00:43:19.264337 systemd[1]: Started session-22.scope - Session 22 of User core. May 8 00:43:19.557768 sshd[5989]: Connection closed by 139.178.89.65 port 35456 May 8 00:43:19.558633 sshd-session[5987]: pam_unix(sshd:session): session closed for user core May 8 00:43:19.561938 systemd[1]: sshd@21-172.237.145.97:22-139.178.89.65:35456.service: Deactivated successfully. May 8 00:43:19.564632 systemd[1]: session-22.scope: Deactivated successfully. May 8 00:43:19.566374 systemd-logind[1461]: Session 22 logged out. Waiting for processes to exit. May 8 00:43:19.568180 systemd-logind[1461]: Removed session 22. May 8 00:43:21.136164 kubelet[2686]: E0508 00:43:21.135987 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:43:21.136164 kubelet[2686]: E0508 00:43:21.136057 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:43:23.011117 systemd[1]: run-containerd-runc-k8s.io-8dba6d46ddd6ec695817da835a2ca7c0b20ff38191ff394ad06b220f70ee3015-runc.jVD5Rr.mount: Deactivated successfully. May 8 00:43:24.623436 systemd[1]: Started sshd@22-172.237.145.97:22-139.178.89.65:35472.service - OpenSSH per-connection server daemon (139.178.89.65:35472). May 8 00:43:24.946284 sshd[6022]: Accepted publickey for core from 139.178.89.65 port 35472 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA May 8 00:43:24.947689 sshd-session[6022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:43:24.955365 systemd-logind[1461]: New session 23 of user core. May 8 00:43:24.958375 systemd[1]: Started session-23.scope - Session 23 of User core. May 8 00:43:25.245451 sshd[6024]: Connection closed by 139.178.89.65 port 35472 May 8 00:43:25.246174 sshd-session[6022]: pam_unix(sshd:session): session closed for user core May 8 00:43:25.250057 systemd-logind[1461]: Session 23 logged out. Waiting for processes to exit. May 8 00:43:25.251010 systemd[1]: sshd@22-172.237.145.97:22-139.178.89.65:35472.service: Deactivated successfully. May 8 00:43:25.253491 systemd[1]: session-23.scope: Deactivated successfully. May 8 00:43:25.254623 systemd-logind[1461]: Removed session 23. May 8 00:43:28.136197 kubelet[2686]: E0508 00:43:28.135471 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
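The recurring kubelet "Nameserver limits exceeded" errors throughout this log mean the node's resolv.conf lists more nameservers than the resolver's limit of three, so kubelet drops the extras and applies only the first three — the "applied nameserver line" shown in each message. A hypothetical /etc/resolv.conf that would produce exactly this warning (the first three addresses are the ones the log reports as applied; the fourth is an invented example of the kind of surplus entry that gets omitted):

    nameserver 172.232.0.20
    nameserver 172.232.0.15
    nameserver 172.232.0.18
    # any entry beyond the first three is ignored and triggers the warning above
    nameserver 192.0.2.53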