May 8 00:39:33.901551 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed May 7 22:19:27 -00 2025
May 8 00:39:33.901571 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979
May 8 00:39:33.901580 kernel: BIOS-provided physical RAM map:
May 8 00:39:33.901586 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
May 8 00:39:33.901592 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
May 8 00:39:33.901600 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 8 00:39:33.901607 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
May 8 00:39:33.901613 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
May 8 00:39:33.901619 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 8 00:39:33.901624 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 8 00:39:33.901630 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 8 00:39:33.901636 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 8 00:39:33.901642 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
May 8 00:39:33.901648 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 8 00:39:33.901657 kernel: NX (Execute Disable) protection: active
May 8 00:39:33.901663 kernel: APIC: Static calls initialized
May 8 00:39:33.901669 kernel: SMBIOS 2.8 present.
May 8 00:39:33.901675 kernel: DMI: Linode Compute Instance, BIOS Not Specified
May 8 00:39:33.901681 kernel: Hypervisor detected: KVM
May 8 00:39:33.901689 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 8 00:39:33.901695 kernel: kvm-clock: using sched offset of 4639025414 cycles
May 8 00:39:33.901702 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 8 00:39:33.901708 kernel: tsc: Detected 1999.999 MHz processor
May 8 00:39:33.901715 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 8 00:39:33.901722 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 8 00:39:33.901728 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
May 8 00:39:33.901735 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 8 00:39:33.901741 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 8 00:39:33.901750 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
May 8 00:39:33.901793 kernel: Using GB pages for direct mapping
May 8 00:39:33.901799 kernel: ACPI: Early table checksum verification disabled
May 8 00:39:33.901806 kernel: ACPI: RSDP 0x00000000000F51B0 000014 (v00 BOCHS )
May 8 00:39:33.901813 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:39:33.901819 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:39:33.901826 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:39:33.901832 kernel: ACPI: FACS 0x000000007FFE0000 000040
May 8 00:39:33.901839 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:39:33.901848 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:39:33.901854 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:39:33.901861 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:39:33.901870 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
May 8 00:39:33.901877 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
May 8 00:39:33.901884 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
May 8 00:39:33.901890 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
May 8 00:39:33.901899 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
May 8 00:39:33.901906 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
May 8 00:39:33.901912 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
May 8 00:39:33.901919 kernel: No NUMA configuration found
May 8 00:39:33.901925 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
May 8 00:39:33.901932 kernel: NODE_DATA(0) allocated [mem 0x17fffa000-0x17fffffff]
May 8 00:39:33.901938 kernel: Zone ranges:
May 8 00:39:33.901945 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 8 00:39:33.901954 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 8 00:39:33.901960 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
May 8 00:39:33.901967 kernel: Movable zone start for each node
May 8 00:39:33.901973 kernel: Early memory node ranges
May 8 00:39:33.901980 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 8 00:39:33.901986 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
May 8 00:39:33.901993 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
May 8 00:39:33.901999 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
May 8 00:39:33.902006 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 8 00:39:33.902015 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 8 00:39:33.902021 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
May 8 00:39:33.902028 kernel: ACPI: PM-Timer IO Port: 0x608
May 8 00:39:33.902035 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 8 00:39:33.902041 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 8 00:39:33.902048 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 8 00:39:33.902054 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 8 00:39:33.902061 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 8 00:39:33.902067 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 8 00:39:33.902076 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 8 00:39:33.902083 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 8 00:39:33.902089 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 8 00:39:33.902096 kernel: TSC deadline timer available
May 8 00:39:33.902102 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 8 00:39:33.902109 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 8 00:39:33.902115 kernel: kvm-guest: KVM setup pv remote TLB flush
May 8 00:39:33.902122 kernel: kvm-guest: setup PV sched yield
May 8 00:39:33.902128 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 8 00:39:33.902137 kernel: Booting paravirtualized kernel on KVM
May 8 00:39:33.902144 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 8 00:39:33.902151 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 8 00:39:33.902157 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
May 8 00:39:33.902164 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
May 8 00:39:33.902170 kernel: pcpu-alloc: [0] 0 1
May 8 00:39:33.902177 kernel: kvm-guest: PV spinlocks enabled
May 8 00:39:33.902184 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 8 00:39:33.902191 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979
May 8 00:39:33.902201 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 8 00:39:33.902207 kernel: random: crng init done
May 8 00:39:33.902214 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 8 00:39:33.902220 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 00:39:33.902227 kernel: Fallback order for Node 0: 0
May 8 00:39:33.902233 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
May 8 00:39:33.902240 kernel: Policy zone: Normal
May 8 00:39:33.902246 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 8 00:39:33.902255 kernel: software IO TLB: area num 2.
May 8 00:39:33.902262 kernel: Memory: 3964164K/4193772K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 229348K reserved, 0K cma-reserved)
May 8 00:39:33.902269 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 8 00:39:33.902275 kernel: ftrace: allocating 37918 entries in 149 pages
May 8 00:39:33.902282 kernel: ftrace: allocated 149 pages with 4 groups
May 8 00:39:33.902288 kernel: Dynamic Preempt: voluntary
May 8 00:39:33.902295 kernel: rcu: Preemptible hierarchical RCU implementation.
May 8 00:39:33.902302 kernel: rcu: RCU event tracing is enabled.
May 8 00:39:33.902309 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 8 00:39:33.902318 kernel: Trampoline variant of Tasks RCU enabled.
May 8 00:39:33.902325 kernel: Rude variant of Tasks RCU enabled.
May 8 00:39:33.902331 kernel: Tracing variant of Tasks RCU enabled.
May 8 00:39:33.902338 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 8 00:39:33.902344 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 8 00:39:33.902351 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 8 00:39:33.902358 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 8 00:39:33.902364 kernel: Console: colour VGA+ 80x25
May 8 00:39:33.902371 kernel: printk: console [tty0] enabled
May 8 00:39:33.902377 kernel: printk: console [ttyS0] enabled
May 8 00:39:33.902386 kernel: ACPI: Core revision 20230628
May 8 00:39:33.902393 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 8 00:39:33.902399 kernel: APIC: Switch to symmetric I/O mode setup
May 8 00:39:33.902413 kernel: x2apic enabled
May 8 00:39:33.902422 kernel: APIC: Switched APIC routing to: physical x2apic
May 8 00:39:33.902429 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 8 00:39:33.902436 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 8 00:39:33.902443 kernel: kvm-guest: setup PV IPIs
May 8 00:39:33.902450 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 8 00:39:33.902457 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 8 00:39:33.902464 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999)
May 8 00:39:33.902473 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 8 00:39:33.902480 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 8 00:39:33.902487 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 8 00:39:33.902494 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 8 00:39:33.902501 kernel: Spectre V2 : Mitigation: Retpolines
May 8 00:39:33.902510 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 8 00:39:33.902517 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 8 00:39:33.902524 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
May 8 00:39:33.902531 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 8 00:39:33.902538 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 8 00:39:33.902545 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 8 00:39:33.902552 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 8 00:39:33.902559 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 8 00:39:33.902568 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 8 00:39:33.902575 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 8 00:39:33.902582 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 8 00:39:33.902589 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
May 8 00:39:33.902596 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 8 00:39:33.902603 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
May 8 00:39:33.902610 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
May 8 00:39:33.902616 kernel: Freeing SMP alternatives memory: 32K
May 8 00:39:33.902623 kernel: pid_max: default: 32768 minimum: 301
May 8 00:39:33.902632 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 8 00:39:33.902639 kernel: landlock: Up and running.
May 8 00:39:33.902646 kernel: SELinux: Initializing.
May 8 00:39:33.902653 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:39:33.902660 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:39:33.902667 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
May 8 00:39:33.902674 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 8 00:39:33.902681 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 8 00:39:33.902688 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 8 00:39:33.902697 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 8 00:39:33.902704 kernel: ... version: 0
May 8 00:39:33.902711 kernel: ... bit width: 48
May 8 00:39:33.902717 kernel: ... generic registers: 6
May 8 00:39:33.902724 kernel: ... value mask: 0000ffffffffffff
May 8 00:39:33.902731 kernel: ... max period: 00007fffffffffff
May 8 00:39:33.902738 kernel: ... fixed-purpose events: 0
May 8 00:39:33.902745 kernel: ... event mask: 000000000000003f
May 8 00:39:33.902751 kernel: signal: max sigframe size: 3376
May 8 00:39:33.902773 kernel: rcu: Hierarchical SRCU implementation.
May 8 00:39:33.902792 kernel: rcu: Max phase no-delay instances is 400.
May 8 00:39:33.902798 kernel: smp: Bringing up secondary CPUs ...
May 8 00:39:33.902805 kernel: smpboot: x86: Booting SMP configuration:
May 8 00:39:33.902812 kernel: .... node #0, CPUs: #1
May 8 00:39:33.902819 kernel: smp: Brought up 1 node, 2 CPUs
May 8 00:39:33.902826 kernel: smpboot: Max logical packages: 1
May 8 00:39:33.902832 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
May 8 00:39:33.902839 kernel: devtmpfs: initialized
May 8 00:39:33.902846 kernel: x86/mm: Memory block size: 128MB
May 8 00:39:33.902856 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 8 00:39:33.902862 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 8 00:39:33.902869 kernel: pinctrl core: initialized pinctrl subsystem
May 8 00:39:33.902876 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 8 00:39:33.902883 kernel: audit: initializing netlink subsys (disabled)
May 8 00:39:33.902890 kernel: audit: type=2000 audit(1746664773.035:1): state=initialized audit_enabled=0 res=1
May 8 00:39:33.902896 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 8 00:39:33.902903 kernel: thermal_sys: Registered thermal governor 'user_space'
May 8 00:39:33.902912 kernel: cpuidle: using governor menu
May 8 00:39:33.902919 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 8 00:39:33.902926 kernel: dca service started, version 1.12.1
May 8 00:39:33.902933 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 8 00:39:33.902940 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 8 00:39:33.902947 kernel: PCI: Using configuration type 1 for base access
May 8 00:39:33.902954 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 8 00:39:33.902961 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 8 00:39:33.902967 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 8 00:39:33.902977 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 8 00:39:33.902984 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 8 00:39:33.902990 kernel: ACPI: Added _OSI(Module Device)
May 8 00:39:33.902997 kernel: ACPI: Added _OSI(Processor Device)
May 8 00:39:33.903004 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 8 00:39:33.903011 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 8 00:39:33.903017 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 8 00:39:33.903024 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 8 00:39:33.903031 kernel: ACPI: Interpreter enabled
May 8 00:39:33.903040 kernel: ACPI: PM: (supports S0 S3 S5)
May 8 00:39:33.903046 kernel: ACPI: Using IOAPIC for interrupt routing
May 8 00:39:33.903053 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 8 00:39:33.903060 kernel: PCI: Using E820 reservations for host bridge windows
May 8 00:39:33.903067 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 8 00:39:33.903074 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 8 00:39:33.903248 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:39:33.903377 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 8 00:39:33.903502 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 8 00:39:33.903512 kernel: PCI host bridge to bus 0000:00
May 8 00:39:33.903636 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 8 00:39:33.903745 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 8 00:39:33.903875 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 8 00:39:33.903981 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
May 8 00:39:33.904086 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 8 00:39:33.904198 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
May 8 00:39:33.904305 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 8 00:39:33.904443 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 8 00:39:33.904569 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 8 00:39:33.904684 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
May 8 00:39:33.904814 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
May 8 00:39:33.904934 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
May 8 00:39:33.905048 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 8 00:39:33.905174 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
May 8 00:39:33.905302 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
May 8 00:39:33.905427 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
May 8 00:39:33.905545 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
May 8 00:39:33.905671 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
May 8 00:39:33.905834 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
May 8 00:39:33.905952 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
May 8 00:39:33.906066 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
May 8 00:39:33.906179 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
May 8 00:39:33.906333 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 8 00:39:33.906458 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 8 00:39:33.906583 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 8 00:39:33.906703 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
May 8 00:39:33.906846 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
May 8 00:39:33.906973 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 8 00:39:33.907087 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 8 00:39:33.907096 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 8 00:39:33.907104 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 8 00:39:33.907111 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 8 00:39:33.907121 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 8 00:39:33.907128 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 8 00:39:33.907135 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 8 00:39:33.907142 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 8 00:39:33.907149 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 8 00:39:33.907156 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 8 00:39:33.907163 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 8 00:39:33.907170 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 8 00:39:33.907177 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 8 00:39:33.907185 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 8 00:39:33.907192 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 8 00:39:33.907199 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 8 00:39:33.907206 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 8 00:39:33.907213 kernel: iommu: Default domain type: Translated
May 8 00:39:33.907219 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 8 00:39:33.907226 kernel: PCI: Using ACPI for IRQ routing
May 8 00:39:33.907233 kernel: PCI: pci_cache_line_size set to 64 bytes
May 8 00:39:33.907240 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
May 8 00:39:33.907249 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
May 8 00:39:33.907360 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 8 00:39:33.907473 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 8 00:39:33.907585 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 8 00:39:33.907595 kernel: vgaarb: loaded
May 8 00:39:33.907602 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 8 00:39:33.907609 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 8 00:39:33.907616 kernel: clocksource: Switched to clocksource kvm-clock
May 8 00:39:33.907622 kernel: VFS: Disk quotas dquot_6.6.0
May 8 00:39:33.907633 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 8 00:39:33.907640 kernel: pnp: PnP ACPI init
May 8 00:39:33.907843 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
May 8 00:39:33.907856 kernel: pnp: PnP ACPI: found 5 devices
May 8 00:39:33.907863 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 8 00:39:33.907870 kernel: NET: Registered PF_INET protocol family
May 8 00:39:33.907878 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 8 00:39:33.907885 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 8 00:39:33.907896 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 8 00:39:33.907902 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 8 00:39:33.907909 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 8 00:39:33.907916 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 8 00:39:33.907923 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:39:33.907930 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:39:33.907937 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 8 00:39:33.907944 kernel: NET: Registered PF_XDP protocol family
May 8 00:39:33.908051 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 8 00:39:33.908158 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 8 00:39:33.908260 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 8 00:39:33.908363 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
May 8 00:39:33.908467 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 8 00:39:33.908570 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
May 8 00:39:33.908579 kernel: PCI: CLS 0 bytes, default 64
May 8 00:39:33.908586 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 8 00:39:33.908593 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
May 8 00:39:33.908604 kernel: Initialise system trusted keyrings
May 8 00:39:33.908611 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 8 00:39:33.908618 kernel: Key type asymmetric registered
May 8 00:39:33.908625 kernel: Asymmetric key parser 'x509' registered
May 8 00:39:33.908632 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 8 00:39:33.908639 kernel: io scheduler mq-deadline registered
May 8 00:39:33.908645 kernel: io scheduler kyber registered
May 8 00:39:33.908652 kernel: io scheduler bfq registered
May 8 00:39:33.908659 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 8 00:39:33.908669 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 8 00:39:33.908676 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 8 00:39:33.908682 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 00:39:33.908690 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 8 00:39:33.908696 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 8 00:39:33.908703 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 8 00:39:33.908710 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 8 00:39:33.908852 kernel: rtc_cmos 00:03: RTC can wake from S4
May 8 00:39:33.908867 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 8 00:39:33.908974 kernel: rtc_cmos 00:03: registered as rtc0
May 8 00:39:33.909099 kernel: rtc_cmos 00:03: setting system clock to 2025-05-08T00:39:33 UTC (1746664773)
May 8 00:39:33.909206 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 8 00:39:33.909215 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 8 00:39:33.909222 kernel: NET: Registered PF_INET6 protocol family
May 8 00:39:33.909229 kernel: Segment Routing with IPv6
May 8 00:39:33.909236 kernel: In-situ OAM (IOAM) with IPv6
May 8 00:39:33.909243 kernel: NET: Registered PF_PACKET protocol family
May 8 00:39:33.909253 kernel: Key type dns_resolver registered
May 8 00:39:33.909260 kernel: IPI shorthand broadcast: enabled
May 8 00:39:33.909267 kernel: sched_clock: Marking stable (693003486, 206427008)->(956672038, -57241544)
May 8 00:39:33.909274 kernel: registered taskstats version 1
May 8 00:39:33.909281 kernel: Loading compiled-in X.509 certificates
May 8 00:39:33.909288 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: dac8423f6f9fa2fb5f636925d45d7c2572b3a9b6'
May 8 00:39:33.909295 kernel: Key type .fscrypt registered
May 8 00:39:33.909301 kernel: Key type fscrypt-provisioning registered
May 8 00:39:33.909309 kernel: ima: No TPM chip found, activating TPM-bypass!
May 8 00:39:33.909318 kernel: ima: Allocated hash algorithm: sha1
May 8 00:39:33.909324 kernel: ima: No architecture policies found
May 8 00:39:33.909331 kernel: clk: Disabling unused clocks
May 8 00:39:33.909338 kernel: Freeing unused kernel image (initmem) memory: 43484K
May 8 00:39:33.909345 kernel: Write protecting the kernel read-only data: 38912k
May 8 00:39:33.909352 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K
May 8 00:39:33.909359 kernel: Run /init as init process
May 8 00:39:33.909365 kernel: with arguments:
May 8 00:39:33.909372 kernel: /init
May 8 00:39:33.909382 kernel: with environment:
May 8 00:39:33.909388 kernel: HOME=/
May 8 00:39:33.909395 kernel: TERM=linux
May 8 00:39:33.909402 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 8 00:39:33.909409 systemd[1]: Successfully made /usr/ read-only.
May 8 00:39:33.909419 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 8 00:39:33.909427 systemd[1]: Detected virtualization kvm.
May 8 00:39:33.909436 systemd[1]: Detected architecture x86-64.
May 8 00:39:33.909444 systemd[1]: Running in initrd.
May 8 00:39:33.909451 systemd[1]: No hostname configured, using default hostname.
May 8 00:39:33.909458 systemd[1]: Hostname set to .
May 8 00:39:33.909465 systemd[1]: Initializing machine ID from random generator.
May 8 00:39:33.909486 systemd[1]: Queued start job for default target initrd.target.
May 8 00:39:33.909497 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:39:33.909505 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:39:33.909513 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 8 00:39:33.909521 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:39:33.909528 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 8 00:39:33.909536 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 8 00:39:33.909545 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 8 00:39:33.909554 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 8 00:39:33.909562 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:39:33.909570 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:39:33.909577 systemd[1]: Reached target paths.target - Path Units.
May 8 00:39:33.909585 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:39:33.909592 systemd[1]: Reached target swap.target - Swaps.
May 8 00:39:33.909600 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:39:33.909607 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:39:33.909615 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:39:33.909625 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 8 00:39:33.909633 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 8 00:39:33.909640 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:39:33.909648 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:39:33.909655 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:39:33.909662 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:39:33.909670 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 8 00:39:33.909677 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:39:33.909687 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 8 00:39:33.909694 systemd[1]: Starting systemd-fsck-usr.service...
May 8 00:39:33.909702 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:39:33.909709 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:39:33.909717 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:39:33.909743 systemd-journald[177]: Collecting audit messages is disabled.
May 8 00:39:33.909792 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 8 00:39:33.909800 systemd-journald[177]: Journal started
May 8 00:39:33.909821 systemd-journald[177]: Runtime Journal (/run/log/journal/aee79b8d409a4e2ea69ca57aa7f53ed2) is 8M, max 78.3M, 70.3M free.
May 8 00:39:33.912925 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:39:33.917503 systemd-modules-load[179]: Inserted module 'overlay'
May 8 00:39:33.920477 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:39:33.921210 systemd[1]: Finished systemd-fsck-usr.service.
May 8 00:39:33.937906 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 00:39:33.987100 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 00:39:33.987127 kernel: Bridge firewalling registered
May 8 00:39:33.947928 systemd-modules-load[179]: Inserted module 'br_netfilter'
May 8 00:39:34.000893 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:39:34.002875 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:39:34.004425 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:39:34.005925 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:39:34.012890 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:39:34.014887 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:39:34.018875 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:39:34.037715 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:39:34.049959 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:39:34.053707 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 8 00:39:34.062172 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:39:34.063585 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:39:34.071128 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:39:34.073445 dracut-cmdline[209]: dracut-dracut-053
May 8 00:39:34.075719 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979
May 8 00:39:34.106206 systemd-resolved[217]: Positive Trust Anchors:
May 8 00:39:34.106886 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:39:34.106914 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:39:34.112090 systemd-resolved[217]: Defaulting to hostname 'linux'.
May 8 00:39:34.114135 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:39:34.114810 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:39:34.147780 kernel: SCSI subsystem initialized
May 8 00:39:34.156775 kernel: Loading iSCSI transport class v2.0-870.
May 8 00:39:34.166778 kernel: iscsi: registered transport (tcp)
May 8 00:39:34.187081 kernel: iscsi: registered transport (qla4xxx)
May 8 00:39:34.187142 kernel: QLogic iSCSI HBA Driver
May 8 00:39:34.233517 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 8 00:39:34.238913 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 8 00:39:34.264389 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 00:39:34.264431 kernel: device-mapper: uevent: version 1.0.3
May 8 00:39:34.265143 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 8 00:39:34.307785 kernel: raid6: avx2x4 gen() 33711 MB/s
May 8 00:39:34.325778 kernel: raid6: avx2x2 gen() 31872 MB/s
May 8 00:39:34.344300 kernel: raid6: avx2x1 gen() 23128 MB/s
May 8 00:39:34.344322 kernel: raid6: using algorithm avx2x4 gen() 33711 MB/s
May 8 00:39:34.363378 kernel: raid6: .... xor() 4876 MB/s, rmw enabled
May 8 00:39:34.363412 kernel: raid6: using avx2x2 recovery algorithm
May 8 00:39:34.382793 kernel: xor: automatically using best checksumming function avx
May 8 00:39:34.506791 kernel: Btrfs loaded, zoned=no, fsverity=no
May 8 00:39:34.519707 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:39:34.524921 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:39:34.540407 systemd-udevd[398]: Using default interface naming scheme 'v255'.
May 8 00:39:34.545235 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:39:34.552918 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 8 00:39:34.567816 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
May 8 00:39:34.599169 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:39:34.604913 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:39:34.663043 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:39:34.669969 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 8 00:39:34.692163 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 8 00:39:34.694684 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:39:34.696839 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:39:34.698237 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:39:34.703899 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 8 00:39:34.722001 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:39:34.733794 kernel: scsi host0: Virtio SCSI HBA
May 8 00:39:34.815801 kernel: cryptd: max_cpu_qlen set to 1000
May 8 00:39:34.838797 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
May 8 00:39:34.854787 kernel: libata version 3.00 loaded.
May 8 00:39:34.879689 kernel: AVX2 version of gcm_enc/dec engaged.
May 8 00:39:34.879721 kernel: AES CTR mode by8 optimization enabled
May 8 00:39:34.882778 kernel: ahci 0000:00:1f.2: version 3.0
May 8 00:39:34.924923 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 8 00:39:34.924940 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 8 00:39:34.925095 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 8 00:39:34.925240 kernel: scsi host1: ahci
May 8 00:39:34.925393 kernel: scsi host2: ahci
May 8 00:39:34.927879 kernel: scsi host3: ahci
May 8 00:39:34.928035 kernel: scsi host4: ahci
May 8 00:39:34.928182 kernel: scsi host5: ahci
May 8 00:39:34.928334 kernel: scsi host6: ahci
May 8 00:39:34.928477 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46
May 8 00:39:34.928492 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46
May 8 00:39:34.928501 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46
May 8 00:39:34.928511 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46
May 8 00:39:34.928520 kernel: sd 0:0:0:0: Power-on or device reset occurred
May 8 00:39:34.941461 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46
May 8 00:39:34.941476 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
May 8 00:39:34.941630 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46
May 8 00:39:34.941642 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 8 00:39:34.943823 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
May 8 00:39:34.943970 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 8 00:39:34.944109 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 8 00:39:34.944119 kernel: GPT:9289727 != 167739391
May 8 00:39:34.944129 kernel: GPT:Alternate GPT header not at the end of the disk.
May 8 00:39:34.944138 kernel: GPT:9289727 != 167739391
May 8 00:39:34.944147 kernel: GPT: Use GNU Parted to correct GPT errors.
May 8 00:39:34.944156 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 8 00:39:34.944169 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 8 00:39:34.889256 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:39:34.889372 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:39:34.891086 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:39:34.891832 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:39:34.892054 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:39:34.893169 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:39:34.900045 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:39:34.905275 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 8 00:39:34.989983 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:39:35.000906 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:39:35.041436 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:39:35.231525 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 8 00:39:35.231572 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 8 00:39:35.231584 kernel: ata3: SATA link down (SStatus 0 SControl 300)
May 8 00:39:35.231594 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 8 00:39:35.231612 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 8 00:39:35.231622 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 8 00:39:35.276796 kernel: BTRFS: device fsid 1c9931ea-0995-4065-8a57-32743027822a devid 1 transid 42 /dev/sda3 scanned by (udev-worker) (459)
May 8 00:39:35.282787 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (457)
May 8 00:39:35.295641 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
May 8 00:39:35.304815 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
May 8 00:39:35.312519 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
May 8 00:39:35.313143 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
May 8 00:39:35.322115 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 8 00:39:35.327879 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 8 00:39:35.333182 disk-uuid[571]: Primary Header is updated.
May 8 00:39:35.333182 disk-uuid[571]: Secondary Entries is updated.
May 8 00:39:35.333182 disk-uuid[571]: Secondary Header is updated.
May 8 00:39:35.338795 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 8 00:39:35.344830 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 8 00:39:36.347918 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 8 00:39:36.349438 disk-uuid[572]: The operation has completed successfully.
May 8 00:39:36.400740 systemd[1]: disk-uuid.service: Deactivated successfully.
May 8 00:39:36.400883 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 8 00:39:36.437880 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 8 00:39:36.441138 sh[586]: Success
May 8 00:39:36.454783 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 8 00:39:36.505388 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 8 00:39:36.512854 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 8 00:39:36.514139 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 8 00:39:36.541090 kernel: BTRFS info (device dm-0): first mount of filesystem 1c9931ea-0995-4065-8a57-32743027822a
May 8 00:39:36.541121 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 8 00:39:36.541133 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 8 00:39:36.543323 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 8 00:39:36.544892 kernel: BTRFS info (device dm-0): using free space tree
May 8 00:39:36.552781 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 8 00:39:36.554326 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 8 00:39:36.555327 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 8 00:39:36.566907 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 8 00:39:36.568780 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 8 00:39:36.589791 kernel: BTRFS info (device sda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 00:39:36.589816 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:39:36.592536 kernel: BTRFS info (device sda6): using free space tree
May 8 00:39:36.598701 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 8 00:39:36.598730 kernel: BTRFS info (device sda6): auto enabling async discard
May 8 00:39:36.604793 kernel: BTRFS info (device sda6): last unmount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 00:39:36.606059 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 8 00:39:36.612932 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 8 00:39:36.694663 ignition[690]: Ignition 2.20.0
May 8 00:39:36.695518 ignition[690]: Stage: fetch-offline
May 8 00:39:36.695559 ignition[690]: no configs at "/usr/lib/ignition/base.d"
May 8 00:39:36.695570 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 8 00:39:36.697218 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:39:36.695649 ignition[690]: parsed url from cmdline: ""
May 8 00:39:36.699015 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:39:36.695653 ignition[690]: no config URL provided
May 8 00:39:36.695658 ignition[690]: reading system config file "/usr/lib/ignition/user.ign"
May 8 00:39:36.695667 ignition[690]: no config at "/usr/lib/ignition/user.ign"
May 8 00:39:36.695672 ignition[690]: failed to fetch config: resource requires networking
May 8 00:39:36.695942 ignition[690]: Ignition finished successfully
May 8 00:39:36.707951 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:39:36.733024 systemd-networkd[772]: lo: Link UP
May 8 00:39:36.733035 systemd-networkd[772]: lo: Gained carrier
May 8 00:39:36.734704 systemd-networkd[772]: Enumeration completed
May 8 00:39:36.734847 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 00:39:36.735453 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:39:36.735457 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:39:36.736366 systemd[1]: Reached target network.target - Network.
May 8 00:39:36.737162 systemd-networkd[772]: eth0: Link UP
May 8 00:39:36.737166 systemd-networkd[772]: eth0: Gained carrier
May 8 00:39:36.737173 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:39:36.742883 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 8 00:39:36.757045 ignition[775]: Ignition 2.20.0
May 8 00:39:36.757058 ignition[775]: Stage: fetch
May 8 00:39:36.757197 ignition[775]: no configs at "/usr/lib/ignition/base.d"
May 8 00:39:36.757208 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 8 00:39:36.757292 ignition[775]: parsed url from cmdline: ""
May 8 00:39:36.757296 ignition[775]: no config URL provided
May 8 00:39:36.757301 ignition[775]: reading system config file "/usr/lib/ignition/user.ign"
May 8 00:39:36.757310 ignition[775]: no config at "/usr/lib/ignition/user.ign"
May 8 00:39:36.757332 ignition[775]: PUT http://169.254.169.254/v1/token: attempt #1
May 8 00:39:36.757509 ignition[775]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
May 8 00:39:36.958466 ignition[775]: PUT http://169.254.169.254/v1/token: attempt #2
May 8 00:39:36.958629 ignition[775]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
May 8 00:39:37.239840 systemd-networkd[772]: eth0: DHCPv4 address 172.237.145.87/24, gateway 172.237.145.1 acquired from 23.215.119.0
May 8 00:39:37.358909 ignition[775]: PUT http://169.254.169.254/v1/token: attempt #3
May 8 00:39:37.450514 ignition[775]: PUT result: OK
May 8 00:39:37.450563 ignition[775]: GET http://169.254.169.254/v1/user-data: attempt #1
May 8 00:39:37.567885 ignition[775]: GET result: OK
May 8 00:39:37.567964 ignition[775]: parsing config with SHA512: b530026fb66758bb38fa73998843d138f112355adfa16dff79f0266d48af331e85770edad17a60719a8bccdf54100d55395b592ead540220bb115eb503f50452
May 8 00:39:37.573308 unknown[775]: fetched base config from "system"
May 8 00:39:37.573317 unknown[775]: fetched base config from "system"
May 8 00:39:37.573597 ignition[775]: fetch: fetch complete
May 8 00:39:37.573323 unknown[775]: fetched user config from "akamai"
May 8 00:39:37.573602 ignition[775]: fetch: fetch passed
May 8 00:39:37.573637 ignition[775]: Ignition finished successfully
May 8 00:39:37.576507 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 8 00:39:37.581921 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 8 00:39:37.594152 ignition[782]: Ignition 2.20.0
May 8 00:39:37.594161 ignition[782]: Stage: kargs
May 8 00:39:37.594288 ignition[782]: no configs at "/usr/lib/ignition/base.d"
May 8 00:39:37.594298 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 8 00:39:37.595609 ignition[782]: kargs: kargs passed
May 8 00:39:37.597057 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 8 00:39:37.595647 ignition[782]: Ignition finished successfully
May 8 00:39:37.615874 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 8 00:39:37.625042 ignition[788]: Ignition 2.20.0
May 8 00:39:37.625571 ignition[788]: Stage: disks
May 8 00:39:37.625731 ignition[788]: no configs at "/usr/lib/ignition/base.d"
May 8 00:39:37.625743 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 8 00:39:37.627999 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 8 00:39:37.626534 ignition[788]: disks: disks passed
May 8 00:39:37.650210 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 8 00:39:37.626571 ignition[788]: Ignition finished successfully
May 8 00:39:37.651054 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 8 00:39:37.652028 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 00:39:37.653148 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 00:39:37.654144 systemd[1]: Reached target basic.target - Basic System.
May 8 00:39:37.660865 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 8 00:39:37.674695 systemd-fsck[796]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 8 00:39:37.676501 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 8 00:39:37.681843 systemd[1]: Mounting sysroot.mount - /sysroot...
May 8 00:39:37.754776 kernel: EXT4-fs (sda9): mounted filesystem 369e2962-701e-4244-8c1c-27f8fa83bc64 r/w with ordered data mode. Quota mode: none.
May 8 00:39:37.755651 systemd[1]: Mounted sysroot.mount - /sysroot.
May 8 00:39:37.756622 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 8 00:39:37.762822 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:39:37.765148 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 8 00:39:37.765893 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 8 00:39:37.765933 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 8 00:39:37.765956 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:39:37.772804 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 8 00:39:37.777779 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (804)
May 8 00:39:37.778869 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 8 00:39:37.788263 kernel: BTRFS info (device sda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 00:39:37.788279 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:39:37.788290 kernel: BTRFS info (device sda6): using free space tree
May 8 00:39:37.788300 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 8 00:39:37.788309 kernel: BTRFS info (device sda6): auto enabling async discard
May 8 00:39:37.790832 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:39:37.829897 initrd-setup-root[828]: cut: /sysroot/etc/passwd: No such file or directory
May 8 00:39:37.835018 initrd-setup-root[835]: cut: /sysroot/etc/group: No such file or directory
May 8 00:39:37.839795 initrd-setup-root[842]: cut: /sysroot/etc/shadow: No such file or directory
May 8 00:39:37.843179 initrd-setup-root[849]: cut: /sysroot/etc/gshadow: No such file or directory
May 8 00:39:37.921046 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 8 00:39:37.925836 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 8 00:39:37.929662 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 8 00:39:37.934457 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 8 00:39:37.937401 kernel: BTRFS info (device sda6): last unmount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 00:39:37.953873 ignition[916]: INFO : Ignition 2.20.0
May 8 00:39:37.954609 ignition[916]: INFO : Stage: mount
May 8 00:39:37.955655 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:39:37.955655 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 8 00:39:37.957144 ignition[916]: INFO : mount: mount passed
May 8 00:39:37.957144 ignition[916]: INFO : Ignition finished successfully
May 8 00:39:37.958292 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 8 00:39:37.959119 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 8 00:39:37.964855 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 8 00:39:38.025884 systemd-networkd[772]: eth0: Gained IPv6LL
May 8 00:39:38.760908 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:39:38.773782 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (929)
May 8 00:39:38.773848 kernel: BTRFS info (device sda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 00:39:38.776212 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:39:38.777976 kernel: BTRFS info (device sda6): using free space tree
May 8 00:39:38.782836 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 8 00:39:38.782857 kernel: BTRFS info (device sda6): auto enabling async discard
May 8 00:39:38.786542 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:39:38.806968 ignition[946]: INFO : Ignition 2.20.0
May 8 00:39:38.806968 ignition[946]: INFO : Stage: files
May 8 00:39:38.808294 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:39:38.808294 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 8 00:39:38.808294 ignition[946]: DEBUG : files: compiled without relabeling support, skipping
May 8 00:39:38.810486 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 8 00:39:38.810486 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 8 00:39:38.812200 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 8 00:39:38.812200 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 8 00:39:38.812200 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 8 00:39:38.811319 unknown[946]: wrote ssh authorized keys file for user: core
May 8 00:39:38.815246 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 8 00:39:38.815246 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 8 00:39:39.117130 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 8 00:39:39.535741 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 8 00:39:39.535741 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 8 00:39:39.537673 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 8 00:39:39.537673 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:39:39.537673 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:39:39.537673 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:39:39.537673 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:39:39.537673 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:39:39.537673 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:39:39.537673 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:39:39.537673 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:39:39.537673 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 8 00:39:39.537673 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 8 00:39:39.537673 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 8 00:39:39.537673 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
May 8 00:39:39.832946 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 8 00:39:40.106992 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 8 00:39:40.106992 ignition[946]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 8 00:39:40.109328 ignition[946]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:39:40.109328 ignition[946]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:39:40.109328 ignition[946]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 8 00:39:40.109328 ignition[946]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 8 00:39:40.109328 ignition[946]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 8 00:39:40.109328 ignition[946]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 8 00:39:40.109328 ignition[946]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 8 00:39:40.109328 ignition[946]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
May 8 00:39:40.109328 ignition[946]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
May 8 00:39:40.109328 ignition[946]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:39:40.109328 ignition[946]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:39:40.109328 ignition[946]: INFO : files: files passed
May 8 00:39:40.109328 ignition[946]: INFO : Ignition finished successfully
May 8 00:39:40.111853 systemd[1]: Finished ignition-files.service - Ignition (files).
May 8 00:39:40.141488 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 8 00:39:40.144821 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 8 00:39:40.146532 systemd[1]: ignition-quench.service: Deactivated successfully.
May 8 00:39:40.147267 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 8 00:39:40.157003 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:39:40.157003 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:39:40.159831 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:39:40.161624 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 00:39:40.162451 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 8 00:39:40.167922 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 8 00:39:40.190974 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 8 00:39:40.191093 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 8 00:39:40.192540 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 8 00:39:40.193370 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 8 00:39:40.194590 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 8 00:39:40.198897 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 8 00:39:40.210788 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 00:39:40.217947 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 8 00:39:40.227284 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 8 00:39:40.228005 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:39:40.230125 systemd[1]: Stopped target timers.target - Timer Units.
May 8 00:39:40.230707 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 8 00:39:40.230839 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 00:39:40.232269 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 8 00:39:40.233022 systemd[1]: Stopped target basic.target - Basic System.
May 8 00:39:40.234180 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 8 00:39:40.235260 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:39:40.236338 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 8 00:39:40.237549 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 8 00:39:40.238703 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:39:40.239937 systemd[1]: Stopped target sysinit.target - System Initialization.
May 8 00:39:40.241116 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 8 00:39:40.242320 systemd[1]: Stopped target swap.target - Swaps.
May 8 00:39:40.243457 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 8 00:39:40.243559 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:39:40.244807 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 8 00:39:40.245551 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:39:40.246597 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 8 00:39:40.246799 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:39:40.247784 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 8 00:39:40.247882 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 8 00:39:40.249448 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 8 00:39:40.249554 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 00:39:40.250228 systemd[1]: ignition-files.service: Deactivated successfully.
May 8 00:39:40.250320 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 8 00:39:40.267921 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 8 00:39:40.270908 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 8 00:39:40.271474 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 8 00:39:40.271590 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:39:40.273199 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 8 00:39:40.273297 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:39:40.285366 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 8 00:39:40.285473 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 8 00:39:40.288459 ignition[999]: INFO : Ignition 2.20.0
May 8 00:39:40.288459 ignition[999]: INFO : Stage: umount
May 8 00:39:40.288459 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:39:40.288459 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 8 00:39:40.288459 ignition[999]: INFO : umount: umount passed
May 8 00:39:40.288459 ignition[999]: INFO : Ignition finished successfully
May 8 00:39:40.289880 systemd[1]: ignition-mount.service: Deactivated successfully.
May 8 00:39:40.289990 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 8 00:39:40.291854 systemd[1]: ignition-disks.service: Deactivated successfully.
May 8 00:39:40.291941 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 8 00:39:40.294512 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 8 00:39:40.294563 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 8 00:39:40.295701 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 8 00:39:40.295749 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 8 00:39:40.297119 systemd[1]: Stopped target network.target - Network.
May 8 00:39:40.299123 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 8 00:39:40.299178 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:39:40.299812 systemd[1]: Stopped target paths.target - Path Units.
May 8 00:39:40.300272 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 8 00:39:40.303795 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:39:40.304844 systemd[1]: Stopped target slices.target - Slice Units.
May 8 00:39:40.305530 systemd[1]: Stopped target sockets.target - Socket Units.
May 8 00:39:40.329278 systemd[1]: iscsid.socket: Deactivated successfully.
May 8 00:39:40.329322 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:39:40.330016 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 8 00:39:40.330058 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:39:40.330585 systemd[1]: ignition-setup.service: Deactivated successfully.
May 8 00:39:40.330637 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 8 00:39:40.334329 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 8 00:39:40.334385 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 8 00:39:40.338047 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 8 00:39:40.339566 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 8 00:39:40.342704 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 8 00:39:40.345468 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 8 00:39:40.345579 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 8 00:39:40.348301 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 8 00:39:40.348418 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 8 00:39:40.352633 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 8 00:39:40.352982 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 8 00:39:40.353098 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 8 00:39:40.354735 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 8 00:39:40.356354 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 8 00:39:40.356406 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:39:40.357493 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 8 00:39:40.357547 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 8 00:39:40.367129 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 8 00:39:40.367640 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 8 00:39:40.367695 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:39:40.368311 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 8 00:39:40.368359 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 8 00:39:40.369258 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 8 00:39:40.369317 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 8 00:39:40.370047 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 8 00:39:40.370093 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:39:40.371735 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:39:40.376203 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 8 00:39:40.376273 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 8 00:39:40.383568 systemd[1]: network-cleanup.service: Deactivated successfully.
May 8 00:39:40.383682 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 8 00:39:40.392129 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 8 00:39:40.392318 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:39:40.394042 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 8 00:39:40.394118 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 8 00:39:40.395332 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 8 00:39:40.395370 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:39:40.396505 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 8 00:39:40.396554 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:39:40.398210 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 8 00:39:40.398259 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 8 00:39:40.399318 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:39:40.399368 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:39:40.411901 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 8 00:39:40.412447 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 8 00:39:40.412500 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:39:40.413197 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 8 00:39:40.413245 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:39:40.414566 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 8 00:39:40.414615 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:39:40.417152 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:39:40.417202 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:39:40.418860 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 8 00:39:40.418921 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 8 00:39:40.419346 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 8 00:39:40.419454 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 8 00:39:40.420647 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 8 00:39:40.426900 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 8 00:39:40.433886 systemd[1]: Switching root.
May 8 00:39:40.475377 systemd-journald[177]: Journal stopped
May 8 00:39:41.507233 systemd-journald[177]: Received SIGTERM from PID 1 (systemd).
May 8 00:39:41.507257 kernel: SELinux: policy capability network_peer_controls=1
May 8 00:39:41.507269 kernel: SELinux: policy capability open_perms=1
May 8 00:39:41.507278 kernel: SELinux: policy capability extended_socket_class=1
May 8 00:39:41.507287 kernel: SELinux: policy capability always_check_network=0
May 8 00:39:41.507299 kernel: SELinux: policy capability cgroup_seclabel=1
May 8 00:39:41.507309 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 8 00:39:41.507318 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 8 00:39:41.507328 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 8 00:39:41.507338 kernel: audit: type=1403 audit(1746664780.583:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 8 00:39:41.507348 systemd[1]: Successfully loaded SELinux policy in 44.337ms.
May 8 00:39:41.507361 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.529ms.
May 8 00:39:41.507372 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 8 00:39:41.507382 systemd[1]: Detected virtualization kvm.
May 8 00:39:41.507392 systemd[1]: Detected architecture x86-64.
May 8 00:39:41.507402 systemd[1]: Detected first boot.
May 8 00:39:41.507415 systemd[1]: Initializing machine ID from random generator.
May 8 00:39:41.507425 zram_generator::config[1044]: No configuration found.
May 8 00:39:41.507438 kernel: Guest personality initialized and is inactive
May 8 00:39:41.507448 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 8 00:39:41.507457 kernel: Initialized host personality
May 8 00:39:41.507466 kernel: NET: Registered PF_VSOCK protocol family
May 8 00:39:41.507476 systemd[1]: Populated /etc with preset unit settings.
May 8 00:39:41.507488 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 8 00:39:41.507498 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 8 00:39:41.507508 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 8 00:39:41.507518 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 8 00:39:41.507528 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 8 00:39:41.507538 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 8 00:39:41.507549 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 8 00:39:41.507561 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 8 00:39:41.507572 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 8 00:39:41.507582 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 8 00:39:41.507592 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 8 00:39:41.507602 systemd[1]: Created slice user.slice - User and Session Slice.
May 8 00:39:41.507612 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:39:41.507622 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:39:41.507633 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 8 00:39:41.507643 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 8 00:39:41.507660 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 8 00:39:41.507673 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:39:41.507684 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 8 00:39:41.507694 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:39:41.507704 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 8 00:39:41.507720 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 8 00:39:41.507730 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 8 00:39:41.507743 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 8 00:39:41.507764 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:39:41.507775 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:39:41.507785 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:39:41.507797 systemd[1]: Reached target swap.target - Swaps.
May 8 00:39:41.507807 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 8 00:39:41.507817 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 8 00:39:41.507827 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 8 00:39:41.507838 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:39:41.507851 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:39:41.507862 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:39:41.507872 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 8 00:39:41.507882 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 8 00:39:41.507895 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 8 00:39:41.507905 systemd[1]: Mounting media.mount - External Media Directory...
May 8 00:39:41.507916 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:39:41.507926 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 8 00:39:41.507936 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 8 00:39:41.507946 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 8 00:39:41.507957 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 8 00:39:41.507967 systemd[1]: Reached target machines.target - Containers.
May 8 00:39:41.507980 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 8 00:39:41.507990 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:39:41.508000 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:39:41.508010 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 8 00:39:41.508021 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:39:41.508032 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 00:39:41.508042 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:39:41.508052 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 8 00:39:41.508062 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:39:41.508075 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 8 00:39:41.508086 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 8 00:39:41.508096 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 8 00:39:41.508106 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 8 00:39:41.508116 systemd[1]: Stopped systemd-fsck-usr.service.
May 8 00:39:41.508127 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 8 00:39:41.508137 kernel: fuse: init (API version 7.39)
May 8 00:39:41.508150 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:39:41.508160 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:39:41.508171 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 8 00:39:41.508181 kernel: ACPI: bus type drm_connector registered
May 8 00:39:41.508190 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 8 00:39:41.508201 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 8 00:39:41.508210 kernel: loop: module loaded
May 8 00:39:41.508220 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:39:41.508230 systemd[1]: verity-setup.service: Deactivated successfully.
May 8 00:39:41.508244 systemd[1]: Stopped verity-setup.service.
May 8 00:39:41.508271 systemd-journald[1135]: Collecting audit messages is disabled.
May 8 00:39:41.508291 systemd-journald[1135]: Journal started
May 8 00:39:41.508314 systemd-journald[1135]: Runtime Journal (/run/log/journal/b2d57268d8dc45489af8a7d89ed4b758) is 8M, max 78.3M, 70.3M free.
May 8 00:39:41.181265 systemd[1]: Queued start job for default target multi-user.target.
May 8 00:39:41.191507 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 8 00:39:41.192121 systemd[1]: systemd-journald.service: Deactivated successfully.
May 8 00:39:41.526789 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:39:41.532779 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:39:41.533274 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 8 00:39:41.533992 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 8 00:39:41.534745 systemd[1]: Mounted media.mount - External Media Directory.
May 8 00:39:41.535422 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 8 00:39:41.536125 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 8 00:39:41.536842 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 8 00:39:41.537687 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 8 00:39:41.538645 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:39:41.539605 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 8 00:39:41.539882 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 8 00:39:41.540891 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:39:41.541162 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:39:41.542079 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 00:39:41.542356 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 00:39:41.543355 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:39:41.543636 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:39:41.544645 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 8 00:39:41.545047 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 8 00:39:41.546009 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:39:41.546288 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:39:41.547340 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:39:41.548333 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 8 00:39:41.549324 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 8 00:39:41.550264 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 8 00:39:41.567282 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 8 00:39:41.573501 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 8 00:39:41.577826 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 8 00:39:41.579802 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 8 00:39:41.579893 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 00:39:41.581817 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 8 00:39:41.591247 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 8 00:39:41.596842 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 8 00:39:41.597489 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:39:41.603435 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 8 00:39:41.605470 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 8 00:39:41.606220 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:39:41.609852 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 8 00:39:41.610430 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 00:39:41.611900 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:39:41.614940 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 8 00:39:41.617910 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 00:39:41.621118 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 8 00:39:41.623051 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 8 00:39:41.625106 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 8 00:39:41.645622 systemd-journald[1135]: Time spent on flushing to /var/log/journal/b2d57268d8dc45489af8a7d89ed4b758 is 73.556ms for 991 entries.
May 8 00:39:41.645622 systemd-journald[1135]: System Journal (/var/log/journal/b2d57268d8dc45489af8a7d89ed4b758) is 8M, max 195.6M, 187.6M free.
May 8 00:39:41.745812 systemd-journald[1135]: Received client request to flush runtime journal.
May 8 00:39:41.745860 kernel: loop0: detected capacity change from 0 to 138176
May 8 00:39:41.661145 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 8 00:39:41.661948 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 8 00:39:41.671868 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 8 00:39:41.712464 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 8 00:39:41.723644 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:39:41.729966 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:39:41.737930 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 8 00:39:41.748981 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
May 8 00:39:41.748995 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
May 8 00:39:41.756493 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 8 00:39:41.759358 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:39:41.770280 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 8 00:39:41.772150 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 8 00:39:41.785909 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 8 00:39:41.799920 kernel: loop1: detected capacity change from 0 to 218376
May 8 00:39:41.842250 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 8 00:39:41.847779 kernel: loop2: detected capacity change from 0 to 8
May 8 00:39:41.850985 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:39:41.876108 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
May 8 00:39:41.876378 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
May 8 00:39:41.881138 kernel: loop3: detected capacity change from 0 to 147912
May 8 00:39:41.881771 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:39:41.929803 kernel: loop4: detected capacity change from 0 to 138176
May 8 00:39:41.953803 kernel: loop5: detected capacity change from 0 to 218376
May 8 00:39:41.978052 kernel: loop6: detected capacity change from 0 to 8
May 8 00:39:41.982009 kernel: loop7: detected capacity change from 0 to 147912
May 8 00:39:42.000574 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
May 8 00:39:42.001561 (sd-merge)[1200]: Merged extensions into '/usr'.
May 8 00:39:42.011225 systemd[1]: Reload requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)...
May 8 00:39:42.011240 systemd[1]: Reloading...
May 8 00:39:42.120921 zram_generator::config[1227]: No configuration found.
May 8 00:39:42.207028 ldconfig[1165]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 8 00:39:42.245974 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:39:42.302630 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 8 00:39:42.303225 systemd[1]: Reloading finished in 291 ms.
May 8 00:39:42.322874 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 8 00:39:42.323984 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 8 00:39:42.325008 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 8 00:39:42.342532 systemd[1]: Starting ensure-sysext.service...
May 8 00:39:42.346897 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:39:42.352905 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:39:42.366527 systemd[1]: Reload requested from client PID 1272 ('systemctl') (unit ensure-sysext.service)...
May 8 00:39:42.366541 systemd[1]: Reloading...
May 8 00:39:42.389296 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 8 00:39:42.389569 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 8 00:39:42.390636 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 8 00:39:42.392858 systemd-udevd[1274]: Using default interface naming scheme 'v255'.
May 8 00:39:42.393162 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
May 8 00:39:42.393242 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
May 8 00:39:42.401593 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot.
May 8 00:39:42.401604 systemd-tmpfiles[1273]: Skipping /boot
May 8 00:39:42.418040 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot.
May 8 00:39:42.418056 systemd-tmpfiles[1273]: Skipping /boot
May 8 00:39:42.469796 zram_generator::config[1303]: No configuration found.
May 8 00:39:42.640794 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (1317)
May 8 00:39:42.648795 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 8 00:39:42.652253 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:39:42.692959 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 8 00:39:42.693944 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 8 00:39:42.694229 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 8 00:39:42.694418 kernel: ACPI: button: Power Button [PWRF]
May 8 00:39:42.726914 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 8 00:39:42.727338 systemd[1]: Reloading finished in 360 ms.
May 8 00:39:42.737051 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:39:42.739805 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:39:42.754796 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 8 00:39:42.765529 kernel: EDAC MC: Ver: 3.0.0
May 8 00:39:42.808776 kernel: mousedev: PS/2 mouse device common for all mice
May 8 00:39:42.811365 systemd[1]: Finished ensure-sysext.service.
May 8 00:39:42.831670 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 8 00:39:42.834501 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 8 00:39:42.838118 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:39:42.842935 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 8 00:39:42.846881 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 8 00:39:42.847572 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:39:42.850945 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 8 00:39:42.853082 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:39:42.856904 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 00:39:42.865117 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:39:42.882926 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:39:42.883820 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:39:42.887579 lvm[1385]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 00:39:42.886993 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 8 00:39:42.887858 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 8 00:39:42.894891 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 8 00:39:42.904899 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:39:42.911907 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:39:42.915518 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 8 00:39:42.921600 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 8 00:39:42.926879 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:39:42.927816 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:39:42.929480 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 8 00:39:42.932143 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:39:42.932350 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:39:42.933412 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 00:39:42.933970 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 00:39:42.935509 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:39:42.936820 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:39:42.937702 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:39:42.937971 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:39:42.939262 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 8 00:39:42.953833 augenrules[1421]: No rules
May 8 00:39:42.955330 systemd[1]: audit-rules.service: Deactivated successfully.
May 8 00:39:42.955977 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 8 00:39:42.964072 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:39:42.971729 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 8 00:39:42.972432 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:39:42.972578 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 00:39:42.976822 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 8 00:39:42.981275 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 8 00:39:42.984181 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 8 00:39:42.987796 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 00:39:42.992913 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 8 00:39:42.993780 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 8 00:39:42.996593 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 8 00:39:43.014979 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 8 00:39:43.021063 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 8 00:39:43.041534 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 8 00:39:43.084093 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:39:43.147543 systemd-networkd[1407]: lo: Link UP
May 8 00:39:43.147902 systemd-networkd[1407]: lo: Gained carrier
May 8 00:39:43.149575 systemd-networkd[1407]: Enumeration completed
May 8 00:39:43.149714 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 00:39:43.151935 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:39:43.152008 systemd-networkd[1407]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:39:43.153279 systemd-networkd[1407]: eth0: Link UP
May 8 00:39:43.153334 systemd-networkd[1407]: eth0: Gained carrier
May 8 00:39:43.153393 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:39:43.159011 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 8 00:39:43.163921 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 8 00:39:43.166969 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 8 00:39:43.167879 systemd[1]: Reached target time-set.target - System Time Set.
May 8 00:39:43.169277 systemd-resolved[1408]: Positive Trust Anchors:
May 8 00:39:43.169416 systemd-resolved[1408]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:39:43.169443 systemd-resolved[1408]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:39:43.172906 systemd-resolved[1408]: Defaulting to hostname 'linux'.
May 8 00:39:43.178879 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:39:43.181933 systemd[1]: Reached target network.target - Network.
May 8 00:39:43.182596 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:39:43.183382 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 00:39:43.183989 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 8 00:39:43.184554 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 8 00:39:43.185306 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 8 00:39:43.186033 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 8 00:39:43.186821 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 8 00:39:43.187477 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 8 00:39:43.187506 systemd[1]: Reached target paths.target - Path Units.
May 8 00:39:43.188096 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:39:43.189442 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 8 00:39:43.191975 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 8 00:39:43.194996 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 8 00:39:43.195828 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 8 00:39:43.196508 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 8 00:39:43.204499 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 8 00:39:43.205600 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 8 00:39:43.207186 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 8 00:39:43.208056 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 8 00:39:43.209261 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:39:43.209984 systemd[1]: Reached target basic.target - Basic System.
May 8 00:39:43.210532 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 8 00:39:43.210568 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 8 00:39:43.216842 systemd[1]: Starting containerd.service - containerd container runtime...
May 8 00:39:43.218910 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 8 00:39:43.220919 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 8 00:39:43.225859 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 8 00:39:43.229901 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 8 00:39:43.230436 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 8 00:39:43.233910 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 8 00:39:43.248226 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 8 00:39:43.253921 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 8 00:39:43.258923 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 8 00:39:43.263813 jq[1457]: false
May 8 00:39:43.269902 systemd[1]: Starting systemd-logind.service - User Login Management...
May 8 00:39:43.272015 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 8 00:39:43.272439 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 8 00:39:43.275522 systemd[1]: Starting update-engine.service - Update Engine...
May 8 00:39:43.284241 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 8 00:39:43.289183 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 8 00:39:43.289417 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 8 00:39:43.294164 extend-filesystems[1458]: Found loop4
May 8 00:39:43.295401 extend-filesystems[1458]: Found loop5
May 8 00:39:43.295401 extend-filesystems[1458]: Found loop6
May 8 00:39:43.295401 extend-filesystems[1458]: Found loop7
May 8 00:39:43.295401 extend-filesystems[1458]: Found sda
May 8 00:39:43.295401 extend-filesystems[1458]: Found sda1
May 8 00:39:43.295401 extend-filesystems[1458]: Found sda2
May 8 00:39:43.295401 extend-filesystems[1458]: Found sda3
May 8 00:39:43.295401 extend-filesystems[1458]: Found usr
May 8 00:39:43.295401 extend-filesystems[1458]: Found sda4
May 8 00:39:43.295401 extend-filesystems[1458]: Found sda6
May 8 00:39:43.295401 extend-filesystems[1458]: Found sda7
May 8 00:39:43.295401 extend-filesystems[1458]: Found sda9
May 8 00:39:43.295401 extend-filesystems[1458]: Checking size of /dev/sda9
May 8 00:39:43.318134 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 8 00:39:43.319259 jq[1470]: true
May 8 00:39:43.319843 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 8 00:39:43.327403 extend-filesystems[1458]: Resized partition /dev/sda9
May 8 00:39:43.332946 extend-filesystems[1490]: resize2fs 1.47.1 (20-May-2024)
May 8 00:39:43.407349 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
May 8 00:39:43.407549 update_engine[1469]: I20250508 00:39:43.393867 1469 main.cc:92] Flatcar Update Engine starting
May 8 00:39:43.407549 update_engine[1469]: I20250508 00:39:43.397087 1469 update_check_scheduler.cc:74] Next update check in 9m29s
May 8 00:39:43.382247 dbus-daemon[1456]: [system] SELinux support is enabled
May 8 00:39:43.379039 systemd[1]: motdgen.service: Deactivated successfully.
May 8 00:39:43.409083 tar[1476]: linux-amd64/LICENSE
May 8 00:39:43.409083 tar[1476]: linux-amd64/helm
May 8 00:39:43.379301 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 8 00:39:43.409921 jq[1486]: true
May 8 00:39:43.415263 (ntainerd)[1491]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 8 00:39:43.415564 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 8 00:39:43.423164 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 8 00:39:43.423218 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 8 00:39:43.424877 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 8 00:39:43.424905 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 8 00:39:43.427729 systemd[1]: Started update-engine.service - Update Engine.
May 8 00:39:43.439458 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 8 00:39:43.457274 sshd_keygen[1477]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 8 00:39:43.464267 coreos-metadata[1455]: May 08 00:39:43.464 INFO Putting http://169.254.169.254/v1/token: Attempt #1
May 8 00:39:43.478986 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (1316)
May 8 00:39:43.545244 systemd-logind[1468]: Watching system buttons on /dev/input/event1 (Power Button)
May 8 00:39:43.545274 systemd-logind[1468]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 8 00:39:43.549079 systemd-logind[1468]: New seat seat0.
May 8 00:39:43.550082 systemd[1]: Started systemd-logind.service - User Login Management.
May 8 00:39:43.557129 bash[1518]: Updated "/home/core/.ssh/authorized_keys"
May 8 00:39:43.558497 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 8 00:39:43.560977 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 8 00:39:43.569148 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 8 00:39:43.579228 systemd[1]: Starting sshkeys.service...
May 8 00:39:43.620166 systemd[1]: issuegen.service: Deactivated successfully.
May 8 00:39:43.620620 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 8 00:39:43.632892 systemd-networkd[1407]: eth0: DHCPv4 address 172.237.145.87/24, gateway 172.237.145.1 acquired from 23.215.119.0
May 8 00:39:43.636016 systemd-timesyncd[1409]: Network configuration changed, trying to establish connection.
May 8 00:39:43.641474 dbus-daemon[1456]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1407 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
May 8 00:39:43.665823 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 8 00:39:43.674001 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 8 00:39:43.684584 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
May 8 00:39:43.688003 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 8 00:39:43.714798 containerd[1491]: time="2025-05-08T00:39:43.713118233Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
May 8 00:39:43.748849 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 8 00:39:43.756327 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 8 00:39:43.757830 containerd[1491]: time="2025-05-08T00:39:43.757059905Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 8 00:39:43.759085 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 8 00:39:43.761116 systemd[1]: Reached target getty.target - Login Prompts.
May 8 00:39:43.764043 containerd[1491]: time="2025-05-08T00:39:43.763976998Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 8 00:39:43.764043 containerd[1491]: time="2025-05-08T00:39:43.764020488Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 8 00:39:43.764043 containerd[1491]: time="2025-05-08T00:39:43.764036788Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 8 00:39:43.764527 containerd[1491]: time="2025-05-08T00:39:43.764199748Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 8 00:39:43.764527 containerd[1491]: time="2025-05-08T00:39:43.764220568Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 8 00:39:43.764527 containerd[1491]: time="2025-05-08T00:39:43.764282818Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:39:43.764527 containerd[1491]: time="2025-05-08T00:39:43.764293849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 8 00:39:43.764601 containerd[1491]: time="2025-05-08T00:39:43.764529199Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:39:43.764601 containerd[1491]: time="2025-05-08T00:39:43.764543479Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 8 00:39:43.764601 containerd[1491]: time="2025-05-08T00:39:43.764554889Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:39:43.764601 containerd[1491]: time="2025-05-08T00:39:43.764563179Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 8 00:39:43.764686 containerd[1491]: time="2025-05-08T00:39:43.764659249Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 8 00:39:43.765623 containerd[1491]: time="2025-05-08T00:39:43.764899299Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 8 00:39:43.765623 containerd[1491]: time="2025-05-08T00:39:43.765052369Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:39:43.765623 containerd[1491]: time="2025-05-08T00:39:43.765063859Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 8 00:39:43.765623 containerd[1491]: time="2025-05-08T00:39:43.765153129Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 8 00:39:43.765623 containerd[1491]: time="2025-05-08T00:39:43.765204679Z" level=info msg="metadata content store policy set" policy=shared
May 8 00:39:43.776398 locksmithd[1503]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 8 00:39:43.780172 containerd[1491]: time="2025-05-08T00:39:43.780018286Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 8 00:39:43.780172 containerd[1491]: time="2025-05-08T00:39:43.780082866Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 8 00:39:43.780719 containerd[1491]: time="2025-05-08T00:39:43.780105796Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 8 00:39:43.780719 containerd[1491]: time="2025-05-08T00:39:43.780298197Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 8 00:39:43.780719 containerd[1491]: time="2025-05-08T00:39:43.780312007Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 8 00:39:43.780719 containerd[1491]: time="2025-05-08T00:39:43.780429527Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 8 00:39:43.782225 containerd[1491]: time="2025-05-08T00:39:43.781319117Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 8 00:39:43.782225 containerd[1491]: time="2025-05-08T00:39:43.781444337Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 8 00:39:43.782225 containerd[1491]: time="2025-05-08T00:39:43.781458607Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 8 00:39:43.782225 containerd[1491]: time="2025-05-08T00:39:43.781470237Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 8 00:39:43.782225 containerd[1491]: time="2025-05-08T00:39:43.781482147Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 8 00:39:43.782225 containerd[1491]: time="2025-05-08T00:39:43.781492727Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 8 00:39:43.782225 containerd[1491]: time="2025-05-08T00:39:43.781503117Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 8 00:39:43.782225 containerd[1491]: time="2025-05-08T00:39:43.781513857Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 8 00:39:43.782225 containerd[1491]: time="2025-05-08T00:39:43.781526967Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 8 00:39:43.782225 containerd[1491]: time="2025-05-08T00:39:43.781537657Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 8 00:39:43.782225 containerd[1491]: time="2025-05-08T00:39:43.781548187Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 8 00:39:43.782225 containerd[1491]: time="2025-05-08T00:39:43.781557897Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 8 00:39:43.782225 containerd[1491]: time="2025-05-08T00:39:43.781573867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 8 00:39:43.782225 containerd[1491]: time="2025-05-08T00:39:43.781585217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 8 00:39:43.782481 containerd[1491]: time="2025-05-08T00:39:43.781595597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 8 00:39:43.782481 containerd[1491]: time="2025-05-08T00:39:43.781606457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 8 00:39:43.782481 containerd[1491]: time="2025-05-08T00:39:43.781622127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 8 00:39:43.782481 containerd[1491]: time="2025-05-08T00:39:43.781633327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 8 00:39:43.782481 containerd[1491]: time="2025-05-08T00:39:43.781642307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 8 00:39:43.782481 containerd[1491]: time="2025-05-08T00:39:43.781651947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 8 00:39:43.782481 containerd[1491]: time="2025-05-08T00:39:43.781663187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 8 00:39:43.782481 containerd[1491]: time="2025-05-08T00:39:43.781688967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 8 00:39:43.782481 containerd[1491]: time="2025-05-08T00:39:43.781698877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 8 00:39:43.782481 containerd[1491]: time="2025-05-08T00:39:43.781708437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 8 00:39:43.782481 containerd[1491]: time="2025-05-08T00:39:43.781719007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 8 00:39:43.782481 containerd[1491]: time="2025-05-08T00:39:43.781730237Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 8 00:39:43.782481 containerd[1491]: time="2025-05-08T00:39:43.781746927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 8 00:39:43.782481 containerd[1491]: time="2025-05-08T00:39:43.781772207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 8 00:39:43.782481 containerd[1491]: time="2025-05-08T00:39:43.781783137Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 8 00:39:43.785813 containerd[1491]: time="2025-05-08T00:39:43.783618888Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 8 00:39:43.785813 containerd[1491]: time="2025-05-08T00:39:43.783641168Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 8 00:39:43.785813 containerd[1491]: time="2025-05-08T00:39:43.783652188Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 8 00:39:43.785813 containerd[1491]: time="2025-05-08T00:39:43.783962828Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 8 00:39:43.785813 containerd[1491]: time="2025-05-08T00:39:43.783974798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 8 00:39:43.785813 containerd[1491]: time="2025-05-08T00:39:43.783986588Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 8 00:39:43.785813 containerd[1491]: time="2025-05-08T00:39:43.783995368Z" level=info msg="NRI interface is disabled by configuration."
May 8 00:39:43.785813 containerd[1491]: time="2025-05-08T00:39:43.784005568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 8 00:39:43.785974 containerd[1491]: time="2025-05-08T00:39:43.784235308Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 8 00:39:43.785974 containerd[1491]: time="2025-05-08T00:39:43.784274018Z" level=info msg="Connect containerd service"
May 8 00:39:43.785974 containerd[1491]: time="2025-05-08T00:39:43.784294648Z" level=info msg="using legacy CRI server"
May 8 00:39:43.785974 containerd[1491]: time="2025-05-08T00:39:43.784300779Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 8 00:39:43.785974 containerd[1491]: time="2025-05-08T00:39:43.784406059Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 8 00:39:43.785974 containerd[1491]: time="2025-05-08T00:39:43.784903169Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 8 00:39:43.786543 containerd[1491]: time="2025-05-08T00:39:43.786512520Z" level=info msg="Start subscribing containerd event"
May 8 00:39:43.786685 containerd[1491]: time="2025-05-08T00:39:43.786671720Z" level=info msg="Start recovering state"
May 8 00:39:43.786976 containerd[1491]: time="2025-05-08T00:39:43.786963370Z" level=info msg="Start event monitor"
May 8 00:39:43.787037 containerd[1491]: time="2025-05-08T00:39:43.787025530Z" level=info msg="Start snapshots
syncer" May 8 00:39:43.787078 containerd[1491]: time="2025-05-08T00:39:43.787067970Z" level=info msg="Start cni network conf syncer for default" May 8 00:39:43.787130 containerd[1491]: time="2025-05-08T00:39:43.787104820Z" level=info msg="Start streaming server" May 8 00:39:43.787676 containerd[1491]: time="2025-05-08T00:39:43.787600960Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 00:39:43.787884 containerd[1491]: time="2025-05-08T00:39:43.787869770Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 00:39:43.798165 kernel: EXT4-fs (sda9): resized filesystem to 20360187 May 8 00:39:43.798192 containerd[1491]: time="2025-05-08T00:39:43.792335993Z" level=info msg="containerd successfully booted in 0.081649s" May 8 00:39:43.788219 systemd[1]: Started containerd.service - containerd container runtime. May 8 00:39:43.802084 extend-filesystems[1490]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 8 00:39:43.802084 extend-filesystems[1490]: old_desc_blocks = 1, new_desc_blocks = 10 May 8 00:39:43.802084 extend-filesystems[1490]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. May 8 00:39:43.801044 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 00:39:43.808911 coreos-metadata[1535]: May 08 00:39:43.800 INFO Putting http://169.254.169.254/v1/token: Attempt #1 May 8 00:39:43.809133 extend-filesystems[1458]: Resized filesystem in /dev/sda9 May 8 00:39:43.801304 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 8 00:39:43.834483 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
May 8 00:39:43.835360 dbus-daemon[1456]: [system] Successfully activated service 'org.freedesktop.hostname1'
May 8 00:39:43.836148 dbus-daemon[1456]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1536 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
May 8 00:39:43.846617 systemd[1]: Starting polkit.service - Authorization Manager...
May 8 00:39:43.855730 polkitd[1554]: Started polkitd version 121
May 8 00:39:43.859922 polkitd[1554]: Loading rules from directory /etc/polkit-1/rules.d
May 8 00:39:43.860035 polkitd[1554]: Loading rules from directory /usr/share/polkit-1/rules.d
May 8 00:39:43.860500 polkitd[1554]: Finished loading, compiling and executing 2 rules
May 8 00:39:43.860876 dbus-daemon[1456]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
May 8 00:39:43.861137 systemd[1]: Started polkit.service - Authorization Manager.
May 8 00:39:43.861932 polkitd[1554]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
May 8 00:39:43.870922 systemd-resolved[1408]: System hostname changed to '172-237-145-87'.
May 8 00:39:43.871096 systemd-hostnamed[1536]: Hostname set to <172-237-145-87> (transient)
May 8 00:39:43.894169 coreos-metadata[1535]: May 08 00:39:43.894 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1
May 8 00:39:43.906919 systemd-timesyncd[1409]: Contacted time server 198.60.22.240:123 (0.flatcar.pool.ntp.org).
May 8 00:39:43.906974 systemd-timesyncd[1409]: Initial clock synchronization to Thu 2025-05-08 00:39:44.230153 UTC.
May 8 00:39:44.028244 coreos-metadata[1535]: May 08 00:39:44.028 INFO Fetch successful
May 8 00:39:44.049618 update-ssh-keys[1565]: Updated "/home/core/.ssh/authorized_keys"
May 8 00:39:44.050981 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 8 00:39:44.055112 systemd[1]: Finished sshkeys.service.
May 8 00:39:44.087738 tar[1476]: linux-amd64/README.md
May 8 00:39:44.099311 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 8 00:39:44.472149 coreos-metadata[1455]: May 08 00:39:44.472 INFO Putting http://169.254.169.254/v1/token: Attempt #2
May 8 00:39:44.563905 coreos-metadata[1455]: May 08 00:39:44.563 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1
May 8 00:39:44.825890 coreos-metadata[1455]: May 08 00:39:44.825 INFO Fetch successful
May 8 00:39:44.826083 coreos-metadata[1455]: May 08 00:39:44.826 INFO Fetching http://169.254.169.254/v1/network: Attempt #1
May 8 00:39:44.939864 systemd-networkd[1407]: eth0: Gained IPv6LL
May 8 00:39:44.942709 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 8 00:39:44.943863 systemd[1]: Reached target network-online.target - Network is Online.
May 8 00:39:44.949985 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:39:44.952891 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 8 00:39:44.978539 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 8 00:39:45.122110 coreos-metadata[1455]: May 08 00:39:45.120 INFO Fetch successful
May 8 00:39:45.198856 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 8 00:39:45.199896 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 8 00:39:45.845938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:39:45.847032 systemd[1]: Reached target multi-user.target - Multi-User System.
May 8 00:39:45.849178 (kubelet)[1609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:39:45.886363 systemd[1]: Startup finished in 816ms (kernel) + 6.892s (initrd) + 5.346s (userspace) = 13.054s.
May 8 00:39:46.370007 kubelet[1609]: E0508 00:39:46.369938 1609 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:39:46.374154 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:39:46.374374 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:39:46.375035 systemd[1]: kubelet.service: Consumed 872ms CPU time, 254.6M memory peak.
May 8 00:39:48.142267 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 8 00:39:48.149090 systemd[1]: Started sshd@0-172.237.145.87:22-139.178.89.65:51640.service - OpenSSH per-connection server daemon (139.178.89.65:51640).
May 8 00:39:48.489176 sshd[1621]: Accepted publickey for core from 139.178.89.65 port 51640 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA
May 8 00:39:48.491522 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:39:48.498863 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 8 00:39:48.511267 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 8 00:39:48.518387 systemd-logind[1468]: New session 1 of user core.
May 8 00:39:48.524278 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 8 00:39:48.532070 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 8 00:39:48.535420 (systemd)[1625]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 8 00:39:48.538060 systemd-logind[1468]: New session c1 of user core.
May 8 00:39:48.674089 systemd[1625]: Queued start job for default target default.target.
May 8 00:39:48.681127 systemd[1625]: Created slice app.slice - User Application Slice.
May 8 00:39:48.681156 systemd[1625]: Reached target paths.target - Paths.
May 8 00:39:48.681202 systemd[1625]: Reached target timers.target - Timers.
May 8 00:39:48.682735 systemd[1625]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 8 00:39:48.694672 systemd[1625]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 8 00:39:48.694840 systemd[1625]: Reached target sockets.target - Sockets.
May 8 00:39:48.694885 systemd[1625]: Reached target basic.target - Basic System.
May 8 00:39:48.694930 systemd[1625]: Reached target default.target - Main User Target.
May 8 00:39:48.694965 systemd[1625]: Startup finished in 150ms.
May 8 00:39:48.695151 systemd[1]: Started user@500.service - User Manager for UID 500.
May 8 00:39:48.706935 systemd[1]: Started session-1.scope - Session 1 of User core.
May 8 00:39:48.984028 systemd[1]: Started sshd@1-172.237.145.87:22-139.178.89.65:51644.service - OpenSSH per-connection server daemon (139.178.89.65:51644).
May 8 00:39:49.320773 sshd[1636]: Accepted publickey for core from 139.178.89.65 port 51644 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA
May 8 00:39:49.322506 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:39:49.326558 systemd-logind[1468]: New session 2 of user core.
May 8 00:39:49.333972 systemd[1]: Started session-2.scope - Session 2 of User core.
May 8 00:39:49.569831 sshd[1638]: Connection closed by 139.178.89.65 port 51644
May 8 00:39:49.570479 sshd-session[1636]: pam_unix(sshd:session): session closed for user core
May 8 00:39:49.575324 systemd[1]: sshd@1-172.237.145.87:22-139.178.89.65:51644.service: Deactivated successfully.
May 8 00:39:49.578392 systemd[1]: session-2.scope: Deactivated successfully.
May 8 00:39:49.579528 systemd-logind[1468]: Session 2 logged out. Waiting for processes to exit.
May 8 00:39:49.580531 systemd-logind[1468]: Removed session 2.
May 8 00:39:49.640006 systemd[1]: Started sshd@2-172.237.145.87:22-139.178.89.65:51654.service - OpenSSH per-connection server daemon (139.178.89.65:51654).
May 8 00:39:49.979987 sshd[1644]: Accepted publickey for core from 139.178.89.65 port 51654 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA
May 8 00:39:49.983258 sshd-session[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:39:49.988024 systemd-logind[1468]: New session 3 of user core.
May 8 00:39:49.995926 systemd[1]: Started session-3.scope - Session 3 of User core.
May 8 00:39:50.232718 sshd[1646]: Connection closed by 139.178.89.65 port 51654
May 8 00:39:50.233571 sshd-session[1644]: pam_unix(sshd:session): session closed for user core
May 8 00:39:50.237463 systemd[1]: sshd@2-172.237.145.87:22-139.178.89.65:51654.service: Deactivated successfully.
May 8 00:39:50.239811 systemd[1]: session-3.scope: Deactivated successfully.
May 8 00:39:50.241541 systemd-logind[1468]: Session 3 logged out. Waiting for processes to exit.
May 8 00:39:50.242605 systemd-logind[1468]: Removed session 3.
May 8 00:39:50.298987 systemd[1]: Started sshd@3-172.237.145.87:22-139.178.89.65:51656.service - OpenSSH per-connection server daemon (139.178.89.65:51656).
May 8 00:39:50.636147 sshd[1652]: Accepted publickey for core from 139.178.89.65 port 51656 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA
May 8 00:39:50.637848 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:39:50.642156 systemd-logind[1468]: New session 4 of user core.
May 8 00:39:50.652095 systemd[1]: Started session-4.scope - Session 4 of User core.
May 8 00:39:50.890055 sshd[1654]: Connection closed by 139.178.89.65 port 51656
May 8 00:39:50.890601 sshd-session[1652]: pam_unix(sshd:session): session closed for user core
May 8 00:39:50.893390 systemd-logind[1468]: Session 4 logged out. Waiting for processes to exit.
May 8 00:39:50.893882 systemd[1]: sshd@3-172.237.145.87:22-139.178.89.65:51656.service: Deactivated successfully.
May 8 00:39:50.895731 systemd[1]: session-4.scope: Deactivated successfully.
May 8 00:39:50.897500 systemd-logind[1468]: Removed session 4.
May 8 00:39:50.957969 systemd[1]: Started sshd@4-172.237.145.87:22-139.178.89.65:51662.service - OpenSSH per-connection server daemon (139.178.89.65:51662).
May 8 00:39:51.308113 sshd[1660]: Accepted publickey for core from 139.178.89.65 port 51662 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA
May 8 00:39:51.309313 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:39:51.313303 systemd-logind[1468]: New session 5 of user core.
May 8 00:39:51.322898 systemd[1]: Started session-5.scope - Session 5 of User core.
May 8 00:39:51.517264 sudo[1663]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 8 00:39:51.517599 sudo[1663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:39:51.535049 sudo[1663]: pam_unix(sudo:session): session closed for user root
May 8 00:39:51.589961 sshd[1662]: Connection closed by 139.178.89.65 port 51662
May 8 00:39:51.591148 sshd-session[1660]: pam_unix(sshd:session): session closed for user core
May 8 00:39:51.594153 systemd[1]: sshd@4-172.237.145.87:22-139.178.89.65:51662.service: Deactivated successfully.
May 8 00:39:51.596342 systemd[1]: session-5.scope: Deactivated successfully.
May 8 00:39:51.597841 systemd-logind[1468]: Session 5 logged out. Waiting for processes to exit.
May 8 00:39:51.599257 systemd-logind[1468]: Removed session 5.
May 8 00:39:51.655992 systemd[1]: Started sshd@5-172.237.145.87:22-139.178.89.65:51674.service - OpenSSH per-connection server daemon (139.178.89.65:51674).
May 8 00:39:51.985869 sshd[1669]: Accepted publickey for core from 139.178.89.65 port 51674 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA
May 8 00:39:51.987175 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:39:51.991261 systemd-logind[1468]: New session 6 of user core.
May 8 00:39:51.997904 systemd[1]: Started session-6.scope - Session 6 of User core.
May 8 00:39:52.184580 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 8 00:39:52.184970 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:39:52.188535 sudo[1673]: pam_unix(sudo:session): session closed for user root
May 8 00:39:52.193990 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 8 00:39:52.194295 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:39:52.207000 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 8 00:39:52.232965 augenrules[1695]: No rules
May 8 00:39:52.234186 systemd[1]: audit-rules.service: Deactivated successfully.
May 8 00:39:52.234443 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 8 00:39:52.235475 sudo[1672]: pam_unix(sudo:session): session closed for user root
May 8 00:39:52.286807 sshd[1671]: Connection closed by 139.178.89.65 port 51674
May 8 00:39:52.287440 sshd-session[1669]: pam_unix(sshd:session): session closed for user core
May 8 00:39:52.290874 systemd[1]: sshd@5-172.237.145.87:22-139.178.89.65:51674.service: Deactivated successfully.
May 8 00:39:52.292656 systemd[1]: session-6.scope: Deactivated successfully.
May 8 00:39:52.293301 systemd-logind[1468]: Session 6 logged out. Waiting for processes to exit.
May 8 00:39:52.294107 systemd-logind[1468]: Removed session 6.
May 8 00:39:52.357007 systemd[1]: Started sshd@6-172.237.145.87:22-139.178.89.65:51680.service - OpenSSH per-connection server daemon (139.178.89.65:51680).
May 8 00:39:52.692518 sshd[1704]: Accepted publickey for core from 139.178.89.65 port 51680 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA
May 8 00:39:52.694003 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:39:52.698381 systemd-logind[1468]: New session 7 of user core.
May 8 00:39:52.705919 systemd[1]: Started session-7.scope - Session 7 of User core.
May 8 00:39:52.896152 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 8 00:39:52.896484 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:39:53.155988 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 8 00:39:53.156136 (dockerd)[1723]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 8 00:39:53.440292 dockerd[1723]: time="2025-05-08T00:39:53.440144741Z" level=info msg="Starting up"
May 8 00:39:53.525318 dockerd[1723]: time="2025-05-08T00:39:53.525280561Z" level=info msg="Loading containers: start."
May 8 00:39:53.690801 kernel: Initializing XFRM netlink socket
May 8 00:39:53.770623 systemd-networkd[1407]: docker0: Link UP
May 8 00:39:53.798122 dockerd[1723]: time="2025-05-08T00:39:53.798076955Z" level=info msg="Loading containers: done."
May 8 00:39:53.810097 dockerd[1723]: time="2025-05-08T00:39:53.809654793Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 8 00:39:53.810097 dockerd[1723]: time="2025-05-08T00:39:53.809737283Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
May 8 00:39:53.810097 dockerd[1723]: time="2025-05-08T00:39:53.809875968Z" level=info msg="Daemon has completed initialization"
May 8 00:39:53.837427 dockerd[1723]: time="2025-05-08T00:39:53.837392636Z" level=info msg="API listen on /run/docker.sock"
May 8 00:39:53.837518 systemd[1]: Started docker.service - Docker Application Container Engine.
May 8 00:39:54.510651 containerd[1491]: time="2025-05-08T00:39:54.510600230Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\""
May 8 00:39:55.311559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3984027269.mount: Deactivated successfully.
May 8 00:39:56.477112 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 8 00:39:56.485622 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:39:56.656975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:39:56.660712 (kubelet)[1971]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:39:56.706097 kubelet[1971]: E0508 00:39:56.706055 1971 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:39:56.712225 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:39:56.712415 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:39:56.712838 systemd[1]: kubelet.service: Consumed 172ms CPU time, 104.4M memory peak. May 8 00:39:56.897000 containerd[1491]: time="2025-05-08T00:39:56.896624284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:56.897998 containerd[1491]: time="2025-05-08T00:39:56.897731516Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879" May 8 00:39:56.898948 containerd[1491]: time="2025-05-08T00:39:56.898559836Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:56.901053 containerd[1491]: time="2025-05-08T00:39:56.901020730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:56.902049 containerd[1491]: time="2025-05-08T00:39:56.902026895Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id 
\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 2.391391726s" May 8 00:39:56.902127 containerd[1491]: time="2025-05-08T00:39:56.902111811Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 8 00:39:56.902785 containerd[1491]: time="2025-05-08T00:39:56.902735841Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 8 00:39:58.660836 containerd[1491]: time="2025-05-08T00:39:58.660715277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:58.662364 containerd[1491]: time="2025-05-08T00:39:58.662201022Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589" May 8 00:39:58.663238 containerd[1491]: time="2025-05-08T00:39:58.662879787Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:58.665648 containerd[1491]: time="2025-05-08T00:39:58.665611299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:58.666665 containerd[1491]: time="2025-05-08T00:39:58.666628369Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 1.763820057s" May 8 00:39:58.666709 containerd[1491]: time="2025-05-08T00:39:58.666666425Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 8 00:39:58.667405 containerd[1491]: time="2025-05-08T00:39:58.667372620Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 8 00:40:00.097159 containerd[1491]: time="2025-05-08T00:40:00.097100292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:00.098126 containerd[1491]: time="2025-05-08T00:40:00.098093951Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938" May 8 00:40:00.098590 containerd[1491]: time="2025-05-08T00:40:00.098550923Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:00.101714 containerd[1491]: time="2025-05-08T00:40:00.100707848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:00.101714 containerd[1491]: time="2025-05-08T00:40:00.101588531Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.434027202s" May 8 00:40:00.101714 containerd[1491]: 
time="2025-05-08T00:40:00.101610897Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 8 00:40:00.102252 containerd[1491]: time="2025-05-08T00:40:00.102225818Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 8 00:40:01.308174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1230779024.mount: Deactivated successfully. May 8 00:40:01.634263 containerd[1491]: time="2025-05-08T00:40:01.633597782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:01.634263 containerd[1491]: time="2025-05-08T00:40:01.634173415Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856" May 8 00:40:01.634891 containerd[1491]: time="2025-05-08T00:40:01.634869441Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:01.636831 containerd[1491]: time="2025-05-08T00:40:01.636807019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:01.637613 containerd[1491]: time="2025-05-08T00:40:01.637581570Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 1.535325383s" May 8 00:40:01.637655 containerd[1491]: time="2025-05-08T00:40:01.637614135Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" 
returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 8 00:40:01.638585 containerd[1491]: time="2025-05-08T00:40:01.638566265Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 8 00:40:02.397886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3983832415.mount: Deactivated successfully. May 8 00:40:03.122266 containerd[1491]: time="2025-05-08T00:40:03.122166479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:03.123659 containerd[1491]: time="2025-05-08T00:40:03.123621778Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 8 00:40:03.125775 containerd[1491]: time="2025-05-08T00:40:03.124576530Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:03.129913 containerd[1491]: time="2025-05-08T00:40:03.129883877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:03.130907 containerd[1491]: time="2025-05-08T00:40:03.130883593Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.49229203s" May 8 00:40:03.130907 containerd[1491]: time="2025-05-08T00:40:03.130911973Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference 
\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 8 00:40:03.132008 containerd[1491]: time="2025-05-08T00:40:03.131985706Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 8 00:40:03.883275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3386399990.mount: Deactivated successfully. May 8 00:40:03.888194 containerd[1491]: time="2025-05-08T00:40:03.888130633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:03.888952 containerd[1491]: time="2025-05-08T00:40:03.888911432Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 8 00:40:03.889435 containerd[1491]: time="2025-05-08T00:40:03.889372762Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:03.891284 containerd[1491]: time="2025-05-08T00:40:03.891243746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:03.892689 containerd[1491]: time="2025-05-08T00:40:03.892055443Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 760.04218ms" May 8 00:40:03.892689 containerd[1491]: time="2025-05-08T00:40:03.892092552Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 8 00:40:03.893008 containerd[1491]: time="2025-05-08T00:40:03.892973961Z" level=info 
msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 8 00:40:04.712361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount766698057.mount: Deactivated successfully. May 8 00:40:06.325224 containerd[1491]: time="2025-05-08T00:40:06.325135344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:06.326284 containerd[1491]: time="2025-05-08T00:40:06.326241149Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 8 00:40:06.327307 containerd[1491]: time="2025-05-08T00:40:06.326949192Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:06.330457 containerd[1491]: time="2025-05-08T00:40:06.330425323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:06.331890 containerd[1491]: time="2025-05-08T00:40:06.331855490Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.4388477s" May 8 00:40:06.331974 containerd[1491]: time="2025-05-08T00:40:06.331957078Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 8 00:40:06.727105 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 8 00:40:06.734940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
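The etcd pull entry above reports its own figures: 57,680,541 bytes in 2.4388477s. A back-of-envelope throughput from just those two numbers (time rounded to whole milliseconds for integer arithmetic):

```shell
# Figures taken from the "Pulled image registry.k8s.io/etcd:3.5.16-0" entry above.
bytes=57680541
ms=2439                                   # 2.4388477 s, in milliseconds
kib_per_s=$(( bytes * 1000 / ms / 1024 ))
echo "~${kib_per_s} KiB/s"                # roughly 22-23 MiB/s
```

Nothing in the log states link speed; this is only what the pull itself achieved end to end, registry latency included.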
May 8 00:40:06.888951 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:40:06.893506 (kubelet)[2133]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:40:06.942048 kubelet[2133]: E0508 00:40:06.940986 2133 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:40:06.945640 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:40:06.945877 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:40:06.946557 systemd[1]: kubelet.service: Consumed 176ms CPU time, 105.7M memory peak. May 8 00:40:08.171935 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:40:08.172090 systemd[1]: kubelet.service: Consumed 176ms CPU time, 105.7M memory peak. May 8 00:40:08.180992 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:40:08.211148 systemd[1]: Reload requested from client PID 2147 ('systemctl') (unit session-7.scope)... May 8 00:40:08.211426 systemd[1]: Reloading... May 8 00:40:08.355591 zram_generator::config[2189]: No configuration found. May 8 00:40:08.465834 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:40:08.559093 systemd[1]: Reloading finished in 347 ms. May 8 00:40:08.614726 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
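The kubelet exit above ("open /var/lib/kubelet/config.yaml: no such file or directory", status=1/FAILURE) is the expected state before `kubeadm init` or `kubeadm join` has run on the node: kubeadm is what writes that config file, and systemd keeps scheduling restarts until it appears. A minimal sketch of the same existence check (the path comes from the log; the helper name is mine):

```shell
# Succeed only when the kubelet config file that kubeadm writes is present.
check_kubelet_config() {
  [ -f "$1" ]
}

if check_kubelet_config /var/lib/kubelet/config.yaml; then
  echo "config present: kubelet can start"
else
  echo "config missing: run kubeadm init/join first"
fi
```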
May 8 00:40:08.620204 (kubelet)[2237]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:40:08.623175 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:40:08.624678 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:40:08.625218 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:40:08.625268 systemd[1]: kubelet.service: Consumed 131ms CPU time, 92.8M memory peak. May 8 00:40:08.632144 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:40:08.776979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:40:08.781817 (kubelet)[2249]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:40:08.828308 kubelet[2249]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:40:08.830299 kubelet[2249]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 8 00:40:08.830299 kubelet[2249]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
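The three "Flag ... has been deprecated" warnings above all point the same way: those settings belong in the file passed via the kubelet's `--config` flag (and `--pod-infra-container-image` is slated for removal in 1.35, with the sandbox image coming from the CRI instead). A small sketch that spots such flags on a kubelet command line; the command line shown is hypothetical, only the flag names come from the log:

```shell
deprecated='--container-runtime-endpoint --pod-infra-container-image --volume-plugin-dir'

find_deprecated() {
  # Print any of the deprecated kubelet flags present in the given command line.
  for flag in $deprecated; do
    case " $1 " in
      *" $flag"*) echo "$flag" ;;
    esac
  done
}

# Hypothetical invocation, for illustration only.
find_deprecated "kubelet --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
# prints "--container-runtime-endpoint"
```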
May 8 00:40:08.830299 kubelet[2249]: I0508 00:40:08.828619 2249 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:40:09.161208 kubelet[2249]: I0508 00:40:09.161087 2249 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 8 00:40:09.161208 kubelet[2249]: I0508 00:40:09.161123 2249 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:40:09.161627 kubelet[2249]: I0508 00:40:09.161379 2249 server.go:954] "Client rotation is on, will bootstrap in background" May 8 00:40:09.185148 kubelet[2249]: E0508 00:40:09.185095 2249 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.237.145.87:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.237.145.87:6443: connect: connection refused" logger="UnhandledError" May 8 00:40:09.186209 kubelet[2249]: I0508 00:40:09.186042 2249 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:40:09.201211 kubelet[2249]: E0508 00:40:09.201161 2249 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:40:09.201211 kubelet[2249]: I0508 00:40:09.201197 2249 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:40:09.205038 kubelet[2249]: I0508 00:40:09.205006 2249 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:40:09.205281 kubelet[2249]: I0508 00:40:09.205246 2249 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:40:09.205432 kubelet[2249]: I0508 00:40:09.205276 2249 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-237-145-87","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 00:40:09.205524 kubelet[2249]: I0508 00:40:09.205432 2249 topology_manager.go:138] "Creating topology manager with none 
policy" May 8 00:40:09.205524 kubelet[2249]: I0508 00:40:09.205442 2249 container_manager_linux.go:304] "Creating device plugin manager" May 8 00:40:09.205577 kubelet[2249]: I0508 00:40:09.205546 2249 state_mem.go:36] "Initialized new in-memory state store" May 8 00:40:09.208584 kubelet[2249]: I0508 00:40:09.208562 2249 kubelet.go:446] "Attempting to sync node with API server" May 8 00:40:09.208584 kubelet[2249]: I0508 00:40:09.208581 2249 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:40:09.208663 kubelet[2249]: I0508 00:40:09.208604 2249 kubelet.go:352] "Adding apiserver pod source" May 8 00:40:09.208663 kubelet[2249]: I0508 00:40:09.208614 2249 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:40:09.216145 kubelet[2249]: I0508 00:40:09.215915 2249 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 00:40:09.216516 kubelet[2249]: W0508 00:40:09.216283 2249 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.237.145.87:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-145-87&limit=500&resourceVersion=0": dial tcp 172.237.145.87:6443: connect: connection refused May 8 00:40:09.216516 kubelet[2249]: E0508 00:40:09.216344 2249 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.237.145.87:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-145-87&limit=500&resourceVersion=0\": dial tcp 172.237.145.87:6443: connect: connection refused" logger="UnhandledError" May 8 00:40:09.216516 kubelet[2249]: W0508 00:40:09.216410 2249 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.237.145.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.237.145.87:6443: connect: 
connection refused May 8 00:40:09.216516 kubelet[2249]: E0508 00:40:09.216434 2249 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.237.145.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.237.145.87:6443: connect: connection refused" logger="UnhandledError" May 8 00:40:09.216784 kubelet[2249]: I0508 00:40:09.216753 2249 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:40:09.217611 kubelet[2249]: W0508 00:40:09.217597 2249 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 8 00:40:09.219853 kubelet[2249]: I0508 00:40:09.219838 2249 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 8 00:40:09.220523 kubelet[2249]: I0508 00:40:09.219935 2249 server.go:1287] "Started kubelet" May 8 00:40:09.221584 kubelet[2249]: I0508 00:40:09.221557 2249 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:40:09.226363 kubelet[2249]: E0508 00:40:09.224900 2249 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.237.145.87:6443/api/v1/namespaces/default/events\": dial tcp 172.237.145.87:6443: connect: connection refused" event="&Event{ObjectMeta:{172-237-145-87.183d6663c73a59e9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-237-145-87,UID:172-237-145-87,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-237-145-87,},FirstTimestamp:2025-05-08 00:40:09.219906025 +0000 UTC m=+0.434185717,LastTimestamp:2025-05-08 00:40:09.219906025 +0000 UTC m=+0.434185717,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-237-145-87,}" May 8 00:40:09.227976 kubelet[2249]: I0508 00:40:09.227028 2249 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:40:09.228087 kubelet[2249]: I0508 00:40:09.228074 2249 server.go:490] "Adding debug handlers to kubelet server" May 8 00:40:09.228797 kubelet[2249]: I0508 00:40:09.228740 2249 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:40:09.229045 kubelet[2249]: I0508 00:40:09.229031 2249 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:40:09.229242 kubelet[2249]: I0508 00:40:09.229228 2249 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:40:09.229630 kubelet[2249]: I0508 00:40:09.229601 2249 volume_manager.go:297] "Starting Kubelet Volume Manager" May 8 00:40:09.229865 kubelet[2249]: E0508 00:40:09.229838 2249 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172-237-145-87\" not found" May 8 00:40:09.231835 kubelet[2249]: E0508 00:40:09.231814 2249 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.145.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-145-87?timeout=10s\": dial tcp 172.237.145.87:6443: connect: connection refused" interval="200ms" May 8 00:40:09.233366 kubelet[2249]: I0508 00:40:09.233352 2249 factory.go:221] Registration of the containerd container factory successfully May 8 00:40:09.233429 kubelet[2249]: I0508 00:40:09.233420 2249 factory.go:221] Registration of the systemd container factory successfully May 8 00:40:09.233528 kubelet[2249]: I0508 00:40:09.233513 2249 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:40:09.233704 kubelet[2249]: I0508 00:40:09.233680 2249 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:40:09.233742 kubelet[2249]: I0508 00:40:09.233729 2249 reconciler.go:26] "Reconciler: start to sync state" May 8 00:40:09.238341 kubelet[2249]: W0508 00:40:09.238312 2249 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.237.145.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.237.145.87:6443: connect: connection refused May 8 00:40:09.240786 kubelet[2249]: E0508 00:40:09.238803 2249 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.237.145.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.237.145.87:6443: connect: connection refused" logger="UnhandledError" May 8 00:40:09.243016 kubelet[2249]: I0508 00:40:09.242979 2249 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:40:09.250484 kubelet[2249]: I0508 00:40:09.250447 2249 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:40:09.250484 kubelet[2249]: I0508 00:40:09.250476 2249 status_manager.go:227] "Starting to sync pod status with apiserver" May 8 00:40:09.250551 kubelet[2249]: I0508 00:40:09.250495 2249 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 8 00:40:09.250551 kubelet[2249]: I0508 00:40:09.250503 2249 kubelet.go:2388] "Starting kubelet main sync loop" May 8 00:40:09.250602 kubelet[2249]: E0508 00:40:09.250558 2249 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:40:09.255573 kubelet[2249]: W0508 00:40:09.255543 2249 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.237.145.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.237.145.87:6443: connect: connection refused May 8 00:40:09.256126 kubelet[2249]: E0508 00:40:09.256109 2249 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.237.145.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.237.145.87:6443: connect: connection refused" logger="UnhandledError" May 8 00:40:09.259633 kubelet[2249]: I0508 00:40:09.259605 2249 cpu_manager.go:221] "Starting CPU manager" policy="none" May 8 00:40:09.259775 kubelet[2249]: I0508 00:40:09.259742 2249 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 8 00:40:09.259872 kubelet[2249]: I0508 00:40:09.259861 2249 state_mem.go:36] "Initialized new in-memory state store" May 8 00:40:09.261406 kubelet[2249]: I0508 00:40:09.261391 2249 policy_none.go:49] "None policy: Start" May 8 00:40:09.261479 kubelet[2249]: I0508 00:40:09.261470 2249 memory_manager.go:186] "Starting memorymanager" policy="None" May 8 00:40:09.261548 kubelet[2249]: I0508 00:40:09.261521 2249 state_mem.go:35] "Initializing new in-memory state store" May 8 00:40:09.269129 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 8 00:40:09.277678 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
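The `HardEvictionThresholds` in the container manager config earlier in this log set, among others, `memory.available` below 100Mi (plus percentage thresholds for nodefs/imagefs space and inodes). A toy version of that one rule; the available-memory figure is invented for illustration, only the 100Mi threshold comes from the log:

```shell
threshold_kib=$((100 * 1024))   # the 100Mi memory.available threshold, in KiB
available_kib=524288            # hypothetical: pretend 512Mi is free
if [ "$available_kib" -lt "$threshold_kib" ]; then
  echo "eviction signal: memory.available below 100Mi"
else
  echo "memory.available above the eviction threshold"
fi
```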
May 8 00:40:09.280721 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 8 00:40:09.291552 kubelet[2249]: I0508 00:40:09.291534 2249 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:40:09.291780 kubelet[2249]: I0508 00:40:09.291747 2249 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:40:09.292259 kubelet[2249]: I0508 00:40:09.291946 2249 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:40:09.292472 kubelet[2249]: I0508 00:40:09.292459 2249 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:40:09.293572 kubelet[2249]: E0508 00:40:09.293559 2249 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 8 00:40:09.293657 kubelet[2249]: E0508 00:40:09.293647 2249 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-237-145-87\" not found" May 8 00:40:09.360464 systemd[1]: Created slice kubepods-burstable-poddcb3c4733e395e01f0660265f74ffab4.slice - libcontainer container kubepods-burstable-poddcb3c4733e395e01f0660265f74ffab4.slice. May 8 00:40:09.376378 kubelet[2249]: E0508 00:40:09.376344 2249 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-145-87\" not found" node="172-237-145-87" May 8 00:40:09.380091 systemd[1]: Created slice kubepods-burstable-pod95078e269dfdf08beab881a72cb6d46e.slice - libcontainer container kubepods-burstable-pod95078e269dfdf08beab881a72cb6d46e.slice. 
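The "Failed to ensure lease exists, will retry" errors in this log double their interval on each failure: `interval="200ms"` on the first attempt, then `"400ms"` and `"800ms"` on the retries that follow. The doubling itself is just:

```shell
interval=200   # ms, the first retry interval reported in the log
for _ in 1 2 3; do
  echo "retry in ${interval}ms"
  interval=$((interval * 2))
done
```

which prints the 200/400/800 sequence that the lease controller reports while the API server at 172.237.145.87:6443 is still refusing connections.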
May 8 00:40:09.382277 kubelet[2249]: E0508 00:40:09.382261 2249 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-145-87\" not found" node="172-237-145-87" May 8 00:40:09.384142 systemd[1]: Created slice kubepods-burstable-podd32ff5db7fa4699055a47d01be5fda91.slice - libcontainer container kubepods-burstable-podd32ff5db7fa4699055a47d01be5fda91.slice. May 8 00:40:09.385809 kubelet[2249]: E0508 00:40:09.385789 2249 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-145-87\" not found" node="172-237-145-87" May 8 00:40:09.393942 kubelet[2249]: I0508 00:40:09.393913 2249 kubelet_node_status.go:76] "Attempting to register node" node="172-237-145-87" May 8 00:40:09.394196 kubelet[2249]: E0508 00:40:09.394164 2249 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.237.145.87:6443/api/v1/nodes\": dial tcp 172.237.145.87:6443: connect: connection refused" node="172-237-145-87" May 8 00:40:09.432941 kubelet[2249]: E0508 00:40:09.432848 2249 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.145.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-145-87?timeout=10s\": dial tcp 172.237.145.87:6443: connect: connection refused" interval="400ms" May 8 00:40:09.434036 kubelet[2249]: I0508 00:40:09.434006 2249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dcb3c4733e395e01f0660265f74ffab4-ca-certs\") pod \"kube-apiserver-172-237-145-87\" (UID: \"dcb3c4733e395e01f0660265f74ffab4\") " pod="kube-system/kube-apiserver-172-237-145-87" May 8 00:40:09.434085 kubelet[2249]: I0508 00:40:09.434034 2249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/dcb3c4733e395e01f0660265f74ffab4-k8s-certs\") pod \"kube-apiserver-172-237-145-87\" (UID: \"dcb3c4733e395e01f0660265f74ffab4\") " pod="kube-system/kube-apiserver-172-237-145-87" May 8 00:40:09.434085 kubelet[2249]: I0508 00:40:09.434054 2249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dcb3c4733e395e01f0660265f74ffab4-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-145-87\" (UID: \"dcb3c4733e395e01f0660265f74ffab4\") " pod="kube-system/kube-apiserver-172-237-145-87" May 8 00:40:09.434085 kubelet[2249]: I0508 00:40:09.434072 2249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d32ff5db7fa4699055a47d01be5fda91-kubeconfig\") pod \"kube-controller-manager-172-237-145-87\" (UID: \"d32ff5db7fa4699055a47d01be5fda91\") " pod="kube-system/kube-controller-manager-172-237-145-87" May 8 00:40:09.434163 kubelet[2249]: I0508 00:40:09.434085 2249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d32ff5db7fa4699055a47d01be5fda91-ca-certs\") pod \"kube-controller-manager-172-237-145-87\" (UID: \"d32ff5db7fa4699055a47d01be5fda91\") " pod="kube-system/kube-controller-manager-172-237-145-87" May 8 00:40:09.434163 kubelet[2249]: I0508 00:40:09.434099 2249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d32ff5db7fa4699055a47d01be5fda91-flexvolume-dir\") pod \"kube-controller-manager-172-237-145-87\" (UID: \"d32ff5db7fa4699055a47d01be5fda91\") " pod="kube-system/kube-controller-manager-172-237-145-87" May 8 00:40:09.434163 kubelet[2249]: I0508 00:40:09.434111 2249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d32ff5db7fa4699055a47d01be5fda91-k8s-certs\") pod \"kube-controller-manager-172-237-145-87\" (UID: \"d32ff5db7fa4699055a47d01be5fda91\") " pod="kube-system/kube-controller-manager-172-237-145-87" May 8 00:40:09.434163 kubelet[2249]: I0508 00:40:09.434124 2249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d32ff5db7fa4699055a47d01be5fda91-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-145-87\" (UID: \"d32ff5db7fa4699055a47d01be5fda91\") " pod="kube-system/kube-controller-manager-172-237-145-87" May 8 00:40:09.434163 kubelet[2249]: I0508 00:40:09.434137 2249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95078e269dfdf08beab881a72cb6d46e-kubeconfig\") pod \"kube-scheduler-172-237-145-87\" (UID: \"95078e269dfdf08beab881a72cb6d46e\") " pod="kube-system/kube-scheduler-172-237-145-87" May 8 00:40:09.596416 kubelet[2249]: I0508 00:40:09.596393 2249 kubelet_node_status.go:76] "Attempting to register node" node="172-237-145-87" May 8 00:40:09.596684 kubelet[2249]: E0508 00:40:09.596663 2249 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.237.145.87:6443/api/v1/nodes\": dial tcp 172.237.145.87:6443: connect: connection refused" node="172-237-145-87" May 8 00:40:09.677422 kubelet[2249]: E0508 00:40:09.677402 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:09.677919 containerd[1491]: time="2025-05-08T00:40:09.677887804Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-172-237-145-87,Uid:dcb3c4733e395e01f0660265f74ffab4,Namespace:kube-system,Attempt:0,}" May 8 00:40:09.683181 kubelet[2249]: E0508 00:40:09.683101 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:09.683562 containerd[1491]: time="2025-05-08T00:40:09.683381167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-237-145-87,Uid:95078e269dfdf08beab881a72cb6d46e,Namespace:kube-system,Attempt:0,}" May 8 00:40:09.686875 kubelet[2249]: E0508 00:40:09.686854 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:09.687112 containerd[1491]: time="2025-05-08T00:40:09.687092126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-237-145-87,Uid:d32ff5db7fa4699055a47d01be5fda91,Namespace:kube-system,Attempt:0,}" May 8 00:40:09.833934 kubelet[2249]: E0508 00:40:09.833875 2249 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.145.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-145-87?timeout=10s\": dial tcp 172.237.145.87:6443: connect: connection refused" interval="800ms" May 8 00:40:09.999014 kubelet[2249]: I0508 00:40:09.998996 2249 kubelet_node_status.go:76] "Attempting to register node" node="172-237-145-87" May 8 00:40:09.999283 kubelet[2249]: E0508 00:40:09.999231 2249 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.237.145.87:6443/api/v1/nodes\": dial tcp 172.237.145.87:6443: connect: connection refused" node="172-237-145-87" May 8 00:40:10.105482 kubelet[2249]: W0508 00:40:10.105401 2249 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to 
list *v1.Node: Get "https://172.237.145.87:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-145-87&limit=500&resourceVersion=0": dial tcp 172.237.145.87:6443: connect: connection refused
May 8 00:40:10.105555 kubelet[2249]: E0508 00:40:10.105486 2249 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.237.145.87:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-145-87&limit=500&resourceVersion=0\": dial tcp 172.237.145.87:6443: connect: connection refused" logger="UnhandledError"
May 8 00:40:10.220066 kubelet[2249]: W0508 00:40:10.219479 2249 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.237.145.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.237.145.87:6443: connect: connection refused
May 8 00:40:10.220066 kubelet[2249]: E0508 00:40:10.219539 2249 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.237.145.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.237.145.87:6443: connect: connection refused" logger="UnhandledError"
May 8 00:40:10.237134 kubelet[2249]: W0508 00:40:10.237078 2249 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.237.145.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.237.145.87:6443: connect: connection refused
May 8 00:40:10.237134 kubelet[2249]: E0508 00:40:10.237108 2249 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.237.145.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.237.145.87:6443: connect: connection refused" logger="UnhandledError"
May 8 00:40:10.360234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3886061496.mount: Deactivated successfully.
May 8 00:40:10.363834 containerd[1491]: time="2025-05-08T00:40:10.363796017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:40:10.364831 containerd[1491]: time="2025-05-08T00:40:10.364739960Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:40:10.365630 containerd[1491]: time="2025-05-08T00:40:10.365599539Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
May 8 00:40:10.366029 containerd[1491]: time="2025-05-08T00:40:10.366001703Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 8 00:40:10.367218 containerd[1491]: time="2025-05-08T00:40:10.367161648Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:40:10.368188 containerd[1491]: time="2025-05-08T00:40:10.368122984Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 8 00:40:10.370209 containerd[1491]: time="2025-05-08T00:40:10.370182397Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:40:10.371858 containerd[1491]: time="2025-05-08T00:40:10.370926540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:40:10.371858 containerd[1491]: time="2025-05-08T00:40:10.371662114Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 684.443438ms"
May 8 00:40:10.372917 containerd[1491]: time="2025-05-08T00:40:10.372896492Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 689.461168ms"
May 8 00:40:10.373858 containerd[1491]: time="2025-05-08T00:40:10.373815425Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 695.84197ms"
May 8 00:40:10.476688 containerd[1491]: time="2025-05-08T00:40:10.476484893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:40:10.476688 containerd[1491]: time="2025-05-08T00:40:10.476533624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:40:10.476688 containerd[1491]: time="2025-05-08T00:40:10.476546800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:40:10.476688 containerd[1491]: time="2025-05-08T00:40:10.476616127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:40:10.483980 containerd[1491]: time="2025-05-08T00:40:10.483717354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:40:10.485832 containerd[1491]: time="2025-05-08T00:40:10.485581393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:40:10.487334 containerd[1491]: time="2025-05-08T00:40:10.485377066Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:40:10.487334 containerd[1491]: time="2025-05-08T00:40:10.486917259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:40:10.487334 containerd[1491]: time="2025-05-08T00:40:10.486983722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:40:10.487500 containerd[1491]: time="2025-05-08T00:40:10.487046631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:40:10.487500 containerd[1491]: time="2025-05-08T00:40:10.487137895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:40:10.487811 containerd[1491]: time="2025-05-08T00:40:10.487630693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:40:10.500939 systemd[1]: Started cri-containerd-7cf826857bc13faf923e85cab4cd79b7e3fc72d1a6f2abad8b619196315da22d.scope - libcontainer container 7cf826857bc13faf923e85cab4cd79b7e3fc72d1a6f2abad8b619196315da22d.
May 8 00:40:10.526871 systemd[1]: Started cri-containerd-9a647d634ed1d277960717bf59f37276a82fc4e5be332156a55ade1460fc40ef.scope - libcontainer container 9a647d634ed1d277960717bf59f37276a82fc4e5be332156a55ade1460fc40ef.
May 8 00:40:10.530160 systemd[1]: Started cri-containerd-3e5508a71ce96c1529a2259865ae38a7ffedfcc0a65d5ee94c83b30313cf66d4.scope - libcontainer container 3e5508a71ce96c1529a2259865ae38a7ffedfcc0a65d5ee94c83b30313cf66d4.
May 8 00:40:10.573657 containerd[1491]: time="2025-05-08T00:40:10.573602826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-237-145-87,Uid:d32ff5db7fa4699055a47d01be5fda91,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cf826857bc13faf923e85cab4cd79b7e3fc72d1a6f2abad8b619196315da22d\""
May 8 00:40:10.575563 kubelet[2249]: E0508 00:40:10.575531 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:40:10.581219 containerd[1491]: time="2025-05-08T00:40:10.581057478Z" level=info msg="CreateContainer within sandbox \"7cf826857bc13faf923e85cab4cd79b7e3fc72d1a6f2abad8b619196315da22d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 8 00:40:10.595087 containerd[1491]: time="2025-05-08T00:40:10.594552946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-237-145-87,Uid:95078e269dfdf08beab881a72cb6d46e,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a647d634ed1d277960717bf59f37276a82fc4e5be332156a55ade1460fc40ef\""
May 8 00:40:10.595830 kubelet[2249]: E0508 00:40:10.595738 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:40:10.598426 containerd[1491]: time="2025-05-08T00:40:10.598405189Z" level=info msg="CreateContainer within sandbox \"9a647d634ed1d277960717bf59f37276a82fc4e5be332156a55ade1460fc40ef\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 8 00:40:10.601252 containerd[1491]: time="2025-05-08T00:40:10.600999893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-237-145-87,Uid:dcb3c4733e395e01f0660265f74ffab4,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e5508a71ce96c1529a2259865ae38a7ffedfcc0a65d5ee94c83b30313cf66d4\""
May 8 00:40:10.602231 kubelet[2249]: E0508 00:40:10.602215 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:40:10.603671 containerd[1491]: time="2025-05-08T00:40:10.603645181Z" level=info msg="CreateContainer within sandbox \"3e5508a71ce96c1529a2259865ae38a7ffedfcc0a65d5ee94c83b30313cf66d4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 8 00:40:10.605210 containerd[1491]: time="2025-05-08T00:40:10.605105233Z" level=info msg="CreateContainer within sandbox \"7cf826857bc13faf923e85cab4cd79b7e3fc72d1a6f2abad8b619196315da22d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"42572273f6fc7a542b633a62d01189409f4982a2f0cc9ed589c705db085b6372\""
May 8 00:40:10.605705 containerd[1491]: time="2025-05-08T00:40:10.605674697Z" level=info msg="StartContainer for \"42572273f6fc7a542b633a62d01189409f4982a2f0cc9ed589c705db085b6372\""
May 8 00:40:10.613753 containerd[1491]: time="2025-05-08T00:40:10.613619863Z" level=info msg="CreateContainer within sandbox \"9a647d634ed1d277960717bf59f37276a82fc4e5be332156a55ade1460fc40ef\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"88bee72183829199a08c9491f76d147728ae27e1be85415c7da3367bf762f67b\""
May 8 00:40:10.615476 containerd[1491]: time="2025-05-08T00:40:10.614556479Z" level=info msg="StartContainer for \"88bee72183829199a08c9491f76d147728ae27e1be85415c7da3367bf762f67b\""
May 8 00:40:10.622933 containerd[1491]: time="2025-05-08T00:40:10.622891604Z" level=info msg="CreateContainer within sandbox \"3e5508a71ce96c1529a2259865ae38a7ffedfcc0a65d5ee94c83b30313cf66d4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6d77720763bcc36acbf5a878bf2c8549020da29a1f2c461fea862168e8450ef7\""
May 8 00:40:10.623729 containerd[1491]: time="2025-05-08T00:40:10.623691167Z" level=info msg="StartContainer for \"6d77720763bcc36acbf5a878bf2c8549020da29a1f2c461fea862168e8450ef7\""
May 8 00:40:10.634507 kubelet[2249]: E0508 00:40:10.634473 2249 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.145.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-145-87?timeout=10s\": dial tcp 172.237.145.87:6443: connect: connection refused" interval="1.6s"
May 8 00:40:10.656105 systemd[1]: Started cri-containerd-6d77720763bcc36acbf5a878bf2c8549020da29a1f2c461fea862168e8450ef7.scope - libcontainer container 6d77720763bcc36acbf5a878bf2c8549020da29a1f2c461fea862168e8450ef7.
May 8 00:40:10.666689 systemd[1]: Started cri-containerd-42572273f6fc7a542b633a62d01189409f4982a2f0cc9ed589c705db085b6372.scope - libcontainer container 42572273f6fc7a542b633a62d01189409f4982a2f0cc9ed589c705db085b6372.
May 8 00:40:10.674074 systemd[1]: Started cri-containerd-88bee72183829199a08c9491f76d147728ae27e1be85415c7da3367bf762f67b.scope - libcontainer container 88bee72183829199a08c9491f76d147728ae27e1be85415c7da3367bf762f67b.
May 8 00:40:10.712866 containerd[1491]: time="2025-05-08T00:40:10.712825186Z" level=info msg="StartContainer for \"6d77720763bcc36acbf5a878bf2c8549020da29a1f2c461fea862168e8450ef7\" returns successfully"
May 8 00:40:10.744795 containerd[1491]: time="2025-05-08T00:40:10.743740356Z" level=info msg="StartContainer for \"42572273f6fc7a542b633a62d01189409f4982a2f0cc9ed589c705db085b6372\" returns successfully"
May 8 00:40:10.808786 kubelet[2249]: I0508 00:40:10.804824 2249 kubelet_node_status.go:76] "Attempting to register node" node="172-237-145-87"
May 8 00:40:10.813529 containerd[1491]: time="2025-05-08T00:40:10.813494285Z" level=info msg="StartContainer for \"88bee72183829199a08c9491f76d147728ae27e1be85415c7da3367bf762f67b\" returns successfully"
May 8 00:40:11.269780 kubelet[2249]: E0508 00:40:11.268094 2249 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-145-87\" not found" node="172-237-145-87"
May 8 00:40:11.269780 kubelet[2249]: E0508 00:40:11.268209 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:40:11.271364 kubelet[2249]: E0508 00:40:11.271341 2249 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-145-87\" not found" node="172-237-145-87"
May 8 00:40:11.271443 kubelet[2249]: E0508 00:40:11.271421 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:40:11.273328 kubelet[2249]: E0508 00:40:11.273306 2249 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-145-87\" not found" node="172-237-145-87"
May 8 00:40:11.273407 kubelet[2249]: E0508 00:40:11.273385 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:40:11.848670 kubelet[2249]: I0508 00:40:11.848529 2249 kubelet_node_status.go:79] "Successfully registered node" node="172-237-145-87"
May 8 00:40:11.848670 kubelet[2249]: E0508 00:40:11.848557 2249 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"172-237-145-87\": node \"172-237-145-87\" not found"
May 8 00:40:11.855055 kubelet[2249]: E0508 00:40:11.855022 2249 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172-237-145-87\" not found"
May 8 00:40:11.955700 kubelet[2249]: E0508 00:40:11.955653 2249 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172-237-145-87\" not found"
May 8 00:40:12.056591 kubelet[2249]: E0508 00:40:12.056545 2249 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172-237-145-87\" not found"
May 8 00:40:12.157331 kubelet[2249]: E0508 00:40:12.157217 2249 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172-237-145-87\" not found"
May 8 00:40:12.258253 kubelet[2249]: E0508 00:40:12.258217 2249 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172-237-145-87\" not found"
May 8 00:40:12.275067 kubelet[2249]: E0508 00:40:12.274871 2249 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-145-87\" not found" node="172-237-145-87"
May 8 00:40:12.275067 kubelet[2249]: E0508 00:40:12.274983 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:40:12.275419 kubelet[2249]: E0508 00:40:12.275246 2249 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-145-87\" not found" node="172-237-145-87"
May 8 00:40:12.275452 kubelet[2249]: E0508 00:40:12.275421 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:40:12.358345 kubelet[2249]: E0508 00:40:12.358300 2249 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172-237-145-87\" not found"
May 8 00:40:12.459069 kubelet[2249]: E0508 00:40:12.458926 2249 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172-237-145-87\" not found"
May 8 00:40:12.530176 kubelet[2249]: I0508 00:40:12.530140 2249 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-145-87"
May 8 00:40:12.534308 kubelet[2249]: E0508 00:40:12.534271 2249 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-237-145-87\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-237-145-87"
May 8 00:40:12.534308 kubelet[2249]: I0508 00:40:12.534291 2249 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-237-145-87"
May 8 00:40:12.535469 kubelet[2249]: E0508 00:40:12.535444 2249 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-237-145-87\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-237-145-87"
May 8 00:40:12.535469 kubelet[2249]: I0508 00:40:12.535463 2249 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-145-87"
May 8 00:40:12.536430 kubelet[2249]: E0508 00:40:12.536405 2249 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-237-145-87\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-237-145-87"
May 8 00:40:13.212651 kubelet[2249]: I0508 00:40:13.212613 2249 apiserver.go:52] "Watching apiserver"
May 8 00:40:13.234749 kubelet[2249]: I0508 00:40:13.234707 2249 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 8 00:40:13.728559 systemd[1]: Reload requested from client PID 2522 ('systemctl') (unit session-7.scope)...
May 8 00:40:13.728575 systemd[1]: Reloading...
May 8 00:40:13.846782 zram_generator::config[2573]: No configuration found.
May 8 00:40:13.957285 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:40:14.060959 systemd[1]: Reloading finished in 332 ms.
May 8 00:40:14.077280 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
May 8 00:40:14.094949 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:40:14.117169 systemd[1]: kubelet.service: Deactivated successfully.
May 8 00:40:14.117448 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:40:14.117488 systemd[1]: kubelet.service: Consumed 824ms CPU time, 126.9M memory peak.
May 8 00:40:14.122978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:40:14.286424 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:40:14.292107 (kubelet)[2621]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 8 00:40:14.337647 kubelet[2621]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 00:40:14.337647 kubelet[2621]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 8 00:40:14.337647 kubelet[2621]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 00:40:14.338785 kubelet[2621]: I0508 00:40:14.338146 2621 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 8 00:40:14.346812 kubelet[2621]: I0508 00:40:14.346781 2621 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 8 00:40:14.346812 kubelet[2621]: I0508 00:40:14.346803 2621 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 8 00:40:14.347007 kubelet[2621]: I0508 00:40:14.346982 2621 server.go:954] "Client rotation is on, will bootstrap in background"
May 8 00:40:14.348071 kubelet[2621]: I0508 00:40:14.348047 2621 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 8 00:40:14.350203 kubelet[2621]: I0508 00:40:14.349905 2621 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 8 00:40:14.352662 kubelet[2621]: E0508 00:40:14.352603 2621 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 8 00:40:14.352742 kubelet[2621]: I0508 00:40:14.352729 2621 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 8 00:40:14.356319 kubelet[2621]: I0508 00:40:14.356305 2621 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 8 00:40:14.356600 kubelet[2621]: I0508 00:40:14.356574 2621 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 8 00:40:14.357376 kubelet[2621]: I0508 00:40:14.356642 2621 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-237-145-87","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 8 00:40:14.357376 kubelet[2621]: I0508 00:40:14.356809 2621 topology_manager.go:138] "Creating topology manager with none policy"
May 8 00:40:14.357376 kubelet[2621]: I0508 00:40:14.356817 2621 container_manager_linux.go:304] "Creating device plugin manager"
May 8 00:40:14.357376 kubelet[2621]: I0508 00:40:14.356852 2621 state_mem.go:36] "Initialized new in-memory state store"
May 8 00:40:14.357376 kubelet[2621]: I0508 00:40:14.357001 2621 kubelet.go:446] "Attempting to sync node with API server"
May 8 00:40:14.357561 kubelet[2621]: I0508 00:40:14.357021 2621 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 8 00:40:14.357561 kubelet[2621]: I0508 00:40:14.357037 2621 kubelet.go:352] "Adding apiserver pod source"
May 8 00:40:14.357561 kubelet[2621]: I0508 00:40:14.357046 2621 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 8 00:40:14.361778 kubelet[2621]: I0508 00:40:14.361013 2621 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 8 00:40:14.361778 kubelet[2621]: I0508 00:40:14.361339 2621 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 8 00:40:14.362472 kubelet[2621]: I0508 00:40:14.362459 2621 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 8 00:40:14.362542 kubelet[2621]: I0508 00:40:14.362532 2621 server.go:1287] "Started kubelet"
May 8 00:40:14.369087 kubelet[2621]: I0508 00:40:14.368821 2621 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 8 00:40:14.369860 kubelet[2621]: I0508 00:40:14.369790 2621 server.go:490] "Adding debug handlers to kubelet server"
May 8 00:40:14.371523 kubelet[2621]: I0508 00:40:14.371455 2621 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 8 00:40:14.372277 kubelet[2621]: I0508 00:40:14.372222 2621 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 8 00:40:14.373339 kubelet[2621]: I0508 00:40:14.372425 2621 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 8 00:40:14.382597 kubelet[2621]: I0508 00:40:14.382564 2621 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 8 00:40:14.382723 kubelet[2621]: I0508 00:40:14.382698 2621 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 8 00:40:14.385165 kubelet[2621]: I0508 00:40:14.385127 2621 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 8 00:40:14.385258 kubelet[2621]: I0508 00:40:14.385233 2621 reconciler.go:26] "Reconciler: start to sync state"
May 8 00:40:14.386123 kubelet[2621]: E0508 00:40:14.385676 2621 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 8 00:40:14.388867 kubelet[2621]: I0508 00:40:14.388699 2621 factory.go:221] Registration of the systemd container factory successfully
May 8 00:40:14.388867 kubelet[2621]: I0508 00:40:14.388812 2621 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 8 00:40:14.390824 kubelet[2621]: I0508 00:40:14.390239 2621 factory.go:221] Registration of the containerd container factory successfully
May 8 00:40:14.391750 kubelet[2621]: I0508 00:40:14.391727 2621 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 8 00:40:14.392920 kubelet[2621]: I0508 00:40:14.392906 2621 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 8 00:40:14.392994 kubelet[2621]: I0508 00:40:14.392984 2621 status_manager.go:227] "Starting to sync pod status with apiserver"
May 8 00:40:14.393060 kubelet[2621]: I0508 00:40:14.393047 2621 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 8 00:40:14.393105 kubelet[2621]: I0508 00:40:14.393096 2621 kubelet.go:2388] "Starting kubelet main sync loop"
May 8 00:40:14.393197 kubelet[2621]: E0508 00:40:14.393180 2621 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 8 00:40:14.437088 kubelet[2621]: I0508 00:40:14.437066 2621 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 8 00:40:14.437405 kubelet[2621]: I0508 00:40:14.437212 2621 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 8 00:40:14.437405 kubelet[2621]: I0508 00:40:14.437231 2621 state_mem.go:36] "Initialized new in-memory state store"
May 8 00:40:14.437405 kubelet[2621]: I0508 00:40:14.437348 2621 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 8 00:40:14.437405 kubelet[2621]: I0508 00:40:14.437358 2621 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 8 00:40:14.437405 kubelet[2621]: I0508 00:40:14.437374 2621 policy_none.go:49] "None policy: Start"
May 8 00:40:14.437646 kubelet[2621]: I0508 00:40:14.437574 2621 memory_manager.go:186] "Starting memorymanager" policy="None"
May 8 00:40:14.437646 kubelet[2621]: I0508 00:40:14.437592 2621 state_mem.go:35] "Initializing new in-memory state store"
May 8 00:40:14.438504 kubelet[2621]: I0508 00:40:14.437989 2621 state_mem.go:75] "Updated machine memory state"
May 8 00:40:14.442475 kubelet[2621]: I0508 00:40:14.441908 2621 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 8 00:40:14.442475 kubelet[2621]: I0508 00:40:14.442062 2621 eviction_manager.go:189] "Eviction manager: starting control loop"
May 8 00:40:14.442475 kubelet[2621]: I0508 00:40:14.442074 2621 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 8 00:40:14.442475 kubelet[2621]: I0508 00:40:14.442245 2621 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 8 00:40:14.445140 kubelet[2621]: E0508 00:40:14.445121 2621 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 8 00:40:14.495055 kubelet[2621]: I0508 00:40:14.494641 2621 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-145-87"
May 8 00:40:14.495055 kubelet[2621]: I0508 00:40:14.494673 2621 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-145-87"
May 8 00:40:14.495055 kubelet[2621]: I0508 00:40:14.494970 2621 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-237-145-87"
May 8 00:40:14.548979 kubelet[2621]: I0508 00:40:14.548955 2621 kubelet_node_status.go:76] "Attempting to register node" node="172-237-145-87"
May 8 00:40:14.554692 kubelet[2621]: I0508 00:40:14.554661 2621 kubelet_node_status.go:125] "Node was previously registered" node="172-237-145-87"
May 8 00:40:14.554836 kubelet[2621]: I0508 00:40:14.554805 2621 kubelet_node_status.go:79] "Successfully registered node" node="172-237-145-87"
May 8 00:40:14.586330 kubelet[2621]: I0508 00:40:14.586299 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d32ff5db7fa4699055a47d01be5fda91-kubeconfig\") pod \"kube-controller-manager-172-237-145-87\" (UID: \"d32ff5db7fa4699055a47d01be5fda91\") " pod="kube-system/kube-controller-manager-172-237-145-87"
May 8 00:40:14.586519 kubelet[2621]: I0508 00:40:14.586328 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d32ff5db7fa4699055a47d01be5fda91-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-145-87\" (UID: \"d32ff5db7fa4699055a47d01be5fda91\") " pod="kube-system/kube-controller-manager-172-237-145-87"
May 8 00:40:14.586519 kubelet[2621]: I0508 00:40:14.586353 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95078e269dfdf08beab881a72cb6d46e-kubeconfig\") pod \"kube-scheduler-172-237-145-87\" (UID: \"95078e269dfdf08beab881a72cb6d46e\") " pod="kube-system/kube-scheduler-172-237-145-87"
May 8 00:40:14.586519 kubelet[2621]: I0508 00:40:14.586370 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dcb3c4733e395e01f0660265f74ffab4-k8s-certs\") pod \"kube-apiserver-172-237-145-87\" (UID: \"dcb3c4733e395e01f0660265f74ffab4\") " pod="kube-system/kube-apiserver-172-237-145-87"
May 8 00:40:14.586519 kubelet[2621]: I0508 00:40:14.586389 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d32ff5db7fa4699055a47d01be5fda91-ca-certs\") pod \"kube-controller-manager-172-237-145-87\" (UID: \"d32ff5db7fa4699055a47d01be5fda91\") " pod="kube-system/kube-controller-manager-172-237-145-87"
May 8 00:40:14.586519 kubelet[2621]: I0508 00:40:14.586403 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d32ff5db7fa4699055a47d01be5fda91-flexvolume-dir\") pod \"kube-controller-manager-172-237-145-87\" (UID: \"d32ff5db7fa4699055a47d01be5fda91\") " pod="kube-system/kube-controller-manager-172-237-145-87"
May 8 00:40:14.586665 kubelet[2621]: I0508 00:40:14.586420 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d32ff5db7fa4699055a47d01be5fda91-k8s-certs\") pod \"kube-controller-manager-172-237-145-87\" (UID: \"d32ff5db7fa4699055a47d01be5fda91\") " pod="kube-system/kube-controller-manager-172-237-145-87"
May 8 00:40:14.586665 kubelet[2621]: I0508 00:40:14.586441 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dcb3c4733e395e01f0660265f74ffab4-ca-certs\") pod \"kube-apiserver-172-237-145-87\" (UID: \"dcb3c4733e395e01f0660265f74ffab4\") " pod="kube-system/kube-apiserver-172-237-145-87"
May 8 00:40:14.586665 kubelet[2621]: I0508 00:40:14.586459 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dcb3c4733e395e01f0660265f74ffab4-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-145-87\" (UID: \"dcb3c4733e395e01f0660265f74ffab4\") " pod="kube-system/kube-apiserver-172-237-145-87"
May 8 00:40:14.800043 kubelet[2621]: E0508 00:40:14.799572 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:40:14.800043 kubelet[2621]: E0508 00:40:14.799699 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:40:14.801076 kubelet[2621]: E0508 00:40:14.801027 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:40:15.359079 kubelet[2621]: I0508 00:40:15.358862 2621 apiserver.go:52] "Watching apiserver"
May 8 00:40:15.385543 kubelet[2621]: I0508 00:40:15.385506 2621 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 8 00:40:15.421047 kubelet[2621]: E0508 00:40:15.421008 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:40:15.423772 kubelet[2621]: I0508 00:40:15.421684 2621 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-145-87"
May 8 00:40:15.424782 kubelet[2621]: E0508 00:40:15.424147 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:40:15.452376 kubelet[2621]: E0508 00:40:15.452339 2621 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-237-145-87\" already exists" pod="kube-system/kube-apiserver-172-237-145-87"
May 8 00:40:15.452553 kubelet[2621]: E0508 00:40:15.452528 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:40:15.511219 kubelet[2621]: I0508 00:40:15.511037 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-237-145-87" podStartSLOduration=1.511018027 podStartE2EDuration="1.511018027s" podCreationTimestamp="2025-05-08 00:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:40:15.490358392 +0000 UTC m=+1.193942895" watchObservedRunningTime="2025-05-08 00:40:15.511018027 +0000 UTC m=+1.214602520"
May 8 00:40:15.529418 kubelet[2621]: I0508 00:40:15.529363 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-237-145-87" podStartSLOduration=1.529325007 podStartE2EDuration="1.529325007s" podCreationTimestamp="2025-05-08 00:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:40:15.513752078 +0000 UTC m=+1.217336581" watchObservedRunningTime="2025-05-08 00:40:15.529325007 +0000 UTC m=+1.232909500"
May 8 00:40:15.544407 kubelet[2621]: I0508 00:40:15.544194 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-237-145-87" podStartSLOduration=1.544183647 podStartE2EDuration="1.544183647s" podCreationTimestamp="2025-05-08 00:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:40:15.530727211 +0000 UTC m=+1.234311714" watchObservedRunningTime="2025-05-08 00:40:15.544183647 +0000 UTC m=+1.247768140"
May 8 00:40:16.422714 kubelet[2621]: E0508 00:40:16.422170 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:40:16.422714 kubelet[2621]: E0508 00:40:16.422600 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:40:17.423877 kubelet[2621]: E0508 00:40:17.423849 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:40:18.589276 kubelet[2621]: I0508 00:40:18.589245 2621 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 8 00:40:18.589826 containerd[1491]: time="2025-05-08T00:40:18.589574683Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 8 00:40:18.592073 kubelet[2621]: I0508 00:40:18.591886 2621 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 00:40:18.824907 kubelet[2621]: E0508 00:40:18.824875 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:19.034320 sudo[1707]: pam_unix(sudo:session): session closed for user root May 8 00:40:19.085737 sshd[1706]: Connection closed by 139.178.89.65 port 51680 May 8 00:40:19.086457 sshd-session[1704]: pam_unix(sshd:session): session closed for user core May 8 00:40:19.090611 systemd[1]: sshd@6-172.237.145.87:22-139.178.89.65:51680.service: Deactivated successfully. May 8 00:40:19.093480 systemd[1]: session-7.scope: Deactivated successfully. May 8 00:40:19.093687 systemd[1]: session-7.scope: Consumed 3.526s CPU time, 228.8M memory peak. May 8 00:40:19.095048 systemd-logind[1468]: Session 7 logged out. Waiting for processes to exit. May 8 00:40:19.096106 systemd-logind[1468]: Removed session 7. 
May 8 00:40:19.498868 kubelet[2621]: W0508 00:40:19.498808 2621 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:172-237-145-87" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172-237-145-87' and this object May 8 00:40:19.499308 kubelet[2621]: E0508 00:40:19.499245 2621 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:172-237-145-87\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-237-145-87' and this object" logger="UnhandledError" May 8 00:40:19.505730 systemd[1]: Created slice kubepods-besteffort-podde4a0b4e_6c98_4029_9564_e10a692f4630.slice - libcontainer container kubepods-besteffort-podde4a0b4e_6c98_4029_9564_e10a692f4630.slice. 
May 8 00:40:19.515042 kubelet[2621]: I0508 00:40:19.514925 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/de4a0b4e-6c98-4029-9564-e10a692f4630-kube-proxy\") pod \"kube-proxy-dw5jc\" (UID: \"de4a0b4e-6c98-4029-9564-e10a692f4630\") " pod="kube-system/kube-proxy-dw5jc" May 8 00:40:19.515042 kubelet[2621]: I0508 00:40:19.514951 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de4a0b4e-6c98-4029-9564-e10a692f4630-xtables-lock\") pod \"kube-proxy-dw5jc\" (UID: \"de4a0b4e-6c98-4029-9564-e10a692f4630\") " pod="kube-system/kube-proxy-dw5jc" May 8 00:40:19.515042 kubelet[2621]: I0508 00:40:19.514966 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de4a0b4e-6c98-4029-9564-e10a692f4630-lib-modules\") pod \"kube-proxy-dw5jc\" (UID: \"de4a0b4e-6c98-4029-9564-e10a692f4630\") " pod="kube-system/kube-proxy-dw5jc" May 8 00:40:19.515042 kubelet[2621]: I0508 00:40:19.514980 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9dp6\" (UniqueName: \"kubernetes.io/projected/de4a0b4e-6c98-4029-9564-e10a692f4630-kube-api-access-n9dp6\") pod \"kube-proxy-dw5jc\" (UID: \"de4a0b4e-6c98-4029-9564-e10a692f4630\") " pod="kube-system/kube-proxy-dw5jc" May 8 00:40:19.676924 systemd[1]: Created slice kubepods-besteffort-pod6e10c7d2_143f_4813_b9db_043210a93fb3.slice - libcontainer container kubepods-besteffort-pod6e10c7d2_143f_4813_b9db_043210a93fb3.slice. 
May 8 00:40:19.715901 kubelet[2621]: I0508 00:40:19.715868 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qhks\" (UniqueName: \"kubernetes.io/projected/6e10c7d2-143f-4813-b9db-043210a93fb3-kube-api-access-8qhks\") pod \"tigera-operator-789496d6f5-qqb87\" (UID: \"6e10c7d2-143f-4813-b9db-043210a93fb3\") " pod="tigera-operator/tigera-operator-789496d6f5-qqb87" May 8 00:40:19.716271 kubelet[2621]: I0508 00:40:19.715909 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6e10c7d2-143f-4813-b9db-043210a93fb3-var-lib-calico\") pod \"tigera-operator-789496d6f5-qqb87\" (UID: \"6e10c7d2-143f-4813-b9db-043210a93fb3\") " pod="tigera-operator/tigera-operator-789496d6f5-qqb87" May 8 00:40:19.981273 containerd[1491]: time="2025-05-08T00:40:19.981240465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-qqb87,Uid:6e10c7d2-143f-4813-b9db-043210a93fb3,Namespace:tigera-operator,Attempt:0,}" May 8 00:40:20.003044 containerd[1491]: time="2025-05-08T00:40:20.002897808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:20.003044 containerd[1491]: time="2025-05-08T00:40:20.002953755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:20.003044 containerd[1491]: time="2025-05-08T00:40:20.002966974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:20.003545 containerd[1491]: time="2025-05-08T00:40:20.003488829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:20.023898 systemd[1]: Started cri-containerd-8b46137a533b0b918b01a01802b906aaf399a441d55d5e372a2bbd5e3e728488.scope - libcontainer container 8b46137a533b0b918b01a01802b906aaf399a441d55d5e372a2bbd5e3e728488. May 8 00:40:20.059383 containerd[1491]: time="2025-05-08T00:40:20.059352648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-qqb87,Uid:6e10c7d2-143f-4813-b9db-043210a93fb3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8b46137a533b0b918b01a01802b906aaf399a441d55d5e372a2bbd5e3e728488\"" May 8 00:40:20.061308 containerd[1491]: time="2025-05-08T00:40:20.061271739Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 8 00:40:20.414744 kubelet[2621]: E0508 00:40:20.414424 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:20.415371 containerd[1491]: time="2025-05-08T00:40:20.415048231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dw5jc,Uid:de4a0b4e-6c98-4029-9564-e10a692f4630,Namespace:kube-system,Attempt:0,}" May 8 00:40:20.433151 containerd[1491]: time="2025-05-08T00:40:20.433063176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:20.433975 containerd[1491]: time="2025-05-08T00:40:20.433104174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:20.433975 containerd[1491]: time="2025-05-08T00:40:20.433853739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:20.433975 containerd[1491]: time="2025-05-08T00:40:20.433919003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:20.454898 systemd[1]: Started cri-containerd-6c0c63f341c20a091f39fe1f065eeac2aab9ab881e0577f114c21fd6569fbfd3.scope - libcontainer container 6c0c63f341c20a091f39fe1f065eeac2aab9ab881e0577f114c21fd6569fbfd3. May 8 00:40:20.475322 containerd[1491]: time="2025-05-08T00:40:20.475291329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dw5jc,Uid:de4a0b4e-6c98-4029-9564-e10a692f4630,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c0c63f341c20a091f39fe1f065eeac2aab9ab881e0577f114c21fd6569fbfd3\"" May 8 00:40:20.475796 kubelet[2621]: E0508 00:40:20.475751 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:20.478102 containerd[1491]: time="2025-05-08T00:40:20.478083897Z" level=info msg="CreateContainer within sandbox \"6c0c63f341c20a091f39fe1f065eeac2aab9ab881e0577f114c21fd6569fbfd3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 00:40:20.490592 containerd[1491]: time="2025-05-08T00:40:20.490565519Z" level=info msg="CreateContainer within sandbox \"6c0c63f341c20a091f39fe1f065eeac2aab9ab881e0577f114c21fd6569fbfd3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e7dada28afd2e743442734a32291383b29c7f9274279eb8d0c3435efa99499c5\"" May 8 00:40:20.490999 containerd[1491]: time="2025-05-08T00:40:20.490963334Z" level=info msg="StartContainer for \"e7dada28afd2e743442734a32291383b29c7f9274279eb8d0c3435efa99499c5\"" May 8 00:40:20.516900 systemd[1]: Started cri-containerd-e7dada28afd2e743442734a32291383b29c7f9274279eb8d0c3435efa99499c5.scope - libcontainer container 
e7dada28afd2e743442734a32291383b29c7f9274279eb8d0c3435efa99499c5. May 8 00:40:20.549659 containerd[1491]: time="2025-05-08T00:40:20.549512530Z" level=info msg="StartContainer for \"e7dada28afd2e743442734a32291383b29c7f9274279eb8d0c3435efa99499c5\" returns successfully" May 8 00:40:21.431995 kubelet[2621]: E0508 00:40:21.431950 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:21.760521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3273466357.mount: Deactivated successfully. May 8 00:40:22.627974 containerd[1491]: time="2025-05-08T00:40:22.627925030Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:22.628952 containerd[1491]: time="2025-05-08T00:40:22.628834639Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 8 00:40:22.629417 containerd[1491]: time="2025-05-08T00:40:22.629363832Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:22.631275 containerd[1491]: time="2025-05-08T00:40:22.631234460Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:22.632339 containerd[1491]: time="2025-05-08T00:40:22.631911902Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 
2.570614027s" May 8 00:40:22.632339 containerd[1491]: time="2025-05-08T00:40:22.631943331Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 8 00:40:22.634072 containerd[1491]: time="2025-05-08T00:40:22.633958255Z" level=info msg="CreateContainer within sandbox \"8b46137a533b0b918b01a01802b906aaf399a441d55d5e372a2bbd5e3e728488\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 8 00:40:22.645907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount361389641.mount: Deactivated successfully. May 8 00:40:22.647917 containerd[1491]: time="2025-05-08T00:40:22.647884545Z" level=info msg="CreateContainer within sandbox \"8b46137a533b0b918b01a01802b906aaf399a441d55d5e372a2bbd5e3e728488\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ffc8cc69094eaec5f30830105a1f4435e537382997327680819bcd533af53b92\"" May 8 00:40:22.648371 containerd[1491]: time="2025-05-08T00:40:22.648338604Z" level=info msg="StartContainer for \"ffc8cc69094eaec5f30830105a1f4435e537382997327680819bcd533af53b92\"" May 8 00:40:22.680897 systemd[1]: Started cri-containerd-ffc8cc69094eaec5f30830105a1f4435e537382997327680819bcd533af53b92.scope - libcontainer container ffc8cc69094eaec5f30830105a1f4435e537382997327680819bcd533af53b92. 
May 8 00:40:22.711449 containerd[1491]: time="2025-05-08T00:40:22.711410580Z" level=info msg="StartContainer for \"ffc8cc69094eaec5f30830105a1f4435e537382997327680819bcd533af53b92\" returns successfully" May 8 00:40:22.761703 kubelet[2621]: E0508 00:40:22.761671 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:22.777220 kubelet[2621]: I0508 00:40:22.777186 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dw5jc" podStartSLOduration=3.7771704489999998 podStartE2EDuration="3.777170449s" podCreationTimestamp="2025-05-08 00:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:40:21.445228062 +0000 UTC m=+7.148812555" watchObservedRunningTime="2025-05-08 00:40:22.777170449 +0000 UTC m=+8.480754942" May 8 00:40:23.436795 kubelet[2621]: E0508 00:40:23.436408 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:24.025494 kubelet[2621]: E0508 00:40:24.025449 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:24.037982 kubelet[2621]: I0508 00:40:24.037572 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-qqb87" podStartSLOduration=2.46558876 podStartE2EDuration="5.037554949s" podCreationTimestamp="2025-05-08 00:40:19 +0000 UTC" firstStartedPulling="2025-05-08 00:40:20.060680988 +0000 UTC m=+5.764265481" lastFinishedPulling="2025-05-08 00:40:22.632647177 +0000 UTC m=+8.336231670" 
observedRunningTime="2025-05-08 00:40:23.443421183 +0000 UTC m=+9.147005676" watchObservedRunningTime="2025-05-08 00:40:24.037554949 +0000 UTC m=+9.741139442" May 8 00:40:24.437240 kubelet[2621]: E0508 00:40:24.437135 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:24.437856 kubelet[2621]: E0508 00:40:24.437535 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:25.442427 kubelet[2621]: E0508 00:40:25.441537 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:25.650996 kubelet[2621]: I0508 00:40:25.650174 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6pj8\" (UniqueName: \"kubernetes.io/projected/e905d5a1-a2f3-4a9e-827a-5f734ee5568c-kube-api-access-t6pj8\") pod \"calico-typha-686678c7d8-x4s98\" (UID: \"e905d5a1-a2f3-4a9e-827a-5f734ee5568c\") " pod="calico-system/calico-typha-686678c7d8-x4s98" May 8 00:40:25.650996 kubelet[2621]: I0508 00:40:25.650205 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e905d5a1-a2f3-4a9e-827a-5f734ee5568c-typha-certs\") pod \"calico-typha-686678c7d8-x4s98\" (UID: \"e905d5a1-a2f3-4a9e-827a-5f734ee5568c\") " pod="calico-system/calico-typha-686678c7d8-x4s98" May 8 00:40:25.650996 kubelet[2621]: I0508 00:40:25.650222 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e905d5a1-a2f3-4a9e-827a-5f734ee5568c-tigera-ca-bundle\") pod \"calico-typha-686678c7d8-x4s98\" (UID: \"e905d5a1-a2f3-4a9e-827a-5f734ee5568c\") " pod="calico-system/calico-typha-686678c7d8-x4s98" May 8 00:40:25.650869 systemd[1]: Created slice kubepods-besteffort-pode905d5a1_a2f3_4a9e_827a_5f734ee5568c.slice - libcontainer container kubepods-besteffort-pode905d5a1_a2f3_4a9e_827a_5f734ee5568c.slice. May 8 00:40:25.668571 systemd[1]: Created slice kubepods-besteffort-pod26a5c09c_a6e6_4a70_b5da_94d9bc424fa8.slice - libcontainer container kubepods-besteffort-pod26a5c09c_a6e6_4a70_b5da_94d9bc424fa8.slice. May 8 00:40:25.751282 kubelet[2621]: I0508 00:40:25.751231 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/26a5c09c-a6e6-4a70-b5da-94d9bc424fa8-cni-net-dir\") pod \"calico-node-qxj2d\" (UID: \"26a5c09c-a6e6-4a70-b5da-94d9bc424fa8\") " pod="calico-system/calico-node-qxj2d" May 8 00:40:25.751282 kubelet[2621]: I0508 00:40:25.751272 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/26a5c09c-a6e6-4a70-b5da-94d9bc424fa8-cni-log-dir\") pod \"calico-node-qxj2d\" (UID: \"26a5c09c-a6e6-4a70-b5da-94d9bc424fa8\") " pod="calico-system/calico-node-qxj2d" May 8 00:40:25.751282 kubelet[2621]: I0508 00:40:25.751289 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/26a5c09c-a6e6-4a70-b5da-94d9bc424fa8-policysync\") pod \"calico-node-qxj2d\" (UID: \"26a5c09c-a6e6-4a70-b5da-94d9bc424fa8\") " pod="calico-system/calico-node-qxj2d" May 8 00:40:25.751488 kubelet[2621]: I0508 00:40:25.751304 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/26a5c09c-a6e6-4a70-b5da-94d9bc424fa8-node-certs\") pod \"calico-node-qxj2d\" (UID: \"26a5c09c-a6e6-4a70-b5da-94d9bc424fa8\") " pod="calico-system/calico-node-qxj2d" May 8 00:40:25.751488 kubelet[2621]: I0508 00:40:25.751318 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmb25\" (UniqueName: \"kubernetes.io/projected/26a5c09c-a6e6-4a70-b5da-94d9bc424fa8-kube-api-access-zmb25\") pod \"calico-node-qxj2d\" (UID: \"26a5c09c-a6e6-4a70-b5da-94d9bc424fa8\") " pod="calico-system/calico-node-qxj2d" May 8 00:40:25.751488 kubelet[2621]: I0508 00:40:25.751334 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26a5c09c-a6e6-4a70-b5da-94d9bc424fa8-tigera-ca-bundle\") pod \"calico-node-qxj2d\" (UID: \"26a5c09c-a6e6-4a70-b5da-94d9bc424fa8\") " pod="calico-system/calico-node-qxj2d" May 8 00:40:25.751488 kubelet[2621]: I0508 00:40:25.751358 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/26a5c09c-a6e6-4a70-b5da-94d9bc424fa8-var-run-calico\") pod \"calico-node-qxj2d\" (UID: \"26a5c09c-a6e6-4a70-b5da-94d9bc424fa8\") " pod="calico-system/calico-node-qxj2d" May 8 00:40:25.751488 kubelet[2621]: I0508 00:40:25.751370 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/26a5c09c-a6e6-4a70-b5da-94d9bc424fa8-var-lib-calico\") pod \"calico-node-qxj2d\" (UID: \"26a5c09c-a6e6-4a70-b5da-94d9bc424fa8\") " pod="calico-system/calico-node-qxj2d" May 8 00:40:25.751605 kubelet[2621]: I0508 00:40:25.751384 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/26a5c09c-a6e6-4a70-b5da-94d9bc424fa8-flexvol-driver-host\") pod \"calico-node-qxj2d\" (UID: \"26a5c09c-a6e6-4a70-b5da-94d9bc424fa8\") " pod="calico-system/calico-node-qxj2d" May 8 00:40:25.751605 kubelet[2621]: I0508 00:40:25.751404 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26a5c09c-a6e6-4a70-b5da-94d9bc424fa8-lib-modules\") pod \"calico-node-qxj2d\" (UID: \"26a5c09c-a6e6-4a70-b5da-94d9bc424fa8\") " pod="calico-system/calico-node-qxj2d" May 8 00:40:25.751605 kubelet[2621]: I0508 00:40:25.751416 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26a5c09c-a6e6-4a70-b5da-94d9bc424fa8-xtables-lock\") pod \"calico-node-qxj2d\" (UID: \"26a5c09c-a6e6-4a70-b5da-94d9bc424fa8\") " pod="calico-system/calico-node-qxj2d" May 8 00:40:25.751605 kubelet[2621]: I0508 00:40:25.751432 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/26a5c09c-a6e6-4a70-b5da-94d9bc424fa8-cni-bin-dir\") pod \"calico-node-qxj2d\" (UID: \"26a5c09c-a6e6-4a70-b5da-94d9bc424fa8\") " pod="calico-system/calico-node-qxj2d" May 8 00:40:25.786070 kubelet[2621]: E0508 00:40:25.785883 2621 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zddgj" podUID="4a17ca9d-8804-4e7c-a9df-ca043ad979cf" May 8 00:40:25.852007 kubelet[2621]: I0508 00:40:25.851938 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4a17ca9d-8804-4e7c-a9df-ca043ad979cf-varrun\") pod 
\"csi-node-driver-zddgj\" (UID: \"4a17ca9d-8804-4e7c-a9df-ca043ad979cf\") " pod="calico-system/csi-node-driver-zddgj" May 8 00:40:25.852007 kubelet[2621]: I0508 00:40:25.851985 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4a17ca9d-8804-4e7c-a9df-ca043ad979cf-kubelet-dir\") pod \"csi-node-driver-zddgj\" (UID: \"4a17ca9d-8804-4e7c-a9df-ca043ad979cf\") " pod="calico-system/csi-node-driver-zddgj" May 8 00:40:25.852007 kubelet[2621]: I0508 00:40:25.852005 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j459c\" (UniqueName: \"kubernetes.io/projected/4a17ca9d-8804-4e7c-a9df-ca043ad979cf-kube-api-access-j459c\") pod \"csi-node-driver-zddgj\" (UID: \"4a17ca9d-8804-4e7c-a9df-ca043ad979cf\") " pod="calico-system/csi-node-driver-zddgj" May 8 00:40:25.852265 kubelet[2621]: I0508 00:40:25.852034 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4a17ca9d-8804-4e7c-a9df-ca043ad979cf-socket-dir\") pod \"csi-node-driver-zddgj\" (UID: \"4a17ca9d-8804-4e7c-a9df-ca043ad979cf\") " pod="calico-system/csi-node-driver-zddgj" May 8 00:40:25.852265 kubelet[2621]: I0508 00:40:25.852051 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4a17ca9d-8804-4e7c-a9df-ca043ad979cf-registration-dir\") pod \"csi-node-driver-zddgj\" (UID: \"4a17ca9d-8804-4e7c-a9df-ca043ad979cf\") " pod="calico-system/csi-node-driver-zddgj" May 8 00:40:25.857265 kubelet[2621]: E0508 00:40:25.856827 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:25.857265 kubelet[2621]: W0508 00:40:25.856848 2621 driver-call.go:149] FlexVolume: driver call failed: 
executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:25.857265 kubelet[2621]: E0508 00:40:25.856873 2621 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:25.862259 kubelet[2621]: E0508 00:40:25.862232 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:25.862259 kubelet[2621]: W0508 00:40:25.862251 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:25.862259 kubelet[2621]: E0508 00:40:25.862264 2621 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:25.873348 kubelet[2621]: E0508 00:40:25.873317 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:25.873348 kubelet[2621]: W0508 00:40:25.873332 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:25.873348 kubelet[2621]: E0508 00:40:25.873344 2621 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:25.953062 kubelet[2621]: E0508 00:40:25.953019 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:25.953062 kubelet[2621]: W0508 00:40:25.953050 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:25.953276 kubelet[2621]: E0508 00:40:25.953068 2621 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:25.953471 kubelet[2621]: E0508 00:40:25.953432 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:25.955126 containerd[1491]: time="2025-05-08T00:40:25.955092075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-686678c7d8-x4s98,Uid:e905d5a1-a2f3-4a9e-827a-5f734ee5568c,Namespace:calico-system,Attempt:0,}" May 8 00:40:25.955624 kubelet[2621]: E0508 00:40:25.955458 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:25.955624 kubelet[2621]: W0508 00:40:25.955468 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:25.955624 kubelet[2621]: E0508 00:40:25.955479 2621 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:25.955923 kubelet[2621]: E0508 00:40:25.955884 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:25.955923 kubelet[2621]: W0508 00:40:25.955900 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:25.956117 kubelet[2621]: E0508 00:40:25.956004 2621 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:25.956331 kubelet[2621]: E0508 00:40:25.956247 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:25.956331 kubelet[2621]: W0508 00:40:25.956256 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:25.956508 kubelet[2621]: E0508 00:40:25.956340 2621 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:25.956900 kubelet[2621]: E0508 00:40:25.956874 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:25.957384 kubelet[2621]: W0508 00:40:25.956894 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:25.957384 kubelet[2621]: E0508 00:40:25.957380 2621 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:25.957661 kubelet[2621]: E0508 00:40:25.957624 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:25.957661 kubelet[2621]: W0508 00:40:25.957640 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:25.958018 kubelet[2621]: E0508 00:40:25.957997 2621 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:25.958163 kubelet[2621]: E0508 00:40:25.958127 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:25.958163 kubelet[2621]: W0508 00:40:25.958142 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:25.958395 kubelet[2621]: E0508 00:40:25.958244 2621 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:25.958532 kubelet[2621]: E0508 00:40:25.958441 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:25.958532 kubelet[2621]: W0508 00:40:25.958454 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:25.959476 kubelet[2621]: E0508 00:40:25.959451 2621 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:25.959691 kubelet[2621]: E0508 00:40:25.959669 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:25.959977 kubelet[2621]: W0508 00:40:25.959695 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:25.959977 kubelet[2621]: E0508 00:40:25.959868 2621 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:25.960088 kubelet[2621]: E0508 00:40:25.960044 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:25.960088 kubelet[2621]: W0508 00:40:25.960053 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:25.960335 kubelet[2621]: E0508 00:40:25.960135 2621 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:25.960882 kubelet[2621]: E0508 00:40:25.960864 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:25.960882 kubelet[2621]: W0508 00:40:25.960880 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:25.961181 kubelet[2621]: E0508 00:40:25.960941 2621 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:25.961325 kubelet[2621]: E0508 00:40:25.961292 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:25.961325 kubelet[2621]: W0508 00:40:25.961306 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:25.961488 kubelet[2621]: E0508 00:40:25.961396 2621 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:25.961837 kubelet[2621]: E0508 00:40:25.961808 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:25.961837 kubelet[2621]: W0508 00:40:25.961822 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:25.962051 kubelet[2621]: E0508 00:40:25.962031 2621 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:25.962964 kubelet[2621]: E0508 00:40:25.962890 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:25.962964 kubelet[2621]: W0508 00:40:25.962916 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:25.963245 kubelet[2621]: E0508 00:40:25.963116 2621 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:25.963410 kubelet[2621]: E0508 00:40:25.963361 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:25.963567 kubelet[2621]: W0508 00:40:25.963493 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:25.963696 kubelet[2621]: E0508 00:40:25.963650 2621 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:25.964931 kubelet[2621]: E0508 00:40:25.964827 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:25.964931 kubelet[2621]: W0508 00:40:25.964840 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:25.965936 kubelet[2621]: E0508 00:40:25.965814 2621 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:25.966875 kubelet[2621]: E0508 00:40:25.966862 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:25.970317 kubelet[2621]: W0508 00:40:25.966976 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:25.970476 kubelet[2621]: E0508 00:40:25.970453 2621 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:25.971993 kubelet[2621]: E0508 00:40:25.971547 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:25.971993 kubelet[2621]: W0508 00:40:25.971580 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:25.972292 kubelet[2621]: E0508 00:40:25.972276 2621 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:25.972652 kubelet[2621]: E0508 00:40:25.972628 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:25.972743 kubelet[2621]: W0508 00:40:25.972731 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:25.973049 kubelet[2621]: E0508 00:40:25.972911 2621 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:25.973249 kubelet[2621]: E0508 00:40:25.973221 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:25.973300 kubelet[2621]: W0508 00:40:25.973289 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:25.973573 kubelet[2621]: E0508 00:40:25.973544 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:25.974865 kubelet[2621]: E0508 00:40:25.974851 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:25.975016 kubelet[2621]: W0508 00:40:25.975004 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:25.976021 containerd[1491]: time="2025-05-08T00:40:25.975678683Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-node-qxj2d,Uid:26a5c09c-a6e6-4a70-b5da-94d9bc424fa8,Namespace:calico-system,Attempt:0,}" May 8 00:40:25.979420 kubelet[2621]: E0508 00:40:25.978624 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:25.979420 kubelet[2621]: W0508 00:40:25.978637 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:25.979420 kubelet[2621]: E0508 00:40:25.978648 2621 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:25.979420 kubelet[2621]: E0508 00:40:25.978670 2621 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:25.979420 kubelet[2621]: E0508 00:40:25.978929 2621 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:25.979420 kubelet[2621]: E0508 00:40:25.979101 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:25.979420 kubelet[2621]: W0508 00:40:25.979115 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:25.979420 kubelet[2621]: E0508 00:40:25.979224 2621 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:25.979857 kubelet[2621]: E0508 00:40:25.979832 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:25.979857 kubelet[2621]: W0508 00:40:25.979850 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:25.979923 kubelet[2621]: E0508 00:40:25.979902 2621 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:25.980563 kubelet[2621]: E0508 00:40:25.980440 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:25.980563 kubelet[2621]: W0508 00:40:25.980455 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:25.980563 kubelet[2621]: E0508 00:40:25.980464 2621 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:25.992823 kubelet[2621]: E0508 00:40:25.992798 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:25.992823 kubelet[2621]: W0508 00:40:25.992817 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:25.992963 kubelet[2621]: E0508 00:40:25.992829 2621 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:26.009877 containerd[1491]: time="2025-05-08T00:40:26.009709590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:26.009877 containerd[1491]: time="2025-05-08T00:40:26.009837001Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:26.011569 containerd[1491]: time="2025-05-08T00:40:26.011334136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:26.014150 containerd[1491]: time="2025-05-08T00:40:26.012718817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:26.016204 containerd[1491]: time="2025-05-08T00:40:26.015979125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:26.016204 containerd[1491]: time="2025-05-08T00:40:26.016026828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:26.016204 containerd[1491]: time="2025-05-08T00:40:26.016038243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:26.016204 containerd[1491]: time="2025-05-08T00:40:26.016122193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:26.039915 systemd[1]: Started cri-containerd-2c9b94febec625a8bb3437fd7b9547c858dce531de9ff513f3bb3ade2e2fc88e.scope - libcontainer container 2c9b94febec625a8bb3437fd7b9547c858dce531de9ff513f3bb3ade2e2fc88e. May 8 00:40:26.044320 systemd[1]: Started cri-containerd-d79b482f8981f0d29da1d89c59b81bf3d6ed74a6708fb5694e0d38a94c3ae3a8.scope - libcontainer container d79b482f8981f0d29da1d89c59b81bf3d6ed74a6708fb5694e0d38a94c3ae3a8. May 8 00:40:26.089577 containerd[1491]: time="2025-05-08T00:40:26.089519000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qxj2d,Uid:26a5c09c-a6e6-4a70-b5da-94d9bc424fa8,Namespace:calico-system,Attempt:0,} returns sandbox id \"2c9b94febec625a8bb3437fd7b9547c858dce531de9ff513f3bb3ade2e2fc88e\"" May 8 00:40:26.090752 kubelet[2621]: E0508 00:40:26.090354 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:26.091372 containerd[1491]: time="2025-05-08T00:40:26.091340370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 8 00:40:26.108334 containerd[1491]: time="2025-05-08T00:40:26.108297640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-686678c7d8-x4s98,Uid:e905d5a1-a2f3-4a9e-827a-5f734ee5568c,Namespace:calico-system,Attempt:0,} returns sandbox id \"d79b482f8981f0d29da1d89c59b81bf3d6ed74a6708fb5694e0d38a94c3ae3a8\"" May 8 00:40:26.109341 
kubelet[2621]: E0508 00:40:26.109267 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:26.736070 containerd[1491]: time="2025-05-08T00:40:26.735352329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:26.736070 containerd[1491]: time="2025-05-08T00:40:26.736025920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 8 00:40:26.736580 containerd[1491]: time="2025-05-08T00:40:26.736536474Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:26.738046 containerd[1491]: time="2025-05-08T00:40:26.738008918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:26.738732 containerd[1491]: time="2025-05-08T00:40:26.738677908Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 647.240291ms" May 8 00:40:26.738848 containerd[1491]: time="2025-05-08T00:40:26.738703690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 8 00:40:26.745340 containerd[1491]: 
time="2025-05-08T00:40:26.745203274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 8 00:40:26.746518 containerd[1491]: time="2025-05-08T00:40:26.746473521Z" level=info msg="CreateContainer within sandbox \"2c9b94febec625a8bb3437fd7b9547c858dce531de9ff513f3bb3ade2e2fc88e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 8 00:40:26.767924 containerd[1491]: time="2025-05-08T00:40:26.767894082Z" level=info msg="CreateContainer within sandbox \"2c9b94febec625a8bb3437fd7b9547c858dce531de9ff513f3bb3ade2e2fc88e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2834612c96a2340b8bf546c273d3add3c44f28eaa62e4e12871b627abec2f95e\"" May 8 00:40:26.768863 containerd[1491]: time="2025-05-08T00:40:26.768412059Z" level=info msg="StartContainer for \"2834612c96a2340b8bf546c273d3add3c44f28eaa62e4e12871b627abec2f95e\"" May 8 00:40:26.799914 systemd[1]: Started cri-containerd-2834612c96a2340b8bf546c273d3add3c44f28eaa62e4e12871b627abec2f95e.scope - libcontainer container 2834612c96a2340b8bf546c273d3add3c44f28eaa62e4e12871b627abec2f95e. May 8 00:40:26.831795 containerd[1491]: time="2025-05-08T00:40:26.831239059Z" level=info msg="StartContainer for \"2834612c96a2340b8bf546c273d3add3c44f28eaa62e4e12871b627abec2f95e\" returns successfully" May 8 00:40:26.855846 systemd[1]: cri-containerd-2834612c96a2340b8bf546c273d3add3c44f28eaa62e4e12871b627abec2f95e.scope: Deactivated successfully. May 8 00:40:26.888156 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2834612c96a2340b8bf546c273d3add3c44f28eaa62e4e12871b627abec2f95e-rootfs.mount: Deactivated successfully. 
May 8 00:40:26.921415 containerd[1491]: time="2025-05-08T00:40:26.921357893Z" level=info msg="shim disconnected" id=2834612c96a2340b8bf546c273d3add3c44f28eaa62e4e12871b627abec2f95e namespace=k8s.io May 8 00:40:26.921415 containerd[1491]: time="2025-05-08T00:40:26.921413830Z" level=warning msg="cleaning up after shim disconnected" id=2834612c96a2340b8bf546c273d3add3c44f28eaa62e4e12871b627abec2f95e namespace=k8s.io May 8 00:40:26.921415 containerd[1491]: time="2025-05-08T00:40:26.921423325Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:40:27.393982 kubelet[2621]: E0508 00:40:27.393629 2621 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zddgj" podUID="4a17ca9d-8804-4e7c-a9df-ca043ad979cf" May 8 00:40:27.453219 kubelet[2621]: E0508 00:40:27.452849 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:27.858732 containerd[1491]: time="2025-05-08T00:40:27.858684753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:27.859985 containerd[1491]: time="2025-05-08T00:40:27.859682925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 8 00:40:27.861104 containerd[1491]: time="2025-05-08T00:40:27.861074136Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:27.863188 containerd[1491]: time="2025-05-08T00:40:27.863167184Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:27.863882 containerd[1491]: time="2025-05-08T00:40:27.863862359Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 1.1186286s" May 8 00:40:27.864020 containerd[1491]: time="2025-05-08T00:40:27.863942677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 8 00:40:27.864922 containerd[1491]: time="2025-05-08T00:40:27.864904592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 8 00:40:27.882629 containerd[1491]: time="2025-05-08T00:40:27.881823070Z" level=info msg="CreateContainer within sandbox \"d79b482f8981f0d29da1d89c59b81bf3d6ed74a6708fb5694e0d38a94c3ae3a8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 8 00:40:27.898282 containerd[1491]: time="2025-05-08T00:40:27.898225086Z" level=info msg="CreateContainer within sandbox \"d79b482f8981f0d29da1d89c59b81bf3d6ed74a6708fb5694e0d38a94c3ae3a8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"42b045345a0e4a2360b8292128d29de10767fecf559d05fc99e5dcf6b5222d27\"" May 8 00:40:27.899006 containerd[1491]: time="2025-05-08T00:40:27.898975416Z" level=info msg="StartContainer for \"42b045345a0e4a2360b8292128d29de10767fecf559d05fc99e5dcf6b5222d27\"" May 8 00:40:27.932896 systemd[1]: Started cri-containerd-42b045345a0e4a2360b8292128d29de10767fecf559d05fc99e5dcf6b5222d27.scope - libcontainer container 42b045345a0e4a2360b8292128d29de10767fecf559d05fc99e5dcf6b5222d27. 
May 8 00:40:27.974896 containerd[1491]: time="2025-05-08T00:40:27.974107552Z" level=info msg="StartContainer for \"42b045345a0e4a2360b8292128d29de10767fecf559d05fc99e5dcf6b5222d27\" returns successfully" May 8 00:40:28.452537 kubelet[2621]: E0508 00:40:28.452499 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:28.832976 kubelet[2621]: E0508 00:40:28.831866 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:28.833095 update_engine[1469]: I20250508 00:40:28.832798 1469 update_attempter.cc:509] Updating boot flags... May 8 00:40:28.847852 kubelet[2621]: I0508 00:40:28.847582 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-686678c7d8-x4s98" podStartSLOduration=2.092788548 podStartE2EDuration="3.847565089s" podCreationTimestamp="2025-05-08 00:40:25 +0000 UTC" firstStartedPulling="2025-05-08 00:40:26.109998933 +0000 UTC m=+11.813583436" lastFinishedPulling="2025-05-08 00:40:27.864775484 +0000 UTC m=+13.568359977" observedRunningTime="2025-05-08 00:40:28.460916041 +0000 UTC m=+14.164500534" watchObservedRunningTime="2025-05-08 00:40:28.847565089 +0000 UTC m=+14.551149592" May 8 00:40:28.933944 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (3239) May 8 00:40:29.048923 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (3241) May 8 00:40:29.164822 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (3241) May 8 00:40:29.394012 kubelet[2621]: E0508 00:40:29.393958 2621 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zddgj" podUID="4a17ca9d-8804-4e7c-a9df-ca043ad979cf" May 8 00:40:29.453819 kubelet[2621]: I0508 00:40:29.453735 2621 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:40:29.454203 kubelet[2621]: E0508 00:40:29.454072 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:30.537804 containerd[1491]: time="2025-05-08T00:40:30.537745019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:30.538747 containerd[1491]: time="2025-05-08T00:40:30.538566268Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 8 00:40:30.539500 containerd[1491]: time="2025-05-08T00:40:30.539449102Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:30.541271 containerd[1491]: time="2025-05-08T00:40:30.541238999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:30.542536 containerd[1491]: time="2025-05-08T00:40:30.542239477Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 2.67701477s" May 8 00:40:30.542536 
containerd[1491]: time="2025-05-08T00:40:30.542471647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 8 00:40:30.544693 containerd[1491]: time="2025-05-08T00:40:30.544671923Z" level=info msg="CreateContainer within sandbox \"2c9b94febec625a8bb3437fd7b9547c858dce531de9ff513f3bb3ade2e2fc88e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 8 00:40:30.561071 containerd[1491]: time="2025-05-08T00:40:30.561028035Z" level=info msg="CreateContainer within sandbox \"2c9b94febec625a8bb3437fd7b9547c858dce531de9ff513f3bb3ade2e2fc88e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2067cfe67ed0c6ad79befd6653ff36186083e981487899a45fa3124996145054\"" May 8 00:40:30.567112 containerd[1491]: time="2025-05-08T00:40:30.566155999Z" level=info msg="StartContainer for \"2067cfe67ed0c6ad79befd6653ff36186083e981487899a45fa3124996145054\"" May 8 00:40:30.602000 systemd[1]: Started cri-containerd-2067cfe67ed0c6ad79befd6653ff36186083e981487899a45fa3124996145054.scope - libcontainer container 2067cfe67ed0c6ad79befd6653ff36186083e981487899a45fa3124996145054. May 8 00:40:30.642420 containerd[1491]: time="2025-05-08T00:40:30.642383077Z" level=info msg="StartContainer for \"2067cfe67ed0c6ad79befd6653ff36186083e981487899a45fa3124996145054\" returns successfully" May 8 00:40:31.100136 containerd[1491]: time="2025-05-08T00:40:31.099983231Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:40:31.104395 systemd[1]: cri-containerd-2067cfe67ed0c6ad79befd6653ff36186083e981487899a45fa3124996145054.scope: Deactivated successfully. 
May 8 00:40:31.104912 systemd[1]: cri-containerd-2067cfe67ed0c6ad79befd6653ff36186083e981487899a45fa3124996145054.scope: Consumed 503ms CPU time, 174.6M memory peak, 154M written to disk. May 8 00:40:31.111891 kubelet[2621]: I0508 00:40:31.111747 2621 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 8 00:40:31.140883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2067cfe67ed0c6ad79befd6653ff36186083e981487899a45fa3124996145054-rootfs.mount: Deactivated successfully. May 8 00:40:31.154783 systemd[1]: Created slice kubepods-burstable-pod2f1333b6_854b_41fd_b0fb_3eb10c2461e2.slice - libcontainer container kubepods-burstable-pod2f1333b6_854b_41fd_b0fb_3eb10c2461e2.slice. May 8 00:40:31.181491 systemd[1]: Created slice kubepods-burstable-pod100ec8f8_03d9_440e_8703_cce80bc589fd.slice - libcontainer container kubepods-burstable-pod100ec8f8_03d9_440e_8703_cce80bc589fd.slice. May 8 00:40:31.197068 kubelet[2621]: I0508 00:40:31.197044 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5b22028e-782d-431d-8b78-71ebd2d0dd5e-calico-apiserver-certs\") pod \"calico-apiserver-8467bd5dfd-82nqn\" (UID: \"5b22028e-782d-431d-8b78-71ebd2d0dd5e\") " pod="calico-apiserver/calico-apiserver-8467bd5dfd-82nqn" May 8 00:40:31.202095 kubelet[2621]: I0508 00:40:31.197667 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f1333b6-854b-41fd-b0fb-3eb10c2461e2-config-volume\") pod \"coredns-668d6bf9bc-brpjg\" (UID: \"2f1333b6-854b-41fd-b0fb-3eb10c2461e2\") " pod="kube-system/coredns-668d6bf9bc-brpjg" May 8 00:40:31.202095 kubelet[2621]: I0508 00:40:31.197733 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw88d\" (UniqueName: 
\"kubernetes.io/projected/f9d2fedb-163e-44bf-9fb8-248856ed9ef7-kube-api-access-gw88d\") pod \"calico-apiserver-8467bd5dfd-5dthz\" (UID: \"f9d2fedb-163e-44bf-9fb8-248856ed9ef7\") " pod="calico-apiserver/calico-apiserver-8467bd5dfd-5dthz" May 8 00:40:31.198250 systemd[1]: Created slice kubepods-besteffort-pode722fa22_1636_4de9_a89f_f2a8e31c4ced.slice - libcontainer container kubepods-besteffort-pode722fa22_1636_4de9_a89f_f2a8e31c4ced.slice. May 8 00:40:31.202799 kubelet[2621]: I0508 00:40:31.202774 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e722fa22-1636-4de9-a89f-f2a8e31c4ced-tigera-ca-bundle\") pod \"calico-kube-controllers-64dc6f8966-xqfsx\" (UID: \"e722fa22-1636-4de9-a89f-f2a8e31c4ced\") " pod="calico-system/calico-kube-controllers-64dc6f8966-xqfsx" May 8 00:40:31.202903 kubelet[2621]: I0508 00:40:31.202886 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/100ec8f8-03d9-440e-8703-cce80bc589fd-config-volume\") pod \"coredns-668d6bf9bc-k4bg9\" (UID: \"100ec8f8-03d9-440e-8703-cce80bc589fd\") " pod="kube-system/coredns-668d6bf9bc-k4bg9" May 8 00:40:31.202999 kubelet[2621]: I0508 00:40:31.202981 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2gjg\" (UniqueName: \"kubernetes.io/projected/5b22028e-782d-431d-8b78-71ebd2d0dd5e-kube-api-access-j2gjg\") pod \"calico-apiserver-8467bd5dfd-82nqn\" (UID: \"5b22028e-782d-431d-8b78-71ebd2d0dd5e\") " pod="calico-apiserver/calico-apiserver-8467bd5dfd-82nqn" May 8 00:40:31.204856 kubelet[2621]: I0508 00:40:31.203154 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jp26\" (UniqueName: \"kubernetes.io/projected/100ec8f8-03d9-440e-8703-cce80bc589fd-kube-api-access-8jp26\") pod 
\"coredns-668d6bf9bc-k4bg9\" (UID: \"100ec8f8-03d9-440e-8703-cce80bc589fd\") " pod="kube-system/coredns-668d6bf9bc-k4bg9" May 8 00:40:31.204945 kubelet[2621]: I0508 00:40:31.204928 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm6ql\" (UniqueName: \"kubernetes.io/projected/e722fa22-1636-4de9-a89f-f2a8e31c4ced-kube-api-access-hm6ql\") pod \"calico-kube-controllers-64dc6f8966-xqfsx\" (UID: \"e722fa22-1636-4de9-a89f-f2a8e31c4ced\") " pod="calico-system/calico-kube-controllers-64dc6f8966-xqfsx" May 8 00:40:31.205010 kubelet[2621]: I0508 00:40:31.204998 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f9d2fedb-163e-44bf-9fb8-248856ed9ef7-calico-apiserver-certs\") pod \"calico-apiserver-8467bd5dfd-5dthz\" (UID: \"f9d2fedb-163e-44bf-9fb8-248856ed9ef7\") " pod="calico-apiserver/calico-apiserver-8467bd5dfd-5dthz" May 8 00:40:31.205077 kubelet[2621]: I0508 00:40:31.205063 2621 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhmmf\" (UniqueName: \"kubernetes.io/projected/2f1333b6-854b-41fd-b0fb-3eb10c2461e2-kube-api-access-vhmmf\") pod \"coredns-668d6bf9bc-brpjg\" (UID: \"2f1333b6-854b-41fd-b0fb-3eb10c2461e2\") " pod="kube-system/coredns-668d6bf9bc-brpjg" May 8 00:40:31.223045 systemd[1]: Created slice kubepods-besteffort-pod5b22028e_782d_431d_8b78_71ebd2d0dd5e.slice - libcontainer container kubepods-besteffort-pod5b22028e_782d_431d_8b78_71ebd2d0dd5e.slice. May 8 00:40:31.235664 systemd[1]: Created slice kubepods-besteffort-podf9d2fedb_163e_44bf_9fb8_248856ed9ef7.slice - libcontainer container kubepods-besteffort-podf9d2fedb_163e_44bf_9fb8_248856ed9ef7.slice. 
May 8 00:40:31.245352 containerd[1491]: time="2025-05-08T00:40:31.245295283Z" level=info msg="shim disconnected" id=2067cfe67ed0c6ad79befd6653ff36186083e981487899a45fa3124996145054 namespace=k8s.io May 8 00:40:31.245352 containerd[1491]: time="2025-05-08T00:40:31.245352064Z" level=warning msg="cleaning up after shim disconnected" id=2067cfe67ed0c6ad79befd6653ff36186083e981487899a45fa3124996145054 namespace=k8s.io May 8 00:40:31.245536 containerd[1491]: time="2025-05-08T00:40:31.245360977Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:40:31.265808 containerd[1491]: time="2025-05-08T00:40:31.265168999Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:40:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 8 00:40:31.399288 systemd[1]: Created slice kubepods-besteffort-pod4a17ca9d_8804_4e7c_a9df_ca043ad979cf.slice - libcontainer container kubepods-besteffort-pod4a17ca9d_8804_4e7c_a9df_ca043ad979cf.slice. 
May 8 00:40:31.401896 containerd[1491]: time="2025-05-08T00:40:31.401865463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zddgj,Uid:4a17ca9d-8804-4e7c-a9df-ca043ad979cf,Namespace:calico-system,Attempt:0,}" May 8 00:40:31.463826 kubelet[2621]: E0508 00:40:31.463693 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:31.466481 containerd[1491]: time="2025-05-08T00:40:31.466270251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 8 00:40:31.471249 containerd[1491]: time="2025-05-08T00:40:31.471204006Z" level=error msg="Failed to destroy network for sandbox \"e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:31.471691 containerd[1491]: time="2025-05-08T00:40:31.471608306Z" level=error msg="encountered an error cleaning up failed sandbox \"e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:31.471691 containerd[1491]: time="2025-05-08T00:40:31.471660865Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zddgj,Uid:4a17ca9d-8804-4e7c-a9df-ca043ad979cf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 8 00:40:31.472086 kubelet[2621]: E0508 00:40:31.472055 2621 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:31.472235 kubelet[2621]: E0508 00:40:31.472212 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zddgj" May 8 00:40:31.472277 kubelet[2621]: E0508 00:40:31.472239 2621 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zddgj" May 8 00:40:31.472825 kubelet[2621]: E0508 00:40:31.472794 2621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zddgj_calico-system(4a17ca9d-8804-4e7c-a9df-ca043ad979cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zddgj_calico-system(4a17ca9d-8804-4e7c-a9df-ca043ad979cf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zddgj" podUID="4a17ca9d-8804-4e7c-a9df-ca043ad979cf" May 8 00:40:31.478616 kubelet[2621]: E0508 00:40:31.478594 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:31.479158 containerd[1491]: time="2025-05-08T00:40:31.478948183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-brpjg,Uid:2f1333b6-854b-41fd-b0fb-3eb10c2461e2,Namespace:kube-system,Attempt:0,}" May 8 00:40:31.491703 kubelet[2621]: E0508 00:40:31.491440 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:31.494740 containerd[1491]: time="2025-05-08T00:40:31.494586461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k4bg9,Uid:100ec8f8-03d9-440e-8703-cce80bc589fd,Namespace:kube-system,Attempt:0,}" May 8 00:40:31.514603 containerd[1491]: time="2025-05-08T00:40:31.514580371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64dc6f8966-xqfsx,Uid:e722fa22-1636-4de9-a89f-f2a8e31c4ced,Namespace:calico-system,Attempt:0,}" May 8 00:40:31.537645 containerd[1491]: time="2025-05-08T00:40:31.537622739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8467bd5dfd-82nqn,Uid:5b22028e-782d-431d-8b78-71ebd2d0dd5e,Namespace:calico-apiserver,Attempt:0,}" May 8 00:40:31.542213 containerd[1491]: time="2025-05-08T00:40:31.542168402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8467bd5dfd-5dthz,Uid:f9d2fedb-163e-44bf-9fb8-248856ed9ef7,Namespace:calico-apiserver,Attempt:0,}" May 8 00:40:31.627789 
containerd[1491]: time="2025-05-08T00:40:31.627700759Z" level=error msg="Failed to destroy network for sandbox \"f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:31.629253 containerd[1491]: time="2025-05-08T00:40:31.629202215Z" level=error msg="encountered an error cleaning up failed sandbox \"f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:31.629336 containerd[1491]: time="2025-05-08T00:40:31.629269490Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-brpjg,Uid:2f1333b6-854b-41fd-b0fb-3eb10c2461e2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:31.633199 kubelet[2621]: E0508 00:40:31.629896 2621 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:31.633199 kubelet[2621]: E0508 00:40:31.629943 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-brpjg" May 8 00:40:31.633199 kubelet[2621]: E0508 00:40:31.629965 2621 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-brpjg" May 8 00:40:31.633296 kubelet[2621]: E0508 00:40:31.629998 2621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-brpjg_kube-system(2f1333b6-854b-41fd-b0fb-3eb10c2461e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-brpjg_kube-system(2f1333b6-854b-41fd-b0fb-3eb10c2461e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-brpjg" podUID="2f1333b6-854b-41fd-b0fb-3eb10c2461e2" May 8 00:40:31.635223 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9-shm.mount: Deactivated successfully. 
May 8 00:40:31.645225 containerd[1491]: time="2025-05-08T00:40:31.645191323Z" level=error msg="Failed to destroy network for sandbox \"5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:31.648860 containerd[1491]: time="2025-05-08T00:40:31.645746328Z" level=error msg="encountered an error cleaning up failed sandbox \"5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:31.648860 containerd[1491]: time="2025-05-08T00:40:31.645946132Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k4bg9,Uid:100ec8f8-03d9-440e-8703-cce80bc589fd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:31.648553 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4-shm.mount: Deactivated successfully. 
May 8 00:40:31.649321 kubelet[2621]: E0508 00:40:31.646140 2621 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:31.649321 kubelet[2621]: E0508 00:40:31.646177 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k4bg9" May 8 00:40:31.649321 kubelet[2621]: E0508 00:40:31.646193 2621 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k4bg9" May 8 00:40:31.649444 kubelet[2621]: E0508 00:40:31.646228 2621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-k4bg9_kube-system(100ec8f8-03d9-440e-8703-cce80bc589fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-k4bg9_kube-system(100ec8f8-03d9-440e-8703-cce80bc589fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-k4bg9" podUID="100ec8f8-03d9-440e-8703-cce80bc589fd" May 8 00:40:31.699742 containerd[1491]: time="2025-05-08T00:40:31.699289295Z" level=error msg="Failed to destroy network for sandbox \"ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:31.699835 containerd[1491]: time="2025-05-08T00:40:31.699745724Z" level=error msg="encountered an error cleaning up failed sandbox \"ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:31.699835 containerd[1491]: time="2025-05-08T00:40:31.699814409Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8467bd5dfd-5dthz,Uid:f9d2fedb-163e-44bf-9fb8-248856ed9ef7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:31.701023 kubelet[2621]: E0508 00:40:31.699976 2621 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" May 8 00:40:31.701023 kubelet[2621]: E0508 00:40:31.700025 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8467bd5dfd-5dthz" May 8 00:40:31.701023 kubelet[2621]: E0508 00:40:31.700045 2621 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8467bd5dfd-5dthz" May 8 00:40:31.701482 containerd[1491]: time="2025-05-08T00:40:31.700695266Z" level=error msg="Failed to destroy network for sandbox \"465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:31.701482 containerd[1491]: time="2025-05-08T00:40:31.701118562Z" level=error msg="encountered an error cleaning up failed sandbox \"465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:31.701482 containerd[1491]: time="2025-05-08T00:40:31.701156346Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64dc6f8966-xqfsx,Uid:e722fa22-1636-4de9-a89f-f2a8e31c4ced,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:31.701927 kubelet[2621]: E0508 00:40:31.700078 2621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8467bd5dfd-5dthz_calico-apiserver(f9d2fedb-163e-44bf-9fb8-248856ed9ef7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8467bd5dfd-5dthz_calico-apiserver(f9d2fedb-163e-44bf-9fb8-248856ed9ef7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8467bd5dfd-5dthz" podUID="f9d2fedb-163e-44bf-9fb8-248856ed9ef7" May 8 00:40:31.702104 kubelet[2621]: E0508 00:40:31.702018 2621 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:31.702104 kubelet[2621]: E0508 00:40:31.702052 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64dc6f8966-xqfsx" May 8 00:40:31.702104 kubelet[2621]: E0508 00:40:31.702068 2621 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64dc6f8966-xqfsx" May 8 00:40:31.702800 kubelet[2621]: E0508 00:40:31.702115 2621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64dc6f8966-xqfsx_calico-system(e722fa22-1636-4de9-a89f-f2a8e31c4ced)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64dc6f8966-xqfsx_calico-system(e722fa22-1636-4de9-a89f-f2a8e31c4ced)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64dc6f8966-xqfsx" podUID="e722fa22-1636-4de9-a89f-f2a8e31c4ced" May 8 00:40:31.709494 containerd[1491]: time="2025-05-08T00:40:31.709464011Z" level=error msg="Failed to destroy network for sandbox \"747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:31.710116 containerd[1491]: time="2025-05-08T00:40:31.710094004Z" level=error msg="encountered an error cleaning up failed sandbox \"747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:31.710266 containerd[1491]: time="2025-05-08T00:40:31.710247332Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8467bd5dfd-82nqn,Uid:5b22028e-782d-431d-8b78-71ebd2d0dd5e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:31.710585 kubelet[2621]: E0508 00:40:31.710560 2621 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:31.710668 kubelet[2621]: E0508 00:40:31.710594 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-8467bd5dfd-82nqn" May 8 00:40:31.710668 kubelet[2621]: E0508 00:40:31.710613 2621 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8467bd5dfd-82nqn" May 8 00:40:31.710668 kubelet[2621]: E0508 00:40:31.710644 2621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8467bd5dfd-82nqn_calico-apiserver(5b22028e-782d-431d-8b78-71ebd2d0dd5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8467bd5dfd-82nqn_calico-apiserver(5b22028e-782d-431d-8b78-71ebd2d0dd5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8467bd5dfd-82nqn" podUID="5b22028e-782d-431d-8b78-71ebd2d0dd5e" May 8 00:40:32.466642 kubelet[2621]: I0508 00:40:32.466610 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1" May 8 00:40:32.467804 containerd[1491]: time="2025-05-08T00:40:32.467771777Z" level=info msg="StopPodSandbox for \"e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1\"" May 8 00:40:32.468006 containerd[1491]: time="2025-05-08T00:40:32.467986692Z" level=info msg="Ensure that sandbox e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1 in task-service has 
been cleanup successfully" May 8 00:40:32.469337 containerd[1491]: time="2025-05-08T00:40:32.469276668Z" level=info msg="TearDown network for sandbox \"e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1\" successfully" May 8 00:40:32.469337 containerd[1491]: time="2025-05-08T00:40:32.469299716Z" level=info msg="StopPodSandbox for \"e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1\" returns successfully" May 8 00:40:32.469475 kubelet[2621]: I0508 00:40:32.469360 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b" May 8 00:40:32.469997 containerd[1491]: time="2025-05-08T00:40:32.469789758Z" level=info msg="StopPodSandbox for \"ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b\"" May 8 00:40:32.470144 containerd[1491]: time="2025-05-08T00:40:32.470100217Z" level=info msg="Ensure that sandbox ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b in task-service has been cleanup successfully" May 8 00:40:32.470566 containerd[1491]: time="2025-05-08T00:40:32.470499908Z" level=info msg="TearDown network for sandbox \"ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b\" successfully" May 8 00:40:32.470566 containerd[1491]: time="2025-05-08T00:40:32.470514423Z" level=info msg="StopPodSandbox for \"ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b\" returns successfully" May 8 00:40:32.471308 containerd[1491]: time="2025-05-08T00:40:32.471012309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zddgj,Uid:4a17ca9d-8804-4e7c-a9df-ca043ad979cf,Namespace:calico-system,Attempt:1,}" May 8 00:40:32.471509 kubelet[2621]: I0508 00:40:32.471484 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9" May 8 00:40:32.471645 containerd[1491]: time="2025-05-08T00:40:32.471494249Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8467bd5dfd-5dthz,Uid:f9d2fedb-163e-44bf-9fb8-248856ed9ef7,Namespace:calico-apiserver,Attempt:1,}" May 8 00:40:32.472084 containerd[1491]: time="2025-05-08T00:40:32.472013892Z" level=info msg="StopPodSandbox for \"747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9\"" May 8 00:40:32.472388 containerd[1491]: time="2025-05-08T00:40:32.472248805Z" level=info msg="Ensure that sandbox 747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9 in task-service has been cleanup successfully" May 8 00:40:32.473155 containerd[1491]: time="2025-05-08T00:40:32.472861350Z" level=info msg="TearDown network for sandbox \"747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9\" successfully" May 8 00:40:32.473224 containerd[1491]: time="2025-05-08T00:40:32.473209884Z" level=info msg="StopPodSandbox for \"747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9\" returns successfully" May 8 00:40:32.475667 kubelet[2621]: I0508 00:40:32.475634 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9" May 8 00:40:32.479109 containerd[1491]: time="2025-05-08T00:40:32.478841078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8467bd5dfd-82nqn,Uid:5b22028e-782d-431d-8b78-71ebd2d0dd5e,Namespace:calico-apiserver,Attempt:1,}" May 8 00:40:32.480560 containerd[1491]: time="2025-05-08T00:40:32.480352811Z" level=info msg="StopPodSandbox for \"f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9\"" May 8 00:40:32.480560 containerd[1491]: time="2025-05-08T00:40:32.480474084Z" level=info msg="Ensure that sandbox f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9 in task-service has been cleanup successfully" May 8 00:40:32.481098 containerd[1491]: time="2025-05-08T00:40:32.481080588Z" level=info msg="TearDown network for sandbox 
\"f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9\" successfully" May 8 00:40:32.482487 containerd[1491]: time="2025-05-08T00:40:32.482368612Z" level=info msg="StopPodSandbox for \"f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9\" returns successfully" May 8 00:40:32.485477 kubelet[2621]: E0508 00:40:32.485443 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:32.487367 containerd[1491]: time="2025-05-08T00:40:32.487313085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-brpjg,Uid:2f1333b6-854b-41fd-b0fb-3eb10c2461e2,Namespace:kube-system,Attempt:1,}" May 8 00:40:32.489706 kubelet[2621]: I0508 00:40:32.489252 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5" May 8 00:40:32.490297 containerd[1491]: time="2025-05-08T00:40:32.490265505Z" level=info msg="StopPodSandbox for \"465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5\"" May 8 00:40:32.490687 containerd[1491]: time="2025-05-08T00:40:32.490660204Z" level=info msg="Ensure that sandbox 465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5 in task-service has been cleanup successfully" May 8 00:40:32.495618 containerd[1491]: time="2025-05-08T00:40:32.494748045Z" level=info msg="TearDown network for sandbox \"465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5\" successfully" May 8 00:40:32.495618 containerd[1491]: time="2025-05-08T00:40:32.495612250Z" level=info msg="StopPodSandbox for \"465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5\" returns successfully" May 8 00:40:32.498621 kubelet[2621]: I0508 00:40:32.498584 2621 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4" May 8 00:40:32.501113 containerd[1491]: time="2025-05-08T00:40:32.500507835Z" level=info msg="StopPodSandbox for \"5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4\"" May 8 00:40:32.501113 containerd[1491]: time="2025-05-08T00:40:32.500659158Z" level=info msg="Ensure that sandbox 5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4 in task-service has been cleanup successfully" May 8 00:40:32.503244 containerd[1491]: time="2025-05-08T00:40:32.503223253Z" level=info msg="TearDown network for sandbox \"5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4\" successfully" May 8 00:40:32.503358 containerd[1491]: time="2025-05-08T00:40:32.503342815Z" level=info msg="StopPodSandbox for \"5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4\" returns successfully" May 8 00:40:32.505316 containerd[1491]: time="2025-05-08T00:40:32.505285510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64dc6f8966-xqfsx,Uid:e722fa22-1636-4de9-a89f-f2a8e31c4ced,Namespace:calico-system,Attempt:1,}" May 8 00:40:32.505722 kubelet[2621]: E0508 00:40:32.505609 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:32.506351 containerd[1491]: time="2025-05-08T00:40:32.506247058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k4bg9,Uid:100ec8f8-03d9-440e-8703-cce80bc589fd,Namespace:kube-system,Attempt:1,}" May 8 00:40:32.566028 systemd[1]: run-netns-cni\x2d02822eef\x2de1c8\x2d6266\x2d4109\x2d281d51c949fb.mount: Deactivated successfully. May 8 00:40:32.566294 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b-shm.mount: Deactivated successfully. 
May 8 00:40:32.566381 systemd[1]: run-netns-cni\x2da518ee61\x2d1afc\x2df6c9\x2d5434\x2d508ed8ab2d7a.mount: Deactivated successfully. May 8 00:40:32.566451 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9-shm.mount: Deactivated successfully. May 8 00:40:32.566522 systemd[1]: run-netns-cni\x2dc334b7d7\x2d5087\x2d9469\x2d99b5\x2d8c8c3c97f746.mount: Deactivated successfully. May 8 00:40:32.566594 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5-shm.mount: Deactivated successfully. May 8 00:40:32.566667 systemd[1]: run-netns-cni\x2d80eac9b5\x2d3b71\x2d34cd\x2d8c0a\x2d7680bf416fa3.mount: Deactivated successfully. May 8 00:40:32.566738 systemd[1]: run-netns-cni\x2d41179d46\x2d755a\x2defd7\x2d6857\x2dd794df4dfdfd.mount: Deactivated successfully. May 8 00:40:32.566824 systemd[1]: run-netns-cni\x2d378fd646\x2d1634\x2d17e7\x2d4f76\x2d446b47324a19.mount: Deactivated successfully. 
May 8 00:40:32.697549 containerd[1491]: time="2025-05-08T00:40:32.697276161Z" level=error msg="Failed to destroy network for sandbox \"fa7065da0118e04f1a126e4be64179d0e738f9e757d1afb2770b67e71279329b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:32.699234 containerd[1491]: time="2025-05-08T00:40:32.699050636Z" level=error msg="encountered an error cleaning up failed sandbox \"fa7065da0118e04f1a126e4be64179d0e738f9e757d1afb2770b67e71279329b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:32.699234 containerd[1491]: time="2025-05-08T00:40:32.699112907Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8467bd5dfd-5dthz,Uid:f9d2fedb-163e-44bf-9fb8-248856ed9ef7,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"fa7065da0118e04f1a126e4be64179d0e738f9e757d1afb2770b67e71279329b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:32.699743 kubelet[2621]: E0508 00:40:32.699676 2621 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa7065da0118e04f1a126e4be64179d0e738f9e757d1afb2770b67e71279329b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:32.700377 kubelet[2621]: E0508 00:40:32.699740 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"fa7065da0118e04f1a126e4be64179d0e738f9e757d1afb2770b67e71279329b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8467bd5dfd-5dthz" May 8 00:40:32.700377 kubelet[2621]: E0508 00:40:32.699790 2621 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa7065da0118e04f1a126e4be64179d0e738f9e757d1afb2770b67e71279329b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8467bd5dfd-5dthz" May 8 00:40:32.700377 kubelet[2621]: E0508 00:40:32.699844 2621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8467bd5dfd-5dthz_calico-apiserver(f9d2fedb-163e-44bf-9fb8-248856ed9ef7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8467bd5dfd-5dthz_calico-apiserver(f9d2fedb-163e-44bf-9fb8-248856ed9ef7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa7065da0118e04f1a126e4be64179d0e738f9e757d1afb2770b67e71279329b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8467bd5dfd-5dthz" podUID="f9d2fedb-163e-44bf-9fb8-248856ed9ef7" May 8 00:40:32.730979 containerd[1491]: time="2025-05-08T00:40:32.730936124Z" level=error msg="Failed to destroy network for sandbox \"45f1bc4188b40d89deab8c14e66151ace4b21c03ffba5d31b3a90a268b77ec65\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:32.732830 containerd[1491]: time="2025-05-08T00:40:32.732369110Z" level=error msg="encountered an error cleaning up failed sandbox \"45f1bc4188b40d89deab8c14e66151ace4b21c03ffba5d31b3a90a268b77ec65\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:32.732830 containerd[1491]: time="2025-05-08T00:40:32.732430661Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zddgj,Uid:4a17ca9d-8804-4e7c-a9df-ca043ad979cf,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"45f1bc4188b40d89deab8c14e66151ace4b21c03ffba5d31b3a90a268b77ec65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:32.732923 kubelet[2621]: E0508 00:40:32.732625 2621 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45f1bc4188b40d89deab8c14e66151ace4b21c03ffba5d31b3a90a268b77ec65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:32.732923 kubelet[2621]: E0508 00:40:32.732672 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45f1bc4188b40d89deab8c14e66151ace4b21c03ffba5d31b3a90a268b77ec65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zddgj" May 8 
00:40:32.732923 kubelet[2621]: E0508 00:40:32.732693 2621 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45f1bc4188b40d89deab8c14e66151ace4b21c03ffba5d31b3a90a268b77ec65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zddgj" May 8 00:40:32.733026 kubelet[2621]: E0508 00:40:32.732729 2621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zddgj_calico-system(4a17ca9d-8804-4e7c-a9df-ca043ad979cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zddgj_calico-system(4a17ca9d-8804-4e7c-a9df-ca043ad979cf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45f1bc4188b40d89deab8c14e66151ace4b21c03ffba5d31b3a90a268b77ec65\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zddgj" podUID="4a17ca9d-8804-4e7c-a9df-ca043ad979cf" May 8 00:40:32.743193 containerd[1491]: time="2025-05-08T00:40:32.742691558Z" level=error msg="Failed to destroy network for sandbox \"d9981dcfd683d5350f687c37eaac4eefbad0bb6391cb85325f404dbd63fd4079\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:32.743193 containerd[1491]: time="2025-05-08T00:40:32.743082736Z" level=error msg="encountered an error cleaning up failed sandbox \"d9981dcfd683d5350f687c37eaac4eefbad0bb6391cb85325f404dbd63fd4079\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:32.743193 containerd[1491]: time="2025-05-08T00:40:32.743130802Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-brpjg,Uid:2f1333b6-854b-41fd-b0fb-3eb10c2461e2,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"d9981dcfd683d5350f687c37eaac4eefbad0bb6391cb85325f404dbd63fd4079\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:32.743409 kubelet[2621]: E0508 00:40:32.743355 2621 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9981dcfd683d5350f687c37eaac4eefbad0bb6391cb85325f404dbd63fd4079\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:32.743493 kubelet[2621]: E0508 00:40:32.743413 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9981dcfd683d5350f687c37eaac4eefbad0bb6391cb85325f404dbd63fd4079\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-brpjg" May 8 00:40:32.743493 kubelet[2621]: E0508 00:40:32.743434 2621 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9981dcfd683d5350f687c37eaac4eefbad0bb6391cb85325f404dbd63fd4079\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-brpjg" May 8 00:40:32.743493 kubelet[2621]: E0508 00:40:32.743480 2621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-brpjg_kube-system(2f1333b6-854b-41fd-b0fb-3eb10c2461e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-brpjg_kube-system(2f1333b6-854b-41fd-b0fb-3eb10c2461e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d9981dcfd683d5350f687c37eaac4eefbad0bb6391cb85325f404dbd63fd4079\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-brpjg" podUID="2f1333b6-854b-41fd-b0fb-3eb10c2461e2" May 8 00:40:32.746403 containerd[1491]: time="2025-05-08T00:40:32.745853933Z" level=error msg="Failed to destroy network for sandbox \"0752e2e4c1fc3c96cdabd0d8ccfe3fd2a830515bdf5ffa83b6e6b1cd12a2dacb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:32.749723 containerd[1491]: time="2025-05-08T00:40:32.749689535Z" level=error msg="encountered an error cleaning up failed sandbox \"0752e2e4c1fc3c96cdabd0d8ccfe3fd2a830515bdf5ffa83b6e6b1cd12a2dacb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:32.749942 containerd[1491]: time="2025-05-08T00:40:32.749910683Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8467bd5dfd-82nqn,Uid:5b22028e-782d-431d-8b78-71ebd2d0dd5e,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network 
for sandbox \"0752e2e4c1fc3c96cdabd0d8ccfe3fd2a830515bdf5ffa83b6e6b1cd12a2dacb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:32.750469 kubelet[2621]: E0508 00:40:32.750404 2621 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0752e2e4c1fc3c96cdabd0d8ccfe3fd2a830515bdf5ffa83b6e6b1cd12a2dacb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:32.750469 kubelet[2621]: E0508 00:40:32.750444 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0752e2e4c1fc3c96cdabd0d8ccfe3fd2a830515bdf5ffa83b6e6b1cd12a2dacb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8467bd5dfd-82nqn" May 8 00:40:32.750469 kubelet[2621]: E0508 00:40:32.750462 2621 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0752e2e4c1fc3c96cdabd0d8ccfe3fd2a830515bdf5ffa83b6e6b1cd12a2dacb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8467bd5dfd-82nqn" May 8 00:40:32.750889 kubelet[2621]: E0508 00:40:32.750497 2621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8467bd5dfd-82nqn_calico-apiserver(5b22028e-782d-431d-8b78-71ebd2d0dd5e)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-apiserver-8467bd5dfd-82nqn_calico-apiserver(5b22028e-782d-431d-8b78-71ebd2d0dd5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0752e2e4c1fc3c96cdabd0d8ccfe3fd2a830515bdf5ffa83b6e6b1cd12a2dacb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8467bd5dfd-82nqn" podUID="5b22028e-782d-431d-8b78-71ebd2d0dd5e" May 8 00:40:32.768779 containerd[1491]: time="2025-05-08T00:40:32.768367588Z" level=error msg="Failed to destroy network for sandbox \"a7b1a151b4a8f7dfb55c8eaea6094f30859a315a82c65c4b754dc7a59d5ae806\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:32.768779 containerd[1491]: time="2025-05-08T00:40:32.768667494Z" level=error msg="encountered an error cleaning up failed sandbox \"a7b1a151b4a8f7dfb55c8eaea6094f30859a315a82c65c4b754dc7a59d5ae806\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:32.768779 containerd[1491]: time="2025-05-08T00:40:32.768712019Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64dc6f8966-xqfsx,Uid:e722fa22-1636-4de9-a89f-f2a8e31c4ced,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a7b1a151b4a8f7dfb55c8eaea6094f30859a315a82c65c4b754dc7a59d5ae806\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:32.769181 kubelet[2621]: E0508 
00:40:32.768866 2621 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7b1a151b4a8f7dfb55c8eaea6094f30859a315a82c65c4b754dc7a59d5ae806\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:32.769181 kubelet[2621]: E0508 00:40:32.768902 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7b1a151b4a8f7dfb55c8eaea6094f30859a315a82c65c4b754dc7a59d5ae806\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64dc6f8966-xqfsx" May 8 00:40:32.769181 kubelet[2621]: E0508 00:40:32.768922 2621 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7b1a151b4a8f7dfb55c8eaea6094f30859a315a82c65c4b754dc7a59d5ae806\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64dc6f8966-xqfsx" May 8 00:40:32.770620 kubelet[2621]: E0508 00:40:32.769145 2621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64dc6f8966-xqfsx_calico-system(e722fa22-1636-4de9-a89f-f2a8e31c4ced)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64dc6f8966-xqfsx_calico-system(e722fa22-1636-4de9-a89f-f2a8e31c4ced)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7b1a151b4a8f7dfb55c8eaea6094f30859a315a82c65c4b754dc7a59d5ae806\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64dc6f8966-xqfsx" podUID="e722fa22-1636-4de9-a89f-f2a8e31c4ced" May 8 00:40:32.786751 containerd[1491]: time="2025-05-08T00:40:32.785038194Z" level=error msg="Failed to destroy network for sandbox \"ed8d903ce906915e4be59d2c548b1b12e2bfe2370b548a2e49f1a93e52d4d479\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:32.786751 containerd[1491]: time="2025-05-08T00:40:32.785360748Z" level=error msg="encountered an error cleaning up failed sandbox \"ed8d903ce906915e4be59d2c548b1b12e2bfe2370b548a2e49f1a93e52d4d479\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:32.786751 containerd[1491]: time="2025-05-08T00:40:32.785408515Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k4bg9,Uid:100ec8f8-03d9-440e-8703-cce80bc589fd,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ed8d903ce906915e4be59d2c548b1b12e2bfe2370b548a2e49f1a93e52d4d479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:32.787041 kubelet[2621]: E0508 00:40:32.785542 2621 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed8d903ce906915e4be59d2c548b1b12e2bfe2370b548a2e49f1a93e52d4d479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:32.787041 kubelet[2621]: E0508 00:40:32.785578 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed8d903ce906915e4be59d2c548b1b12e2bfe2370b548a2e49f1a93e52d4d479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k4bg9" May 8 00:40:32.787041 kubelet[2621]: E0508 00:40:32.785595 2621 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed8d903ce906915e4be59d2c548b1b12e2bfe2370b548a2e49f1a93e52d4d479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k4bg9" May 8 00:40:32.787580 kubelet[2621]: E0508 00:40:32.785624 2621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-k4bg9_kube-system(100ec8f8-03d9-440e-8703-cce80bc589fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-k4bg9_kube-system(100ec8f8-03d9-440e-8703-cce80bc589fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed8d903ce906915e4be59d2c548b1b12e2bfe2370b548a2e49f1a93e52d4d479\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-k4bg9" podUID="100ec8f8-03d9-440e-8703-cce80bc589fd" May 8 00:40:33.503188 kubelet[2621]: I0508 00:40:33.502537 2621 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="fa7065da0118e04f1a126e4be64179d0e738f9e757d1afb2770b67e71279329b" May 8 00:40:33.503562 containerd[1491]: time="2025-05-08T00:40:33.503227632Z" level=info msg="StopPodSandbox for \"fa7065da0118e04f1a126e4be64179d0e738f9e757d1afb2770b67e71279329b\"" May 8 00:40:33.503562 containerd[1491]: time="2025-05-08T00:40:33.503408622Z" level=info msg="Ensure that sandbox fa7065da0118e04f1a126e4be64179d0e738f9e757d1afb2770b67e71279329b in task-service has been cleanup successfully" May 8 00:40:33.504857 containerd[1491]: time="2025-05-08T00:40:33.504806252Z" level=info msg="TearDown network for sandbox \"fa7065da0118e04f1a126e4be64179d0e738f9e757d1afb2770b67e71279329b\" successfully" May 8 00:40:33.504857 containerd[1491]: time="2025-05-08T00:40:33.504828400Z" level=info msg="StopPodSandbox for \"fa7065da0118e04f1a126e4be64179d0e738f9e757d1afb2770b67e71279329b\" returns successfully" May 8 00:40:33.505930 containerd[1491]: time="2025-05-08T00:40:33.505906351Z" level=info msg="StopPodSandbox for \"ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b\"" May 8 00:40:33.506024 containerd[1491]: time="2025-05-08T00:40:33.505980286Z" level=info msg="TearDown network for sandbox \"ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b\" successfully" May 8 00:40:33.506024 containerd[1491]: time="2025-05-08T00:40:33.505997552Z" level=info msg="StopPodSandbox for \"ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b\" returns successfully" May 8 00:40:33.506616 kubelet[2621]: I0508 00:40:33.506409 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0752e2e4c1fc3c96cdabd0d8ccfe3fd2a830515bdf5ffa83b6e6b1cd12a2dacb" May 8 00:40:33.506655 containerd[1491]: time="2025-05-08T00:40:33.506510095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8467bd5dfd-5dthz,Uid:f9d2fedb-163e-44bf-9fb8-248856ed9ef7,Namespace:calico-apiserver,Attempt:2,}" May 8 00:40:33.507051 
containerd[1491]: time="2025-05-08T00:40:33.507025478Z" level=info msg="StopPodSandbox for \"0752e2e4c1fc3c96cdabd0d8ccfe3fd2a830515bdf5ffa83b6e6b1cd12a2dacb\"" May 8 00:40:33.507281 containerd[1491]: time="2025-05-08T00:40:33.507256085Z" level=info msg="Ensure that sandbox 0752e2e4c1fc3c96cdabd0d8ccfe3fd2a830515bdf5ffa83b6e6b1cd12a2dacb in task-service has been cleanup successfully" May 8 00:40:33.507888 containerd[1491]: time="2025-05-08T00:40:33.507827257Z" level=info msg="TearDown network for sandbox \"0752e2e4c1fc3c96cdabd0d8ccfe3fd2a830515bdf5ffa83b6e6b1cd12a2dacb\" successfully" May 8 00:40:33.508829 containerd[1491]: time="2025-05-08T00:40:33.507845533Z" level=info msg="StopPodSandbox for \"0752e2e4c1fc3c96cdabd0d8ccfe3fd2a830515bdf5ffa83b6e6b1cd12a2dacb\" returns successfully" May 8 00:40:33.509971 containerd[1491]: time="2025-05-08T00:40:33.509858009Z" level=info msg="StopPodSandbox for \"747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9\"" May 8 00:40:33.510988 containerd[1491]: time="2025-05-08T00:40:33.510965961Z" level=info msg="TearDown network for sandbox \"747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9\" successfully" May 8 00:40:33.510988 containerd[1491]: time="2025-05-08T00:40:33.510984247Z" level=info msg="StopPodSandbox for \"747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9\" returns successfully" May 8 00:40:33.512125 containerd[1491]: time="2025-05-08T00:40:33.511882379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8467bd5dfd-82nqn,Uid:5b22028e-782d-431d-8b78-71ebd2d0dd5e,Namespace:calico-apiserver,Attempt:2,}" May 8 00:40:33.512673 kubelet[2621]: I0508 00:40:33.512656 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9981dcfd683d5350f687c37eaac4eefbad0bb6391cb85325f404dbd63fd4079" May 8 00:40:33.514162 containerd[1491]: time="2025-05-08T00:40:33.513455687Z" level=info msg="StopPodSandbox for 
\"d9981dcfd683d5350f687c37eaac4eefbad0bb6391cb85325f404dbd63fd4079\"" May 8 00:40:33.514162 containerd[1491]: time="2025-05-08T00:40:33.514099504Z" level=info msg="Ensure that sandbox d9981dcfd683d5350f687c37eaac4eefbad0bb6391cb85325f404dbd63fd4079 in task-service has been cleanup successfully" May 8 00:40:33.514469 containerd[1491]: time="2025-05-08T00:40:33.514439598Z" level=info msg="TearDown network for sandbox \"d9981dcfd683d5350f687c37eaac4eefbad0bb6391cb85325f404dbd63fd4079\" successfully" May 8 00:40:33.514469 containerd[1491]: time="2025-05-08T00:40:33.514460996Z" level=info msg="StopPodSandbox for \"d9981dcfd683d5350f687c37eaac4eefbad0bb6391cb85325f404dbd63fd4079\" returns successfully" May 8 00:40:33.515051 containerd[1491]: time="2025-05-08T00:40:33.515021204Z" level=info msg="StopPodSandbox for \"f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9\"" May 8 00:40:33.515188 containerd[1491]: time="2025-05-08T00:40:33.515096899Z" level=info msg="TearDown network for sandbox \"f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9\" successfully" May 8 00:40:33.515188 containerd[1491]: time="2025-05-08T00:40:33.515107122Z" level=info msg="StopPodSandbox for \"f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9\" returns successfully" May 8 00:40:33.515649 kubelet[2621]: E0508 00:40:33.515635 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:33.516089 containerd[1491]: time="2025-05-08T00:40:33.516060693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-brpjg,Uid:2f1333b6-854b-41fd-b0fb-3eb10c2461e2,Namespace:kube-system,Attempt:2,}" May 8 00:40:33.516237 kubelet[2621]: I0508 00:40:33.516184 2621 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="45f1bc4188b40d89deab8c14e66151ace4b21c03ffba5d31b3a90a268b77ec65" May 8 00:40:33.517509 containerd[1491]: time="2025-05-08T00:40:33.517260156Z" level=info msg="StopPodSandbox for \"45f1bc4188b40d89deab8c14e66151ace4b21c03ffba5d31b3a90a268b77ec65\"" May 8 00:40:33.517509 containerd[1491]: time="2025-05-08T00:40:33.517430563Z" level=info msg="Ensure that sandbox 45f1bc4188b40d89deab8c14e66151ace4b21c03ffba5d31b3a90a268b77ec65 in task-service has been cleanup successfully" May 8 00:40:33.517590 containerd[1491]: time="2025-05-08T00:40:33.517574811Z" level=info msg="TearDown network for sandbox \"45f1bc4188b40d89deab8c14e66151ace4b21c03ffba5d31b3a90a268b77ec65\" successfully" May 8 00:40:33.517590 containerd[1491]: time="2025-05-08T00:40:33.517586925Z" level=info msg="StopPodSandbox for \"45f1bc4188b40d89deab8c14e66151ace4b21c03ffba5d31b3a90a268b77ec65\" returns successfully" May 8 00:40:33.521121 containerd[1491]: time="2025-05-08T00:40:33.519878915Z" level=info msg="StopPodSandbox for \"e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1\"" May 8 00:40:33.521432 kubelet[2621]: I0508 00:40:33.521205 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7b1a151b4a8f7dfb55c8eaea6094f30859a315a82c65c4b754dc7a59d5ae806" May 8 00:40:33.522176 containerd[1491]: time="2025-05-08T00:40:33.521510573Z" level=info msg="TearDown network for sandbox \"e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1\" successfully" May 8 00:40:33.522176 containerd[1491]: time="2025-05-08T00:40:33.521529039Z" level=info msg="StopPodSandbox for \"e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1\" returns successfully" May 8 00:40:33.522176 containerd[1491]: time="2025-05-08T00:40:33.522059608Z" level=info msg="StopPodSandbox for \"a7b1a151b4a8f7dfb55c8eaea6094f30859a315a82c65c4b754dc7a59d5ae806\"" May 8 00:40:33.523692 containerd[1491]: time="2025-05-08T00:40:33.522200346Z" level=info msg="Ensure that 
sandbox a7b1a151b4a8f7dfb55c8eaea6094f30859a315a82c65c4b754dc7a59d5ae806 in task-service has been cleanup successfully" May 8 00:40:33.523692 containerd[1491]: time="2025-05-08T00:40:33.522777949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zddgj,Uid:4a17ca9d-8804-4e7c-a9df-ca043ad979cf,Namespace:calico-system,Attempt:2,}" May 8 00:40:33.523692 containerd[1491]: time="2025-05-08T00:40:33.523461599Z" level=info msg="TearDown network for sandbox \"a7b1a151b4a8f7dfb55c8eaea6094f30859a315a82c65c4b754dc7a59d5ae806\" successfully" May 8 00:40:33.523692 containerd[1491]: time="2025-05-08T00:40:33.523487827Z" level=info msg="StopPodSandbox for \"a7b1a151b4a8f7dfb55c8eaea6094f30859a315a82c65c4b754dc7a59d5ae806\" returns successfully" May 8 00:40:33.523692 containerd[1491]: time="2025-05-08T00:40:33.523637378Z" level=info msg="StopPodSandbox for \"465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5\"" May 8 00:40:33.524084 containerd[1491]: time="2025-05-08T00:40:33.523702690Z" level=info msg="TearDown network for sandbox \"465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5\" successfully" May 8 00:40:33.524084 containerd[1491]: time="2025-05-08T00:40:33.523712513Z" level=info msg="StopPodSandbox for \"465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5\" returns successfully" May 8 00:40:33.526215 containerd[1491]: time="2025-05-08T00:40:33.525841308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64dc6f8966-xqfsx,Uid:e722fa22-1636-4de9-a89f-f2a8e31c4ced,Namespace:calico-system,Attempt:2,}" May 8 00:40:33.527791 containerd[1491]: time="2025-05-08T00:40:33.527619555Z" level=info msg="StopPodSandbox for \"ed8d903ce906915e4be59d2c548b1b12e2bfe2370b548a2e49f1a93e52d4d479\"" May 8 00:40:33.527791 containerd[1491]: time="2025-05-08T00:40:33.527771196Z" level=info msg="Ensure that sandbox ed8d903ce906915e4be59d2c548b1b12e2bfe2370b548a2e49f1a93e52d4d479 in task-service has 
been cleanup successfully" May 8 00:40:33.528005 kubelet[2621]: I0508 00:40:33.527244 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed8d903ce906915e4be59d2c548b1b12e2bfe2370b548a2e49f1a93e52d4d479" May 8 00:40:33.528043 containerd[1491]: time="2025-05-08T00:40:33.527904521Z" level=info msg="TearDown network for sandbox \"ed8d903ce906915e4be59d2c548b1b12e2bfe2370b548a2e49f1a93e52d4d479\" successfully" May 8 00:40:33.528043 containerd[1491]: time="2025-05-08T00:40:33.527917725Z" level=info msg="StopPodSandbox for \"ed8d903ce906915e4be59d2c548b1b12e2bfe2370b548a2e49f1a93e52d4d479\" returns successfully" May 8 00:40:33.528319 containerd[1491]: time="2025-05-08T00:40:33.528293672Z" level=info msg="StopPodSandbox for \"5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4\"" May 8 00:40:33.528391 containerd[1491]: time="2025-05-08T00:40:33.528363666Z" level=info msg="TearDown network for sandbox \"5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4\" successfully" May 8 00:40:33.528391 containerd[1491]: time="2025-05-08T00:40:33.528379741Z" level=info msg="StopPodSandbox for \"5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4\" returns successfully" May 8 00:40:33.529275 kubelet[2621]: E0508 00:40:33.529027 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:33.529398 containerd[1491]: time="2025-05-08T00:40:33.529342254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k4bg9,Uid:100ec8f8-03d9-440e-8703-cce80bc589fd,Namespace:kube-system,Attempt:2,}" May 8 00:40:33.561461 systemd[1]: run-netns-cni\x2d89d18cce\x2dcca6\x2dfe8b\x2db207\x2db4ecf1ddce90.mount: Deactivated successfully. 
May 8 00:40:33.561882 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ed8d903ce906915e4be59d2c548b1b12e2bfe2370b548a2e49f1a93e52d4d479-shm.mount: Deactivated successfully. May 8 00:40:33.561962 systemd[1]: run-netns-cni\x2d3609f406\x2d9312\x2d9752\x2d7ae8\x2d0dc7a49d532e.mount: Deactivated successfully. May 8 00:40:33.562032 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a7b1a151b4a8f7dfb55c8eaea6094f30859a315a82c65c4b754dc7a59d5ae806-shm.mount: Deactivated successfully. May 8 00:40:33.562106 systemd[1]: run-netns-cni\x2d705aa592\x2d799a\x2d5bfe\x2d416b\x2d64e51097e0af.mount: Deactivated successfully. May 8 00:40:33.562174 systemd[1]: run-netns-cni\x2dcebb8988\x2d33a0\x2d1052\x2d6149\x2d5b918ff1b184.mount: Deactivated successfully. May 8 00:40:33.562240 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d9981dcfd683d5350f687c37eaac4eefbad0bb6391cb85325f404dbd63fd4079-shm.mount: Deactivated successfully. May 8 00:40:33.562308 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0752e2e4c1fc3c96cdabd0d8ccfe3fd2a830515bdf5ffa83b6e6b1cd12a2dacb-shm.mount: Deactivated successfully. May 8 00:40:33.562378 systemd[1]: run-netns-cni\x2db92ba8dd\x2d79fc\x2deb72\x2d7830\x2dca698e719749.mount: Deactivated successfully. May 8 00:40:33.562442 systemd[1]: run-netns-cni\x2ddf95472a\x2d941a\x2dc33d\x2db874\x2db7babe773086.mount: Deactivated successfully. May 8 00:40:33.562510 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fa7065da0118e04f1a126e4be64179d0e738f9e757d1afb2770b67e71279329b-shm.mount: Deactivated successfully. May 8 00:40:33.562580 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-45f1bc4188b40d89deab8c14e66151ace4b21c03ffba5d31b3a90a268b77ec65-shm.mount: Deactivated successfully. 
May 8 00:40:33.686872 containerd[1491]: time="2025-05-08T00:40:33.686206559Z" level=error msg="Failed to destroy network for sandbox \"58ec37b751dbdc97a55651befa24f73c0fd968b55015f3df068e12e2aaaeb962\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:33.688451 containerd[1491]: time="2025-05-08T00:40:33.688412270Z" level=error msg="encountered an error cleaning up failed sandbox \"58ec37b751dbdc97a55651befa24f73c0fd968b55015f3df068e12e2aaaeb962\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:33.688538 containerd[1491]: time="2025-05-08T00:40:33.688508773Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8467bd5dfd-5dthz,Uid:f9d2fedb-163e-44bf-9fb8-248856ed9ef7,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"58ec37b751dbdc97a55651befa24f73c0fd968b55015f3df068e12e2aaaeb962\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:33.688802 kubelet[2621]: E0508 00:40:33.688746 2621 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58ec37b751dbdc97a55651befa24f73c0fd968b55015f3df068e12e2aaaeb962\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:33.688861 kubelet[2621]: E0508 00:40:33.688823 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"58ec37b751dbdc97a55651befa24f73c0fd968b55015f3df068e12e2aaaeb962\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8467bd5dfd-5dthz" May 8 00:40:33.688861 kubelet[2621]: E0508 00:40:33.688847 2621 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58ec37b751dbdc97a55651befa24f73c0fd968b55015f3df068e12e2aaaeb962\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8467bd5dfd-5dthz" May 8 00:40:33.688929 kubelet[2621]: E0508 00:40:33.688882 2621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8467bd5dfd-5dthz_calico-apiserver(f9d2fedb-163e-44bf-9fb8-248856ed9ef7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8467bd5dfd-5dthz_calico-apiserver(f9d2fedb-163e-44bf-9fb8-248856ed9ef7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"58ec37b751dbdc97a55651befa24f73c0fd968b55015f3df068e12e2aaaeb962\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8467bd5dfd-5dthz" podUID="f9d2fedb-163e-44bf-9fb8-248856ed9ef7" May 8 00:40:33.758348 containerd[1491]: time="2025-05-08T00:40:33.758185988Z" level=error msg="Failed to destroy network for sandbox \"05e02e9b9d7de79e63db0a7700ab920c7215fa70a6b3ee221c18c7455000091c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:33.759513 containerd[1491]: time="2025-05-08T00:40:33.759400177Z" level=error msg="encountered an error cleaning up failed sandbox \"05e02e9b9d7de79e63db0a7700ab920c7215fa70a6b3ee221c18c7455000091c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:33.759813 containerd[1491]: time="2025-05-08T00:40:33.759635296Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8467bd5dfd-82nqn,Uid:5b22028e-782d-431d-8b78-71ebd2d0dd5e,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"05e02e9b9d7de79e63db0a7700ab920c7215fa70a6b3ee221c18c7455000091c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:33.760929 kubelet[2621]: E0508 00:40:33.760148 2621 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05e02e9b9d7de79e63db0a7700ab920c7215fa70a6b3ee221c18c7455000091c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:33.760929 kubelet[2621]: E0508 00:40:33.760193 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05e02e9b9d7de79e63db0a7700ab920c7215fa70a6b3ee221c18c7455000091c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-8467bd5dfd-82nqn" May 8 00:40:33.760929 kubelet[2621]: E0508 00:40:33.760213 2621 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05e02e9b9d7de79e63db0a7700ab920c7215fa70a6b3ee221c18c7455000091c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8467bd5dfd-82nqn" May 8 00:40:33.761036 kubelet[2621]: E0508 00:40:33.760249 2621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8467bd5dfd-82nqn_calico-apiserver(5b22028e-782d-431d-8b78-71ebd2d0dd5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8467bd5dfd-82nqn_calico-apiserver(5b22028e-782d-431d-8b78-71ebd2d0dd5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05e02e9b9d7de79e63db0a7700ab920c7215fa70a6b3ee221c18c7455000091c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8467bd5dfd-82nqn" podUID="5b22028e-782d-431d-8b78-71ebd2d0dd5e" May 8 00:40:33.766884 containerd[1491]: time="2025-05-08T00:40:33.766861854Z" level=error msg="Failed to destroy network for sandbox \"d43a6f010b5fd1738d7377a5a4fa7dbf6e3bb7a7c714a1599d9433c9ddd1269a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:33.767654 containerd[1491]: time="2025-05-08T00:40:33.767511231Z" level=error msg="encountered an error cleaning up failed sandbox 
\"d43a6f010b5fd1738d7377a5a4fa7dbf6e3bb7a7c714a1599d9433c9ddd1269a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:33.767995 containerd[1491]: time="2025-05-08T00:40:33.767740378Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zddgj,Uid:4a17ca9d-8804-4e7c-a9df-ca043ad979cf,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"d43a6f010b5fd1738d7377a5a4fa7dbf6e3bb7a7c714a1599d9433c9ddd1269a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:33.768345 kubelet[2621]: E0508 00:40:33.768325 2621 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d43a6f010b5fd1738d7377a5a4fa7dbf6e3bb7a7c714a1599d9433c9ddd1269a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:33.769198 kubelet[2621]: E0508 00:40:33.768444 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d43a6f010b5fd1738d7377a5a4fa7dbf6e3bb7a7c714a1599d9433c9ddd1269a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zddgj" May 8 00:40:33.769198 kubelet[2621]: E0508 00:40:33.768481 2621 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d43a6f010b5fd1738d7377a5a4fa7dbf6e3bb7a7c714a1599d9433c9ddd1269a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zddgj" May 8 00:40:33.769261 containerd[1491]: time="2025-05-08T00:40:33.768589104Z" level=error msg="Failed to destroy network for sandbox \"f067a880c08339961367a5e5e32bc68183182676f19b3aa24c28dfd0dbec1cf3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:33.769339 kubelet[2621]: E0508 00:40:33.768712 2621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zddgj_calico-system(4a17ca9d-8804-4e7c-a9df-ca043ad979cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zddgj_calico-system(4a17ca9d-8804-4e7c-a9df-ca043ad979cf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d43a6f010b5fd1738d7377a5a4fa7dbf6e3bb7a7c714a1599d9433c9ddd1269a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zddgj" podUID="4a17ca9d-8804-4e7c-a9df-ca043ad979cf" May 8 00:40:33.769901 containerd[1491]: time="2025-05-08T00:40:33.769606285Z" level=error msg="encountered an error cleaning up failed sandbox \"f067a880c08339961367a5e5e32bc68183182676f19b3aa24c28dfd0dbec1cf3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:33.770321 containerd[1491]: time="2025-05-08T00:40:33.770184619Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64dc6f8966-xqfsx,Uid:e722fa22-1636-4de9-a89f-f2a8e31c4ced,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"f067a880c08339961367a5e5e32bc68183182676f19b3aa24c28dfd0dbec1cf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:33.770697 kubelet[2621]: E0508 00:40:33.770678 2621 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f067a880c08339961367a5e5e32bc68183182676f19b3aa24c28dfd0dbec1cf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:33.770973 kubelet[2621]: E0508 00:40:33.770801 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f067a880c08339961367a5e5e32bc68183182676f19b3aa24c28dfd0dbec1cf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64dc6f8966-xqfsx" May 8 00:40:33.770973 kubelet[2621]: E0508 00:40:33.770831 2621 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f067a880c08339961367a5e5e32bc68183182676f19b3aa24c28dfd0dbec1cf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64dc6f8966-xqfsx" May 8 00:40:33.771166 kubelet[2621]: E0508 00:40:33.771095 
2621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64dc6f8966-xqfsx_calico-system(e722fa22-1636-4de9-a89f-f2a8e31c4ced)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64dc6f8966-xqfsx_calico-system(e722fa22-1636-4de9-a89f-f2a8e31c4ced)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f067a880c08339961367a5e5e32bc68183182676f19b3aa24c28dfd0dbec1cf3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64dc6f8966-xqfsx" podUID="e722fa22-1636-4de9-a89f-f2a8e31c4ced" May 8 00:40:33.784545 containerd[1491]: time="2025-05-08T00:40:33.784520706Z" level=error msg="Failed to destroy network for sandbox \"56c46a472ab597cec1516d84242cba4726423739869d0cb3f172bd4df3f1cd42\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:33.785560 containerd[1491]: time="2025-05-08T00:40:33.785339340Z" level=error msg="encountered an error cleaning up failed sandbox \"56c46a472ab597cec1516d84242cba4726423739869d0cb3f172bd4df3f1cd42\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:33.785560 containerd[1491]: time="2025-05-08T00:40:33.785380394Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-brpjg,Uid:2f1333b6-854b-41fd-b0fb-3eb10c2461e2,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"56c46a472ab597cec1516d84242cba4726423739869d0cb3f172bd4df3f1cd42\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:33.786330 kubelet[2621]: E0508 00:40:33.785687 2621 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56c46a472ab597cec1516d84242cba4726423739869d0cb3f172bd4df3f1cd42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:33.786330 kubelet[2621]: E0508 00:40:33.785717 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56c46a472ab597cec1516d84242cba4726423739869d0cb3f172bd4df3f1cd42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-brpjg" May 8 00:40:33.786330 kubelet[2621]: E0508 00:40:33.785733 2621 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56c46a472ab597cec1516d84242cba4726423739869d0cb3f172bd4df3f1cd42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-brpjg" May 8 00:40:33.786457 kubelet[2621]: E0508 00:40:33.785775 2621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-brpjg_kube-system(2f1333b6-854b-41fd-b0fb-3eb10c2461e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-brpjg_kube-system(2f1333b6-854b-41fd-b0fb-3eb10c2461e2)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"56c46a472ab597cec1516d84242cba4726423739869d0cb3f172bd4df3f1cd42\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-brpjg" podUID="2f1333b6-854b-41fd-b0fb-3eb10c2461e2" May 8 00:40:33.794476 containerd[1491]: time="2025-05-08T00:40:33.794452412Z" level=error msg="Failed to destroy network for sandbox \"a7ae4d43448e86b137c6d2d644d0c10b3be0b86b383119241e89a5f951613c56\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:33.796333 containerd[1491]: time="2025-05-08T00:40:33.796266341Z" level=error msg="encountered an error cleaning up failed sandbox \"a7ae4d43448e86b137c6d2d644d0c10b3be0b86b383119241e89a5f951613c56\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:33.796995 containerd[1491]: time="2025-05-08T00:40:33.796834232Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k4bg9,Uid:100ec8f8-03d9-440e-8703-cce80bc589fd,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"a7ae4d43448e86b137c6d2d644d0c10b3be0b86b383119241e89a5f951613c56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:33.797561 kubelet[2621]: E0508 00:40:33.797523 2621 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a7ae4d43448e86b137c6d2d644d0c10b3be0b86b383119241e89a5f951613c56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:33.797724 kubelet[2621]: E0508 00:40:33.797608 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7ae4d43448e86b137c6d2d644d0c10b3be0b86b383119241e89a5f951613c56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k4bg9" May 8 00:40:33.797724 kubelet[2621]: E0508 00:40:33.797639 2621 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7ae4d43448e86b137c6d2d644d0c10b3be0b86b383119241e89a5f951613c56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k4bg9" May 8 00:40:33.798015 kubelet[2621]: E0508 00:40:33.797717 2621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-k4bg9_kube-system(100ec8f8-03d9-440e-8703-cce80bc589fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-k4bg9_kube-system(100ec8f8-03d9-440e-8703-cce80bc589fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7ae4d43448e86b137c6d2d644d0c10b3be0b86b383119241e89a5f951613c56\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-k4bg9" 
podUID="100ec8f8-03d9-440e-8703-cce80bc589fd" May 8 00:40:34.533821 kubelet[2621]: I0508 00:40:34.532563 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f067a880c08339961367a5e5e32bc68183182676f19b3aa24c28dfd0dbec1cf3" May 8 00:40:34.534398 containerd[1491]: time="2025-05-08T00:40:34.534348775Z" level=info msg="StopPodSandbox for \"f067a880c08339961367a5e5e32bc68183182676f19b3aa24c28dfd0dbec1cf3\"" May 8 00:40:34.534641 containerd[1491]: time="2025-05-08T00:40:34.534544227Z" level=info msg="Ensure that sandbox f067a880c08339961367a5e5e32bc68183182676f19b3aa24c28dfd0dbec1cf3 in task-service has been cleanup successfully" May 8 00:40:34.536063 containerd[1491]: time="2025-05-08T00:40:34.535871963Z" level=info msg="TearDown network for sandbox \"f067a880c08339961367a5e5e32bc68183182676f19b3aa24c28dfd0dbec1cf3\" successfully" May 8 00:40:34.536063 containerd[1491]: time="2025-05-08T00:40:34.535943056Z" level=info msg="StopPodSandbox for \"f067a880c08339961367a5e5e32bc68183182676f19b3aa24c28dfd0dbec1cf3\" returns successfully" May 8 00:40:34.536262 containerd[1491]: time="2025-05-08T00:40:34.536161936Z" level=info msg="StopPodSandbox for \"a7b1a151b4a8f7dfb55c8eaea6094f30859a315a82c65c4b754dc7a59d5ae806\"" May 8 00:40:34.536841 containerd[1491]: time="2025-05-08T00:40:34.536512958Z" level=info msg="TearDown network for sandbox \"a7b1a151b4a8f7dfb55c8eaea6094f30859a315a82c65c4b754dc7a59d5ae806\" successfully" May 8 00:40:34.536841 containerd[1491]: time="2025-05-08T00:40:34.536554631Z" level=info msg="StopPodSandbox for \"a7b1a151b4a8f7dfb55c8eaea6094f30859a315a82c65c4b754dc7a59d5ae806\" returns successfully" May 8 00:40:34.537788 containerd[1491]: time="2025-05-08T00:40:34.537738521Z" level=info msg="StopPodSandbox for \"465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5\"" May 8 00:40:34.538142 containerd[1491]: time="2025-05-08T00:40:34.538122484Z" level=info msg="TearDown network for sandbox 
\"465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5\" successfully" May 8 00:40:34.538142 containerd[1491]: time="2025-05-08T00:40:34.538139139Z" level=info msg="StopPodSandbox for \"465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5\" returns successfully" May 8 00:40:34.538869 kubelet[2621]: I0508 00:40:34.538807 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7ae4d43448e86b137c6d2d644d0c10b3be0b86b383119241e89a5f951613c56" May 8 00:40:34.539513 containerd[1491]: time="2025-05-08T00:40:34.538838833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64dc6f8966-xqfsx,Uid:e722fa22-1636-4de9-a89f-f2a8e31c4ced,Namespace:calico-system,Attempt:3,}" May 8 00:40:34.541520 containerd[1491]: time="2025-05-08T00:40:34.541392662Z" level=info msg="StopPodSandbox for \"a7ae4d43448e86b137c6d2d644d0c10b3be0b86b383119241e89a5f951613c56\"" May 8 00:40:34.542206 containerd[1491]: time="2025-05-08T00:40:34.541852749Z" level=info msg="Ensure that sandbox a7ae4d43448e86b137c6d2d644d0c10b3be0b86b383119241e89a5f951613c56 in task-service has been cleanup successfully" May 8 00:40:34.544982 containerd[1491]: time="2025-05-08T00:40:34.544888081Z" level=info msg="TearDown network for sandbox \"a7ae4d43448e86b137c6d2d644d0c10b3be0b86b383119241e89a5f951613c56\" successfully" May 8 00:40:34.544982 containerd[1491]: time="2025-05-08T00:40:34.544934126Z" level=info msg="StopPodSandbox for \"a7ae4d43448e86b137c6d2d644d0c10b3be0b86b383119241e89a5f951613c56\" returns successfully" May 8 00:40:34.545809 containerd[1491]: time="2025-05-08T00:40:34.545790021Z" level=info msg="StopPodSandbox for \"ed8d903ce906915e4be59d2c548b1b12e2bfe2370b548a2e49f1a93e52d4d479\"" May 8 00:40:34.546027 kubelet[2621]: I0508 00:40:34.546012 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d43a6f010b5fd1738d7377a5a4fa7dbf6e3bb7a7c714a1599d9433c9ddd1269a" May 8 00:40:34.547116 
containerd[1491]: time="2025-05-08T00:40:34.546892684Z" level=info msg="TearDown network for sandbox \"ed8d903ce906915e4be59d2c548b1b12e2bfe2370b548a2e49f1a93e52d4d479\" successfully" May 8 00:40:34.547116 containerd[1491]: time="2025-05-08T00:40:34.547109144Z" level=info msg="StopPodSandbox for \"ed8d903ce906915e4be59d2c548b1b12e2bfe2370b548a2e49f1a93e52d4d479\" returns successfully" May 8 00:40:34.549276 containerd[1491]: time="2025-05-08T00:40:34.549157000Z" level=info msg="StopPodSandbox for \"5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4\"" May 8 00:40:34.550135 containerd[1491]: time="2025-05-08T00:40:34.549865727Z" level=info msg="TearDown network for sandbox \"5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4\" successfully" May 8 00:40:34.550480 containerd[1491]: time="2025-05-08T00:40:34.550446102Z" level=info msg="StopPodSandbox for \"5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4\" returns successfully" May 8 00:40:34.551225 containerd[1491]: time="2025-05-08T00:40:34.550028219Z" level=info msg="StopPodSandbox for \"d43a6f010b5fd1738d7377a5a4fa7dbf6e3bb7a7c714a1599d9433c9ddd1269a\"" May 8 00:40:34.551225 containerd[1491]: time="2025-05-08T00:40:34.551024708Z" level=info msg="Ensure that sandbox d43a6f010b5fd1738d7377a5a4fa7dbf6e3bb7a7c714a1599d9433c9ddd1269a in task-service has been cleanup successfully" May 8 00:40:34.551225 containerd[1491]: time="2025-05-08T00:40:34.551162072Z" level=info msg="TearDown network for sandbox \"d43a6f010b5fd1738d7377a5a4fa7dbf6e3bb7a7c714a1599d9433c9ddd1269a\" successfully" May 8 00:40:34.551225 containerd[1491]: time="2025-05-08T00:40:34.551174186Z" level=info msg="StopPodSandbox for \"d43a6f010b5fd1738d7377a5a4fa7dbf6e3bb7a7c714a1599d9433c9ddd1269a\" returns successfully" May 8 00:40:34.551819 kubelet[2621]: E0508 00:40:34.551465 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:34.552863 containerd[1491]: time="2025-05-08T00:40:34.552140205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k4bg9,Uid:100ec8f8-03d9-440e-8703-cce80bc589fd,Namespace:kube-system,Attempt:3,}" May 8 00:40:34.553680 containerd[1491]: time="2025-05-08T00:40:34.553379703Z" level=info msg="StopPodSandbox for \"45f1bc4188b40d89deab8c14e66151ace4b21c03ffba5d31b3a90a268b77ec65\"" May 8 00:40:34.558390 kubelet[2621]: I0508 00:40:34.554560 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58ec37b751dbdc97a55651befa24f73c0fd968b55015f3df068e12e2aaaeb962" May 8 00:40:34.558433 containerd[1491]: time="2025-05-08T00:40:34.555362948Z" level=info msg="TearDown network for sandbox \"45f1bc4188b40d89deab8c14e66151ace4b21c03ffba5d31b3a90a268b77ec65\" successfully" May 8 00:40:34.558433 containerd[1491]: time="2025-05-08T00:40:34.555378543Z" level=info msg="StopPodSandbox for \"45f1bc4188b40d89deab8c14e66151ace4b21c03ffba5d31b3a90a268b77ec65\" returns successfully" May 8 00:40:34.560583 containerd[1491]: time="2025-05-08T00:40:34.560241511Z" level=info msg="StopPodSandbox for \"58ec37b751dbdc97a55651befa24f73c0fd968b55015f3df068e12e2aaaeb962\"" May 8 00:40:34.561426 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-56c46a472ab597cec1516d84242cba4726423739869d0cb3f172bd4df3f1cd42-shm.mount: Deactivated successfully. May 8 00:40:34.561528 systemd[1]: run-netns-cni\x2d639fb441\x2d5e4e\x2dcb4a\x2d50e4\x2d20dacda87635.mount: Deactivated successfully. May 8 00:40:34.561603 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f067a880c08339961367a5e5e32bc68183182676f19b3aa24c28dfd0dbec1cf3-shm.mount: Deactivated successfully. May 8 00:40:34.561677 systemd[1]: run-netns-cni\x2d4da57fa8\x2dcdcb\x2db336\x2d5353\x2d3432214460d8.mount: Deactivated successfully. 
May 8 00:40:34.561742 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d43a6f010b5fd1738d7377a5a4fa7dbf6e3bb7a7c714a1599d9433c9ddd1269a-shm.mount: Deactivated successfully. May 8 00:40:34.563422 containerd[1491]: time="2025-05-08T00:40:34.562976177Z" level=info msg="Ensure that sandbox 58ec37b751dbdc97a55651befa24f73c0fd968b55015f3df068e12e2aaaeb962 in task-service has been cleanup successfully" May 8 00:40:34.563169 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-05e02e9b9d7de79e63db0a7700ab920c7215fa70a6b3ee221c18c7455000091c-shm.mount: Deactivated successfully. May 8 00:40:34.563249 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-58ec37b751dbdc97a55651befa24f73c0fd968b55015f3df068e12e2aaaeb962-shm.mount: Deactivated successfully. May 8 00:40:34.570906 containerd[1491]: time="2025-05-08T00:40:34.564188336Z" level=info msg="StopPodSandbox for \"e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1\"" May 8 00:40:34.570906 containerd[1491]: time="2025-05-08T00:40:34.568510671Z" level=info msg="TearDown network for sandbox \"e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1\" successfully" May 8 00:40:34.570906 containerd[1491]: time="2025-05-08T00:40:34.568520904Z" level=info msg="StopPodSandbox for \"e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1\" returns successfully" May 8 00:40:34.571313 containerd[1491]: time="2025-05-08T00:40:34.571000268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zddgj,Uid:4a17ca9d-8804-4e7c-a9df-ca043ad979cf,Namespace:calico-system,Attempt:3,}" May 8 00:40:34.572354 systemd[1]: run-netns-cni\x2d5aadc437\x2de154\x2d72ca\x2dcb97\x2daf444485f199.mount: Deactivated successfully. 
May 8 00:40:34.576773 containerd[1491]: time="2025-05-08T00:40:34.574811819Z" level=info msg="TearDown network for sandbox \"58ec37b751dbdc97a55651befa24f73c0fd968b55015f3df068e12e2aaaeb962\" successfully" May 8 00:40:34.576773 containerd[1491]: time="2025-05-08T00:40:34.574832826Z" level=info msg="StopPodSandbox for \"58ec37b751dbdc97a55651befa24f73c0fd968b55015f3df068e12e2aaaeb962\" returns successfully" May 8 00:40:34.583450 containerd[1491]: time="2025-05-08T00:40:34.583418148Z" level=info msg="StopPodSandbox for \"fa7065da0118e04f1a126e4be64179d0e738f9e757d1afb2770b67e71279329b\"" May 8 00:40:34.583721 containerd[1491]: time="2025-05-08T00:40:34.583697237Z" level=info msg="TearDown network for sandbox \"fa7065da0118e04f1a126e4be64179d0e738f9e757d1afb2770b67e71279329b\" successfully" May 8 00:40:34.583721 containerd[1491]: time="2025-05-08T00:40:34.583708320Z" level=info msg="StopPodSandbox for \"fa7065da0118e04f1a126e4be64179d0e738f9e757d1afb2770b67e71279329b\" returns successfully" May 8 00:40:34.585304 kubelet[2621]: I0508 00:40:34.585239 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05e02e9b9d7de79e63db0a7700ab920c7215fa70a6b3ee221c18c7455000091c" May 8 00:40:34.586570 containerd[1491]: time="2025-05-08T00:40:34.586528254Z" level=info msg="StopPodSandbox for \"ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b\"" May 8 00:40:34.587271 containerd[1491]: time="2025-05-08T00:40:34.587232750Z" level=info msg="TearDown network for sandbox \"ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b\" successfully" May 8 00:40:34.587271 containerd[1491]: time="2025-05-08T00:40:34.587251966Z" level=info msg="StopPodSandbox for \"ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b\" returns successfully" May 8 00:40:34.588773 containerd[1491]: time="2025-05-08T00:40:34.588615072Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-8467bd5dfd-5dthz,Uid:f9d2fedb-163e-44bf-9fb8-248856ed9ef7,Namespace:calico-apiserver,Attempt:3,}" May 8 00:40:34.589652 containerd[1491]: time="2025-05-08T00:40:34.589589755Z" level=info msg="StopPodSandbox for \"05e02e9b9d7de79e63db0a7700ab920c7215fa70a6b3ee221c18c7455000091c\"" May 8 00:40:34.590378 containerd[1491]: time="2025-05-08T00:40:34.590036818Z" level=info msg="Ensure that sandbox 05e02e9b9d7de79e63db0a7700ab920c7215fa70a6b3ee221c18c7455000091c in task-service has been cleanup successfully" May 8 00:40:34.594949 containerd[1491]: time="2025-05-08T00:40:34.592956923Z" level=info msg="TearDown network for sandbox \"05e02e9b9d7de79e63db0a7700ab920c7215fa70a6b3ee221c18c7455000091c\" successfully" May 8 00:40:34.594949 containerd[1491]: time="2025-05-08T00:40:34.592973729Z" level=info msg="StopPodSandbox for \"05e02e9b9d7de79e63db0a7700ab920c7215fa70a6b3ee221c18c7455000091c\" returns successfully" May 8 00:40:34.596042 systemd[1]: run-netns-cni\x2d0945258a\x2d3a38\x2d1cdd\x2d90f5\x2d49fb9ebb0134.mount: Deactivated successfully. 
May 8 00:40:34.597945 containerd[1491]: time="2025-05-08T00:40:34.597926726Z" level=info msg="StopPodSandbox for \"0752e2e4c1fc3c96cdabd0d8ccfe3fd2a830515bdf5ffa83b6e6b1cd12a2dacb\"" May 8 00:40:34.599297 containerd[1491]: time="2025-05-08T00:40:34.599245219Z" level=info msg="TearDown network for sandbox \"0752e2e4c1fc3c96cdabd0d8ccfe3fd2a830515bdf5ffa83b6e6b1cd12a2dacb\" successfully" May 8 00:40:34.600444 kubelet[2621]: I0508 00:40:34.600426 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56c46a472ab597cec1516d84242cba4726423739869d0cb3f172bd4df3f1cd42" May 8 00:40:34.600942 containerd[1491]: time="2025-05-08T00:40:34.600909992Z" level=info msg="StopPodSandbox for \"0752e2e4c1fc3c96cdabd0d8ccfe3fd2a830515bdf5ffa83b6e6b1cd12a2dacb\" returns successfully" May 8 00:40:34.601735 containerd[1491]: time="2025-05-08T00:40:34.601716651Z" level=info msg="StopPodSandbox for \"747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9\"" May 8 00:40:34.602001 containerd[1491]: time="2025-05-08T00:40:34.601921336Z" level=info msg="TearDown network for sandbox \"747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9\" successfully" May 8 00:40:34.603488 containerd[1491]: time="2025-05-08T00:40:34.603461930Z" level=info msg="StopPodSandbox for \"747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9\" returns successfully" May 8 00:40:34.604069 containerd[1491]: time="2025-05-08T00:40:34.602330717Z" level=info msg="StopPodSandbox for \"56c46a472ab597cec1516d84242cba4726423739869d0cb3f172bd4df3f1cd42\"" May 8 00:40:34.605212 containerd[1491]: time="2025-05-08T00:40:34.605181851Z" level=info msg="Ensure that sandbox 56c46a472ab597cec1516d84242cba4726423739869d0cb3f172bd4df3f1cd42 in task-service has been cleanup successfully" May 8 00:40:34.606335 containerd[1491]: time="2025-05-08T00:40:34.605444765Z" level=info msg="TearDown network for sandbox 
\"56c46a472ab597cec1516d84242cba4726423739869d0cb3f172bd4df3f1cd42\" successfully" May 8 00:40:34.606335 containerd[1491]: time="2025-05-08T00:40:34.606278122Z" level=info msg="StopPodSandbox for \"56c46a472ab597cec1516d84242cba4726423739869d0cb3f172bd4df3f1cd42\" returns successfully" May 8 00:40:34.606598 containerd[1491]: time="2025-05-08T00:40:34.605686393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8467bd5dfd-82nqn,Uid:5b22028e-782d-431d-8b78-71ebd2d0dd5e,Namespace:calico-apiserver,Attempt:3,}" May 8 00:40:34.608237 containerd[1491]: time="2025-05-08T00:40:34.608116531Z" level=info msg="StopPodSandbox for \"d9981dcfd683d5350f687c37eaac4eefbad0bb6391cb85325f404dbd63fd4079\"" May 8 00:40:34.608237 containerd[1491]: time="2025-05-08T00:40:34.608189764Z" level=info msg="TearDown network for sandbox \"d9981dcfd683d5350f687c37eaac4eefbad0bb6391cb85325f404dbd63fd4079\" successfully" May 8 00:40:34.608237 containerd[1491]: time="2025-05-08T00:40:34.608199658Z" level=info msg="StopPodSandbox for \"d9981dcfd683d5350f687c37eaac4eefbad0bb6391cb85325f404dbd63fd4079\" returns successfully" May 8 00:40:34.611539 containerd[1491]: time="2025-05-08T00:40:34.611502896Z" level=info msg="StopPodSandbox for \"f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9\"" May 8 00:40:34.612360 containerd[1491]: time="2025-05-08T00:40:34.612132618Z" level=info msg="TearDown network for sandbox \"f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9\" successfully" May 8 00:40:34.612360 containerd[1491]: time="2025-05-08T00:40:34.612355159Z" level=info msg="StopPodSandbox for \"f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9\" returns successfully" May 8 00:40:34.612952 kubelet[2621]: E0508 00:40:34.612894 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:34.616633 
containerd[1491]: time="2025-05-08T00:40:34.616029527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-brpjg,Uid:2f1333b6-854b-41fd-b0fb-3eb10c2461e2,Namespace:kube-system,Attempt:3,}" May 8 00:40:34.765788 containerd[1491]: time="2025-05-08T00:40:34.765704255Z" level=error msg="Failed to destroy network for sandbox \"f93c4ce7b7dc46206854e36e8af1949ccc45d66e6436abd3879374357fb4df9a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.767504 containerd[1491]: time="2025-05-08T00:40:34.767114196Z" level=error msg="encountered an error cleaning up failed sandbox \"f93c4ce7b7dc46206854e36e8af1949ccc45d66e6436abd3879374357fb4df9a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.767575 containerd[1491]: time="2025-05-08T00:40:34.767530050Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64dc6f8966-xqfsx,Uid:e722fa22-1636-4de9-a89f-f2a8e31c4ced,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"f93c4ce7b7dc46206854e36e8af1949ccc45d66e6436abd3879374357fb4df9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.768161 kubelet[2621]: E0508 00:40:34.768123 2621 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f93c4ce7b7dc46206854e36e8af1949ccc45d66e6436abd3879374357fb4df9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" May 8 00:40:34.768298 kubelet[2621]: E0508 00:40:34.768266 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f93c4ce7b7dc46206854e36e8af1949ccc45d66e6436abd3879374357fb4df9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64dc6f8966-xqfsx" May 8 00:40:34.768435 kubelet[2621]: E0508 00:40:34.768374 2621 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f93c4ce7b7dc46206854e36e8af1949ccc45d66e6436abd3879374357fb4df9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64dc6f8966-xqfsx" May 8 00:40:34.769497 containerd[1491]: time="2025-05-08T00:40:34.768990737Z" level=error msg="Failed to destroy network for sandbox \"358f015ad9968aec1b9e60bf388cb25b606e34fb2f1fe940ad40da30d1eadde1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.769497 containerd[1491]: time="2025-05-08T00:40:34.769284681Z" level=error msg="encountered an error cleaning up failed sandbox \"358f015ad9968aec1b9e60bf388cb25b606e34fb2f1fe940ad40da30d1eadde1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.769497 containerd[1491]: time="2025-05-08T00:40:34.769331367Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k4bg9,Uid:100ec8f8-03d9-440e-8703-cce80bc589fd,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"358f015ad9968aec1b9e60bf388cb25b606e34fb2f1fe940ad40da30d1eadde1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.770776 kubelet[2621]: E0508 00:40:34.770111 2621 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"358f015ad9968aec1b9e60bf388cb25b606e34fb2f1fe940ad40da30d1eadde1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.770776 kubelet[2621]: E0508 00:40:34.770142 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"358f015ad9968aec1b9e60bf388cb25b606e34fb2f1fe940ad40da30d1eadde1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k4bg9" May 8 00:40:34.770776 kubelet[2621]: E0508 00:40:34.770162 2621 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"358f015ad9968aec1b9e60bf388cb25b606e34fb2f1fe940ad40da30d1eadde1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k4bg9" May 8 00:40:34.770867 kubelet[2621]: E0508 00:40:34.770206 2621 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-k4bg9_kube-system(100ec8f8-03d9-440e-8703-cce80bc589fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-k4bg9_kube-system(100ec8f8-03d9-440e-8703-cce80bc589fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"358f015ad9968aec1b9e60bf388cb25b606e34fb2f1fe940ad40da30d1eadde1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-k4bg9" podUID="100ec8f8-03d9-440e-8703-cce80bc589fd" May 8 00:40:34.771254 kubelet[2621]: E0508 00:40:34.771036 2621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64dc6f8966-xqfsx_calico-system(e722fa22-1636-4de9-a89f-f2a8e31c4ced)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64dc6f8966-xqfsx_calico-system(e722fa22-1636-4de9-a89f-f2a8e31c4ced)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f93c4ce7b7dc46206854e36e8af1949ccc45d66e6436abd3879374357fb4df9a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64dc6f8966-xqfsx" podUID="e722fa22-1636-4de9-a89f-f2a8e31c4ced" May 8 00:40:34.823080 containerd[1491]: time="2025-05-08T00:40:34.821598704Z" level=error msg="Failed to destroy network for sandbox \"b40e8130042155094025467608123b8e0399b5eae48c78f44df189d00a1dead1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.823080 containerd[1491]: time="2025-05-08T00:40:34.822059972Z" 
level=error msg="encountered an error cleaning up failed sandbox \"b40e8130042155094025467608123b8e0399b5eae48c78f44df189d00a1dead1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.823080 containerd[1491]: time="2025-05-08T00:40:34.822118470Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8467bd5dfd-82nqn,Uid:5b22028e-782d-431d-8b78-71ebd2d0dd5e,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"b40e8130042155094025467608123b8e0399b5eae48c78f44df189d00a1dead1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.823396 kubelet[2621]: E0508 00:40:34.822585 2621 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b40e8130042155094025467608123b8e0399b5eae48c78f44df189d00a1dead1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.823396 kubelet[2621]: E0508 00:40:34.822637 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b40e8130042155094025467608123b8e0399b5eae48c78f44df189d00a1dead1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8467bd5dfd-82nqn" May 8 00:40:34.823396 kubelet[2621]: E0508 00:40:34.822656 2621 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b40e8130042155094025467608123b8e0399b5eae48c78f44df189d00a1dead1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8467bd5dfd-82nqn" May 8 00:40:34.823552 kubelet[2621]: E0508 00:40:34.822700 2621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8467bd5dfd-82nqn_calico-apiserver(5b22028e-782d-431d-8b78-71ebd2d0dd5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8467bd5dfd-82nqn_calico-apiserver(5b22028e-782d-431d-8b78-71ebd2d0dd5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b40e8130042155094025467608123b8e0399b5eae48c78f44df189d00a1dead1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8467bd5dfd-82nqn" podUID="5b22028e-782d-431d-8b78-71ebd2d0dd5e" May 8 00:40:34.827947 containerd[1491]: time="2025-05-08T00:40:34.827907225Z" level=error msg="Failed to destroy network for sandbox \"2fad8fc95c1f7bc5173d78cef14bae77a709fb3f7538a4f8bd917d24cd992f77\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.828811 containerd[1491]: time="2025-05-08T00:40:34.828399403Z" level=error msg="Failed to destroy network for sandbox \"002f1030982970476d0aff339dbf3ae6da9692c85bec819c8ff642d9c34ef3f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 
8 00:40:34.828811 containerd[1491]: time="2025-05-08T00:40:34.828642160Z" level=error msg="encountered an error cleaning up failed sandbox \"2fad8fc95c1f7bc5173d78cef14bae77a709fb3f7538a4f8bd917d24cd992f77\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:40:34.828811 containerd[1491]: time="2025-05-08T00:40:34.828700249Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8467bd5dfd-5dthz,Uid:f9d2fedb-163e-44bf-9fb8-248856ed9ef7,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"2fad8fc95c1f7bc5173d78cef14bae77a709fb3f7538a4f8bd917d24cd992f77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:40:34.828941 kubelet[2621]: E0508 00:40:34.828885 2621 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fad8fc95c1f7bc5173d78cef14bae77a709fb3f7538a4f8bd917d24cd992f77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:40:34.828941 kubelet[2621]: E0508 00:40:34.828924 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fad8fc95c1f7bc5173d78cef14bae77a709fb3f7538a4f8bd917d24cd992f77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8467bd5dfd-5dthz"
May 8 00:40:34.829003 kubelet[2621]: E0508 00:40:34.828946 2621 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fad8fc95c1f7bc5173d78cef14bae77a709fb3f7538a4f8bd917d24cd992f77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8467bd5dfd-5dthz"
May 8 00:40:34.829003 kubelet[2621]: E0508 00:40:34.828981 2621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8467bd5dfd-5dthz_calico-apiserver(f9d2fedb-163e-44bf-9fb8-248856ed9ef7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8467bd5dfd-5dthz_calico-apiserver(f9d2fedb-163e-44bf-9fb8-248856ed9ef7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2fad8fc95c1f7bc5173d78cef14bae77a709fb3f7538a4f8bd917d24cd992f77\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8467bd5dfd-5dthz" podUID="f9d2fedb-163e-44bf-9fb8-248856ed9ef7"
May 8 00:40:34.831127 containerd[1491]: time="2025-05-08T00:40:34.830951150Z" level=error msg="encountered an error cleaning up failed sandbox \"002f1030982970476d0aff339dbf3ae6da9692c85bec819c8ff642d9c34ef3f6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:40:34.831127 containerd[1491]: time="2025-05-08T00:40:34.831001476Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zddgj,Uid:4a17ca9d-8804-4e7c-a9df-ca043ad979cf,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"002f1030982970476d0aff339dbf3ae6da9692c85bec819c8ff642d9c34ef3f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:40:34.831597 kubelet[2621]: E0508 00:40:34.831254 2621 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"002f1030982970476d0aff339dbf3ae6da9692c85bec819c8ff642d9c34ef3f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:40:34.831597 kubelet[2621]: E0508 00:40:34.831614 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"002f1030982970476d0aff339dbf3ae6da9692c85bec819c8ff642d9c34ef3f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zddgj"
May 8 00:40:34.831597 kubelet[2621]: E0508 00:40:34.831635 2621 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"002f1030982970476d0aff339dbf3ae6da9692c85bec819c8ff642d9c34ef3f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zddgj"
May 8 00:40:34.832075 kubelet[2621]: E0508 00:40:34.831693 2621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zddgj_calico-system(4a17ca9d-8804-4e7c-a9df-ca043ad979cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zddgj_calico-system(4a17ca9d-8804-4e7c-a9df-ca043ad979cf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"002f1030982970476d0aff339dbf3ae6da9692c85bec819c8ff642d9c34ef3f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zddgj" podUID="4a17ca9d-8804-4e7c-a9df-ca043ad979cf"
May 8 00:40:34.848516 containerd[1491]: time="2025-05-08T00:40:34.848457600Z" level=error msg="Failed to destroy network for sandbox \"8c56c83ccbd99ba3d763c33352608bfdaac4bff8de56cd1e3dfcae5811061ea8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:40:34.849258 containerd[1491]: time="2025-05-08T00:40:34.849168157Z" level=error msg="encountered an error cleaning up failed sandbox \"8c56c83ccbd99ba3d763c33352608bfdaac4bff8de56cd1e3dfcae5811061ea8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:40:34.849473 containerd[1491]: time="2025-05-08T00:40:34.849288005Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-brpjg,Uid:2f1333b6-854b-41fd-b0fb-3eb10c2461e2,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"8c56c83ccbd99ba3d763c33352608bfdaac4bff8de56cd1e3dfcae5811061ea8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:40:34.849567 kubelet[2621]: E0508 00:40:34.849532 2621 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c56c83ccbd99ba3d763c33352608bfdaac4bff8de56cd1e3dfcae5811061ea8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:40:34.849605 kubelet[2621]: E0508 00:40:34.849580 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c56c83ccbd99ba3d763c33352608bfdaac4bff8de56cd1e3dfcae5811061ea8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-brpjg"
May 8 00:40:34.849916 kubelet[2621]: E0508 00:40:34.849601 2621 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c56c83ccbd99ba3d763c33352608bfdaac4bff8de56cd1e3dfcae5811061ea8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-brpjg"
May 8 00:40:34.849916 kubelet[2621]: E0508 00:40:34.849634 2621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-brpjg_kube-system(2f1333b6-854b-41fd-b0fb-3eb10c2461e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-brpjg_kube-system(2f1333b6-854b-41fd-b0fb-3eb10c2461e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c56c83ccbd99ba3d763c33352608bfdaac4bff8de56cd1e3dfcae5811061ea8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-brpjg" podUID="2f1333b6-854b-41fd-b0fb-3eb10c2461e2"
May 8 00:40:35.563451 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f93c4ce7b7dc46206854e36e8af1949ccc45d66e6436abd3879374357fb4df9a-shm.mount: Deactivated successfully.
May 8 00:40:35.563822 systemd[1]: run-netns-cni\x2d19ca5793\x2d1fdb\x2d8f9f\x2dad2f\x2d74b9c2c32ad5.mount: Deactivated successfully.
May 8 00:40:35.607567 containerd[1491]: time="2025-05-08T00:40:35.607151797Z" level=info msg="StopPodSandbox for \"b40e8130042155094025467608123b8e0399b5eae48c78f44df189d00a1dead1\""
May 8 00:40:35.607567 containerd[1491]: time="2025-05-08T00:40:35.607339055Z" level=info msg="Ensure that sandbox b40e8130042155094025467608123b8e0399b5eae48c78f44df189d00a1dead1 in task-service has been cleanup successfully"
May 8 00:40:35.607969 kubelet[2621]: I0508 00:40:35.607918 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b40e8130042155094025467608123b8e0399b5eae48c78f44df189d00a1dead1"
May 8 00:40:35.610155 systemd[1]: run-netns-cni\x2d3b036cbd\x2d75df\x2df0ac\x2d97f6\x2dc0196c6aa3e0.mount: Deactivated successfully.
May 8 00:40:35.610507 containerd[1491]: time="2025-05-08T00:40:35.610470403Z" level=info msg="TearDown network for sandbox \"b40e8130042155094025467608123b8e0399b5eae48c78f44df189d00a1dead1\" successfully"
May 8 00:40:35.610613 containerd[1491]: time="2025-05-08T00:40:35.610566942Z" level=info msg="StopPodSandbox for \"b40e8130042155094025467608123b8e0399b5eae48c78f44df189d00a1dead1\" returns successfully"
May 8 00:40:35.612850 containerd[1491]: time="2025-05-08T00:40:35.612613318Z" level=info msg="StopPodSandbox for \"05e02e9b9d7de79e63db0a7700ab920c7215fa70a6b3ee221c18c7455000091c\""
May 8 00:40:35.612850 containerd[1491]: time="2025-05-08T00:40:35.612692642Z" level=info msg="TearDown network for sandbox \"05e02e9b9d7de79e63db0a7700ab920c7215fa70a6b3ee221c18c7455000091c\" successfully"
May 8 00:40:35.612850 containerd[1491]: time="2025-05-08T00:40:35.612730124Z" level=info msg="StopPodSandbox for \"05e02e9b9d7de79e63db0a7700ab920c7215fa70a6b3ee221c18c7455000091c\" returns successfully"
May 8 00:40:35.613694 containerd[1491]: time="2025-05-08T00:40:35.613678073Z" level=info msg="StopPodSandbox for \"0752e2e4c1fc3c96cdabd0d8ccfe3fd2a830515bdf5ffa83b6e6b1cd12a2dacb\""
May 8 00:40:35.614137 containerd[1491]: time="2025-05-08T00:40:35.614121299Z" level=info msg="TearDown network for sandbox \"0752e2e4c1fc3c96cdabd0d8ccfe3fd2a830515bdf5ffa83b6e6b1cd12a2dacb\" successfully"
May 8 00:40:35.614458 containerd[1491]: time="2025-05-08T00:40:35.614444528Z" level=info msg="StopPodSandbox for \"0752e2e4c1fc3c96cdabd0d8ccfe3fd2a830515bdf5ffa83b6e6b1cd12a2dacb\" returns successfully"
May 8 00:40:35.614673 containerd[1491]: time="2025-05-08T00:40:35.614657043Z" level=info msg="StopPodSandbox for \"747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9\""
May 8 00:40:35.615300 containerd[1491]: time="2025-05-08T00:40:35.615235220Z" level=info msg="TearDown network for sandbox \"747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9\" successfully"
May 8 00:40:35.615300 containerd[1491]: time="2025-05-08T00:40:35.615249554Z" level=info msg="StopPodSandbox for \"747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9\" returns successfully"
May 8 00:40:35.615613 kubelet[2621]: I0508 00:40:35.615592 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c56c83ccbd99ba3d763c33352608bfdaac4bff8de56cd1e3dfcae5811061ea8"
May 8 00:40:35.617011 containerd[1491]: time="2025-05-08T00:40:35.616942562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8467bd5dfd-82nqn,Uid:5b22028e-782d-431d-8b78-71ebd2d0dd5e,Namespace:calico-apiserver,Attempt:4,}"
May 8 00:40:35.621549 containerd[1491]: time="2025-05-08T00:40:35.619376466Z" level=info msg="StopPodSandbox for \"8c56c83ccbd99ba3d763c33352608bfdaac4bff8de56cd1e3dfcae5811061ea8\""
May 8 00:40:35.621549 containerd[1491]: time="2025-05-08T00:40:35.619515179Z" level=info msg="Ensure that sandbox 8c56c83ccbd99ba3d763c33352608bfdaac4bff8de56cd1e3dfcae5811061ea8 in task-service has been cleanup successfully"
May 8 00:40:35.621392 systemd[1]: run-netns-cni\x2dde4afc14\x2da1fa\x2dff1a\x2d7ecc\x2d3aecff1f4324.mount: Deactivated successfully.
May 8 00:40:35.622544 containerd[1491]: time="2025-05-08T00:40:35.622526370Z" level=info msg="TearDown network for sandbox \"8c56c83ccbd99ba3d763c33352608bfdaac4bff8de56cd1e3dfcae5811061ea8\" successfully"
May 8 00:40:35.622690 containerd[1491]: time="2025-05-08T00:40:35.622628021Z" level=info msg="StopPodSandbox for \"8c56c83ccbd99ba3d763c33352608bfdaac4bff8de56cd1e3dfcae5811061ea8\" returns successfully"
May 8 00:40:35.624668 containerd[1491]: time="2025-05-08T00:40:35.624601485Z" level=info msg="StopPodSandbox for \"56c46a472ab597cec1516d84242cba4726423739869d0cb3f172bd4df3f1cd42\""
May 8 00:40:35.624737 containerd[1491]: time="2025-05-08T00:40:35.624709658Z" level=info msg="TearDown network for sandbox \"56c46a472ab597cec1516d84242cba4726423739869d0cb3f172bd4df3f1cd42\" successfully"
May 8 00:40:35.624932 containerd[1491]: time="2025-05-08T00:40:35.624809289Z" level=info msg="StopPodSandbox for \"56c46a472ab597cec1516d84242cba4726423739869d0cb3f172bd4df3f1cd42\" returns successfully"
May 8 00:40:35.626402 containerd[1491]: time="2025-05-08T00:40:35.625933332Z" level=info msg="StopPodSandbox for \"d9981dcfd683d5350f687c37eaac4eefbad0bb6391cb85325f404dbd63fd4079\""
May 8 00:40:35.626402 containerd[1491]: time="2025-05-08T00:40:35.626007034Z" level=info msg="TearDown network for sandbox \"d9981dcfd683d5350f687c37eaac4eefbad0bb6391cb85325f404dbd63fd4079\" successfully"
May 8 00:40:35.626402 containerd[1491]: time="2025-05-08T00:40:35.626016627Z" level=info msg="StopPodSandbox for \"d9981dcfd683d5350f687c37eaac4eefbad0bb6391cb85325f404dbd63fd4079\" returns successfully"
May 8 00:40:35.627598 containerd[1491]: time="2025-05-08T00:40:35.626409588Z" level=info msg="StopPodSandbox for \"f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9\""
May 8 00:40:35.627598 containerd[1491]: time="2025-05-08T00:40:35.626477959Z" level=info msg="TearDown network for sandbox \"f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9\" successfully"
May 8 00:40:35.627598 containerd[1491]: time="2025-05-08T00:40:35.626486812Z" level=info msg="StopPodSandbox for \"f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9\" returns successfully"
May 8 00:40:35.627598 containerd[1491]: time="2025-05-08T00:40:35.627558319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-brpjg,Uid:2f1333b6-854b-41fd-b0fb-3eb10c2461e2,Namespace:kube-system,Attempt:4,}"
May 8 00:40:35.627967 kubelet[2621]: E0508 00:40:35.626730 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:40:35.629198 kubelet[2621]: I0508 00:40:35.628924 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="002f1030982970476d0aff339dbf3ae6da9692c85bec819c8ff642d9c34ef3f6"
May 8 00:40:35.640600 containerd[1491]: time="2025-05-08T00:40:35.640151190Z" level=info msg="StopPodSandbox for \"002f1030982970476d0aff339dbf3ae6da9692c85bec819c8ff642d9c34ef3f6\""
May 8 00:40:35.640600 containerd[1491]: time="2025-05-08T00:40:35.640331437Z" level=info msg="Ensure that sandbox 002f1030982970476d0aff339dbf3ae6da9692c85bec819c8ff642d9c34ef3f6 in task-service has been cleanup successfully"
May 8 00:40:35.641050 containerd[1491]: time="2025-05-08T00:40:35.641015956Z" level=info msg="TearDown network for sandbox \"002f1030982970476d0aff339dbf3ae6da9692c85bec819c8ff642d9c34ef3f6\" successfully"
May 8 00:40:35.641050 containerd[1491]: time="2025-05-08T00:40:35.641039023Z" level=info msg="StopPodSandbox for \"002f1030982970476d0aff339dbf3ae6da9692c85bec819c8ff642d9c34ef3f6\" returns successfully"
May 8 00:40:35.643497 containerd[1491]: time="2025-05-08T00:40:35.643469306Z" level=info msg="StopPodSandbox for \"d43a6f010b5fd1738d7377a5a4fa7dbf6e3bb7a7c714a1599d9433c9ddd1269a\""
May 8 00:40:35.643586 containerd[1491]: time="2025-05-08T00:40:35.643552021Z" level=info msg="TearDown network for sandbox \"d43a6f010b5fd1738d7377a5a4fa7dbf6e3bb7a7c714a1599d9433c9ddd1269a\" successfully"
May 8 00:40:35.643586 containerd[1491]: time="2025-05-08T00:40:35.643566425Z" level=info msg="StopPodSandbox for \"d43a6f010b5fd1738d7377a5a4fa7dbf6e3bb7a7c714a1599d9433c9ddd1269a\" returns successfully"
May 8 00:40:35.644126 containerd[1491]: time="2025-05-08T00:40:35.644087985Z" level=info msg="StopPodSandbox for \"45f1bc4188b40d89deab8c14e66151ace4b21c03ffba5d31b3a90a268b77ec65\""
May 8 00:40:35.644182 containerd[1491]: time="2025-05-08T00:40:35.644163838Z" level=info msg="TearDown network for sandbox \"45f1bc4188b40d89deab8c14e66151ace4b21c03ffba5d31b3a90a268b77ec65\" successfully"
May 8 00:40:35.644182 containerd[1491]: time="2025-05-08T00:40:35.644178762Z" level=info msg="StopPodSandbox for \"45f1bc4188b40d89deab8c14e66151ace4b21c03ffba5d31b3a90a268b77ec65\" returns successfully"
May 8 00:40:35.645109 containerd[1491]: time="2025-05-08T00:40:35.645080509Z" level=info msg="StopPodSandbox for \"e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1\""
May 8 00:40:35.645175 containerd[1491]: time="2025-05-08T00:40:35.645153511Z" level=info msg="TearDown network for sandbox \"e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1\" successfully"
May 8 00:40:35.645175 containerd[1491]: time="2025-05-08T00:40:35.645169466Z" level=info msg="StopPodSandbox for \"e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1\" returns successfully"
May 8 00:40:35.645929 containerd[1491]: time="2025-05-08T00:40:35.645891596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zddgj,Uid:4a17ca9d-8804-4e7c-a9df-ca043ad979cf,Namespace:calico-system,Attempt:4,}"
May 8 00:40:35.646531 kubelet[2621]: I0508 00:40:35.646505 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fad8fc95c1f7bc5173d78cef14bae77a709fb3f7538a4f8bd917d24cd992f77"
May 8 00:40:35.649978 containerd[1491]: time="2025-05-08T00:40:35.649913407Z" level=info msg="StopPodSandbox for \"2fad8fc95c1f7bc5173d78cef14bae77a709fb3f7538a4f8bd917d24cd992f77\""
May 8 00:40:35.650151 containerd[1491]: time="2025-05-08T00:40:35.650122560Z" level=info msg="Ensure that sandbox 2fad8fc95c1f7bc5173d78cef14bae77a709fb3f7538a4f8bd917d24cd992f77 in task-service has been cleanup successfully"
May 8 00:40:35.650725 containerd[1491]: time="2025-05-08T00:40:35.650678481Z" level=info msg="TearDown network for sandbox \"2fad8fc95c1f7bc5173d78cef14bae77a709fb3f7538a4f8bd917d24cd992f77\" successfully"
May 8 00:40:35.650725 containerd[1491]: time="2025-05-08T00:40:35.650702989Z" level=info msg="StopPodSandbox for \"2fad8fc95c1f7bc5173d78cef14bae77a709fb3f7538a4f8bd917d24cd992f77\" returns successfully"
May 8 00:40:35.651399 containerd[1491]: time="2025-05-08T00:40:35.651311004Z" level=info msg="StopPodSandbox for \"58ec37b751dbdc97a55651befa24f73c0fd968b55015f3df068e12e2aaaeb962\""
May 8 00:40:35.651399 containerd[1491]: time="2025-05-08T00:40:35.651383997Z" level=info msg="TearDown network for sandbox \"58ec37b751dbdc97a55651befa24f73c0fd968b55015f3df068e12e2aaaeb962\" successfully"
May 8 00:40:35.651399 containerd[1491]: time="2025-05-08T00:40:35.651394570Z" level=info msg="StopPodSandbox for \"58ec37b751dbdc97a55651befa24f73c0fd968b55015f3df068e12e2aaaeb962\" returns successfully"
May 8 00:40:35.653073 containerd[1491]: time="2025-05-08T00:40:35.653016316Z" level=info msg="StopPodSandbox for \"fa7065da0118e04f1a126e4be64179d0e738f9e757d1afb2770b67e71279329b\""
May 8 00:40:35.653120 containerd[1491]: time="2025-05-08T00:40:35.653088238Z" level=info msg="TearDown network for sandbox \"fa7065da0118e04f1a126e4be64179d0e738f9e757d1afb2770b67e71279329b\" successfully"
May 8 00:40:35.653120 containerd[1491]: time="2025-05-08T00:40:35.653098191Z" level=info msg="StopPodSandbox for \"fa7065da0118e04f1a126e4be64179d0e738f9e757d1afb2770b67e71279329b\" returns successfully"
May 8 00:40:35.656744 containerd[1491]: time="2025-05-08T00:40:35.656522309Z" level=info msg="StopPodSandbox for \"ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b\""
May 8 00:40:35.656744 containerd[1491]: time="2025-05-08T00:40:35.656593210Z" level=info msg="TearDown network for sandbox \"ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b\" successfully"
May 8 00:40:35.656744 containerd[1491]: time="2025-05-08T00:40:35.656603384Z" level=info msg="StopPodSandbox for \"ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b\" returns successfully"
May 8 00:40:35.661939 containerd[1491]: time="2025-05-08T00:40:35.661912767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8467bd5dfd-5dthz,Uid:f9d2fedb-163e-44bf-9fb8-248856ed9ef7,Namespace:calico-apiserver,Attempt:4,}"
May 8 00:40:35.666281 kubelet[2621]: I0508 00:40:35.666248 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f93c4ce7b7dc46206854e36e8af1949ccc45d66e6436abd3879374357fb4df9a"
May 8 00:40:35.669703 containerd[1491]: time="2025-05-08T00:40:35.669638130Z" level=info msg="StopPodSandbox for \"f93c4ce7b7dc46206854e36e8af1949ccc45d66e6436abd3879374357fb4df9a\""
May 8 00:40:35.672979 containerd[1491]: time="2025-05-08T00:40:35.672815642Z" level=info msg="Ensure that sandbox f93c4ce7b7dc46206854e36e8af1949ccc45d66e6436abd3879374357fb4df9a in task-service has been cleanup successfully"
May 8 00:40:35.675788 containerd[1491]: time="2025-05-08T00:40:35.675678127Z" level=info msg="TearDown network for sandbox \"f93c4ce7b7dc46206854e36e8af1949ccc45d66e6436abd3879374357fb4df9a\" successfully"
May 8 00:40:35.675887 containerd[1491]: time="2025-05-08T00:40:35.675872067Z" level=info msg="StopPodSandbox for \"f93c4ce7b7dc46206854e36e8af1949ccc45d66e6436abd3879374357fb4df9a\" returns successfully"
May 8 00:40:35.678194 containerd[1491]: time="2025-05-08T00:40:35.678175141Z" level=info msg="StopPodSandbox for \"f067a880c08339961367a5e5e32bc68183182676f19b3aa24c28dfd0dbec1cf3\""
May 8 00:40:35.678437 containerd[1491]: time="2025-05-08T00:40:35.678367361Z" level=info msg="TearDown network for sandbox \"f067a880c08339961367a5e5e32bc68183182676f19b3aa24c28dfd0dbec1cf3\" successfully"
May 8 00:40:35.678437 containerd[1491]: time="2025-05-08T00:40:35.678383145Z" level=info msg="StopPodSandbox for \"f067a880c08339961367a5e5e32bc68183182676f19b3aa24c28dfd0dbec1cf3\" returns successfully"
May 8 00:40:35.680826 containerd[1491]: time="2025-05-08T00:40:35.680795713Z" level=info msg="StopPodSandbox for \"a7b1a151b4a8f7dfb55c8eaea6094f30859a315a82c65c4b754dc7a59d5ae806\""
May 8 00:40:35.681126 containerd[1491]: time="2025-05-08T00:40:35.680880129Z" level=info msg="TearDown network for sandbox \"a7b1a151b4a8f7dfb55c8eaea6094f30859a315a82c65c4b754dc7a59d5ae806\" successfully"
May 8 00:40:35.681126 containerd[1491]: time="2025-05-08T00:40:35.680897104Z" level=info msg="StopPodSandbox for \"a7b1a151b4a8f7dfb55c8eaea6094f30859a315a82c65c4b754dc7a59d5ae806\" returns successfully"
May 8 00:40:35.682681 containerd[1491]: time="2025-05-08T00:40:35.682648750Z" level=info msg="StopPodSandbox for \"465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5\""
May 8 00:40:35.683319 containerd[1491]: time="2025-05-08T00:40:35.683300499Z" level=info msg="TearDown network for sandbox \"465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5\" successfully"
May 8 00:40:35.683482 containerd[1491]: time="2025-05-08T00:40:35.683371661Z" level=info msg="StopPodSandbox for \"465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5\" returns successfully"
May 8 00:40:35.683992 kubelet[2621]: I0508 00:40:35.683975 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="358f015ad9968aec1b9e60bf388cb25b606e34fb2f1fe940ad40da30d1eadde1"
May 8 00:40:35.689796 containerd[1491]: time="2025-05-08T00:40:35.689423902Z" level=info msg="StopPodSandbox for \"358f015ad9968aec1b9e60bf388cb25b606e34fb2f1fe940ad40da30d1eadde1\""
May 8 00:40:35.689796 containerd[1491]: time="2025-05-08T00:40:35.689572767Z" level=info msg="Ensure that sandbox 358f015ad9968aec1b9e60bf388cb25b606e34fb2f1fe940ad40da30d1eadde1 in task-service has been cleanup successfully"
May 8 00:40:35.691841 containerd[1491]: time="2025-05-08T00:40:35.690547026Z" level=info msg="TearDown network for sandbox \"358f015ad9968aec1b9e60bf388cb25b606e34fb2f1fe940ad40da30d1eadde1\" successfully"
May 8 00:40:35.691841 containerd[1491]: time="2025-05-08T00:40:35.690562861Z" level=info msg="StopPodSandbox for \"358f015ad9968aec1b9e60bf388cb25b606e34fb2f1fe940ad40da30d1eadde1\" returns successfully"
May 8 00:40:35.692299 containerd[1491]: time="2025-05-08T00:40:35.692271123Z" level=info msg="StopPodSandbox for \"a7ae4d43448e86b137c6d2d644d0c10b3be0b86b383119241e89a5f951613c56\""
May 8 00:40:35.692634 containerd[1491]: time="2025-05-08T00:40:35.692617379Z" level=info msg="TearDown network for sandbox \"a7ae4d43448e86b137c6d2d644d0c10b3be0b86b383119241e89a5f951613c56\" successfully"
May 8 00:40:35.692772 containerd[1491]: time="2025-05-08T00:40:35.692708097Z" level=info msg="StopPodSandbox for \"a7ae4d43448e86b137c6d2d644d0c10b3be0b86b383119241e89a5f951613c56\" returns successfully"
May 8 00:40:35.692772 containerd[1491]: time="2025-05-08T00:40:35.692465833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64dc6f8966-xqfsx,Uid:e722fa22-1636-4de9-a89f-f2a8e31c4ced,Namespace:calico-system,Attempt:4,}"
May 8 00:40:35.693727 containerd[1491]: time="2025-05-08T00:40:35.693596938Z" level=info msg="StopPodSandbox for \"ed8d903ce906915e4be59d2c548b1b12e2bfe2370b548a2e49f1a93e52d4d479\""
May 8 00:40:35.693727 containerd[1491]: time="2025-05-08T00:40:35.693680254Z" level=info msg="TearDown network for sandbox \"ed8d903ce906915e4be59d2c548b1b12e2bfe2370b548a2e49f1a93e52d4d479\" successfully"
May 8 00:40:35.693727 containerd[1491]: time="2025-05-08T00:40:35.693689767Z" level=info msg="StopPodSandbox for \"ed8d903ce906915e4be59d2c548b1b12e2bfe2370b548a2e49f1a93e52d4d479\" returns successfully"
May 8 00:40:35.697610 containerd[1491]: time="2025-05-08T00:40:35.697251526Z" level=info msg="StopPodSandbox for \"5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4\""
May 8 00:40:35.697610 containerd[1491]: time="2025-05-08T00:40:35.697351097Z" level=info msg="TearDown network for sandbox \"5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4\" successfully"
May 8 00:40:35.697610 containerd[1491]: time="2025-05-08T00:40:35.697361970Z" level=info msg="StopPodSandbox for \"5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4\" returns successfully"
May 8 00:40:35.697733 kubelet[2621]: E0508 00:40:35.697691 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:40:35.701245 containerd[1491]: time="2025-05-08T00:40:35.701225502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k4bg9,Uid:100ec8f8-03d9-440e-8703-cce80bc589fd,Namespace:kube-system,Attempt:4,}"
May 8 00:40:35.818455 containerd[1491]: time="2025-05-08T00:40:35.818244044Z" level=error msg="Failed to destroy network for sandbox \"97777e45cbf40431556d89cec96a9a0ee1338d7dfab7982da81d00a42b4a222c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:40:35.824617 containerd[1491]: time="2025-05-08T00:40:35.823710616Z" level=error msg="encountered an error cleaning up failed sandbox \"97777e45cbf40431556d89cec96a9a0ee1338d7dfab7982da81d00a42b4a222c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:40:35.824617 containerd[1491]: time="2025-05-08T00:40:35.823784469Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8467bd5dfd-82nqn,Uid:5b22028e-782d-431d-8b78-71ebd2d0dd5e,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"97777e45cbf40431556d89cec96a9a0ee1338d7dfab7982da81d00a42b4a222c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:40:35.826672 kubelet[2621]: E0508 00:40:35.826626 2621 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97777e45cbf40431556d89cec96a9a0ee1338d7dfab7982da81d00a42b4a222c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:40:35.826747 kubelet[2621]: E0508 00:40:35.826692 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97777e45cbf40431556d89cec96a9a0ee1338d7dfab7982da81d00a42b4a222c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8467bd5dfd-82nqn"
May 8 00:40:35.826747 kubelet[2621]: E0508 00:40:35.826715 2621 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97777e45cbf40431556d89cec96a9a0ee1338d7dfab7982da81d00a42b4a222c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8467bd5dfd-82nqn"
May 8 00:40:35.826834 kubelet[2621]: E0508 00:40:35.826797 2621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8467bd5dfd-82nqn_calico-apiserver(5b22028e-782d-431d-8b78-71ebd2d0dd5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8467bd5dfd-82nqn_calico-apiserver(5b22028e-782d-431d-8b78-71ebd2d0dd5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97777e45cbf40431556d89cec96a9a0ee1338d7dfab7982da81d00a42b4a222c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8467bd5dfd-82nqn" podUID="5b22028e-782d-431d-8b78-71ebd2d0dd5e"
May 8 00:40:35.868529 containerd[1491]: time="2025-05-08T00:40:35.868484422Z" level=error msg="Failed to destroy network for sandbox \"dd02525965a1e7c0bd8707c4b22dc7f5aa2c392b2e078313642bd058e236b055\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:40:35.869779 containerd[1491]: time="2025-05-08T00:40:35.869725971Z" level=error msg="encountered an error cleaning up failed sandbox \"dd02525965a1e7c0bd8707c4b22dc7f5aa2c392b2e078313642bd058e236b055\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:40:35.870103 containerd[1491]: time="2025-05-08T00:40:35.870067745Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-brpjg,Uid:2f1333b6-854b-41fd-b0fb-3eb10c2461e2,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"dd02525965a1e7c0bd8707c4b22dc7f5aa2c392b2e078313642bd058e236b055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:40:35.870704 kubelet[2621]: E0508 00:40:35.870652 2621 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd02525965a1e7c0bd8707c4b22dc7f5aa2c392b2e078313642bd058e236b055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:40:35.870773 kubelet[2621]: E0508 00:40:35.870716 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd02525965a1e7c0bd8707c4b22dc7f5aa2c392b2e078313642bd058e236b055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-brpjg"
May 8 00:40:35.870773 kubelet[2621]: E0508 00:40:35.870735 2621 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd02525965a1e7c0bd8707c4b22dc7f5aa2c392b2e078313642bd058e236b055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-brpjg"
May 8 00:40:35.870860 kubelet[2621]: E0508 00:40:35.870827 2621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-brpjg_kube-system(2f1333b6-854b-41fd-b0fb-3eb10c2461e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-brpjg_kube-system(2f1333b6-854b-41fd-b0fb-3eb10c2461e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd02525965a1e7c0bd8707c4b22dc7f5aa2c392b2e078313642bd058e236b055\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-brpjg" podUID="2f1333b6-854b-41fd-b0fb-3eb10c2461e2"
May 8 00:40:35.908827 containerd[1491]: time="2025-05-08T00:40:35.908773005Z" level=error msg="Failed to destroy network for sandbox \"0ecc4017d2a363e740725dee38eb28d76e7510f795c3602bcd1bfd4130b7522f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:40:35.909121 containerd[1491]: time="2025-05-08T00:40:35.909087151Z" level=error msg="encountered an error cleaning up failed sandbox \"0ecc4017d2a363e740725dee38eb28d76e7510f795c3602bcd1bfd4130b7522f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:40:35.909159 containerd[1491]: time="2025-05-08T00:40:35.909143478Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k4bg9,Uid:100ec8f8-03d9-440e-8703-cce80bc589fd,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"0ecc4017d2a363e740725dee38eb28d76e7510f795c3602bcd1bfd4130b7522f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:40:35.909388 kubelet[2621]: E0508 00:40:35.909342 2621 log.go:32] "RunPodSandbox from runtime
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ecc4017d2a363e740725dee38eb28d76e7510f795c3602bcd1bfd4130b7522f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:35.909497 kubelet[2621]: E0508 00:40:35.909394 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ecc4017d2a363e740725dee38eb28d76e7510f795c3602bcd1bfd4130b7522f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k4bg9" May 8 00:40:35.909529 kubelet[2621]: E0508 00:40:35.909499 2621 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ecc4017d2a363e740725dee38eb28d76e7510f795c3602bcd1bfd4130b7522f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k4bg9" May 8 00:40:35.909712 kubelet[2621]: E0508 00:40:35.909671 2621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-k4bg9_kube-system(100ec8f8-03d9-440e-8703-cce80bc589fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-k4bg9_kube-system(100ec8f8-03d9-440e-8703-cce80bc589fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ecc4017d2a363e740725dee38eb28d76e7510f795c3602bcd1bfd4130b7522f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-k4bg9" podUID="100ec8f8-03d9-440e-8703-cce80bc589fd" May 8 00:40:35.921495 containerd[1491]: time="2025-05-08T00:40:35.921394695Z" level=error msg="Failed to destroy network for sandbox \"cf161ceeba41ad0dd57f40f146a84c09fbb7afcb80b38d4c7e5cc2f27c797913\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:35.921789 containerd[1491]: time="2025-05-08T00:40:35.921733209Z" level=error msg="encountered an error cleaning up failed sandbox \"cf161ceeba41ad0dd57f40f146a84c09fbb7afcb80b38d4c7e5cc2f27c797913\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:35.921913 containerd[1491]: time="2025-05-08T00:40:35.921881994Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64dc6f8966-xqfsx,Uid:e722fa22-1636-4de9-a89f-f2a8e31c4ced,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"cf161ceeba41ad0dd57f40f146a84c09fbb7afcb80b38d4c7e5cc2f27c797913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:35.922507 kubelet[2621]: E0508 00:40:35.922072 2621 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf161ceeba41ad0dd57f40f146a84c09fbb7afcb80b38d4c7e5cc2f27c797913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:35.922507 kubelet[2621]: E0508 
00:40:35.922106 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf161ceeba41ad0dd57f40f146a84c09fbb7afcb80b38d4c7e5cc2f27c797913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64dc6f8966-xqfsx" May 8 00:40:35.922507 kubelet[2621]: E0508 00:40:35.922123 2621 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf161ceeba41ad0dd57f40f146a84c09fbb7afcb80b38d4c7e5cc2f27c797913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64dc6f8966-xqfsx" May 8 00:40:35.922606 kubelet[2621]: E0508 00:40:35.922154 2621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64dc6f8966-xqfsx_calico-system(e722fa22-1636-4de9-a89f-f2a8e31c4ced)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64dc6f8966-xqfsx_calico-system(e722fa22-1636-4de9-a89f-f2a8e31c4ced)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf161ceeba41ad0dd57f40f146a84c09fbb7afcb80b38d4c7e5cc2f27c797913\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64dc6f8966-xqfsx" podUID="e722fa22-1636-4de9-a89f-f2a8e31c4ced" May 8 00:40:35.930815 containerd[1491]: time="2025-05-08T00:40:35.930780256Z" level=error msg="Failed to destroy network for sandbox 
\"f35dcf509b4c448d622d2924dcdf2133b7190231fa725da0eea9f599049520d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:35.931115 containerd[1491]: time="2025-05-08T00:40:35.931078807Z" level=error msg="encountered an error cleaning up failed sandbox \"f35dcf509b4c448d622d2924dcdf2133b7190231fa725da0eea9f599049520d7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:35.931183 containerd[1491]: time="2025-05-08T00:40:35.931123521Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8467bd5dfd-5dthz,Uid:f9d2fedb-163e-44bf-9fb8-248856ed9ef7,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"f35dcf509b4c448d622d2924dcdf2133b7190231fa725da0eea9f599049520d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:35.932092 kubelet[2621]: E0508 00:40:35.931229 2621 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f35dcf509b4c448d622d2924dcdf2133b7190231fa725da0eea9f599049520d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:35.932092 kubelet[2621]: E0508 00:40:35.931262 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f35dcf509b4c448d622d2924dcdf2133b7190231fa725da0eea9f599049520d7\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8467bd5dfd-5dthz" May 8 00:40:35.932092 kubelet[2621]: E0508 00:40:35.931278 2621 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f35dcf509b4c448d622d2924dcdf2133b7190231fa725da0eea9f599049520d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8467bd5dfd-5dthz" May 8 00:40:35.932175 kubelet[2621]: E0508 00:40:35.931305 2621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8467bd5dfd-5dthz_calico-apiserver(f9d2fedb-163e-44bf-9fb8-248856ed9ef7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8467bd5dfd-5dthz_calico-apiserver(f9d2fedb-163e-44bf-9fb8-248856ed9ef7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f35dcf509b4c448d622d2924dcdf2133b7190231fa725da0eea9f599049520d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8467bd5dfd-5dthz" podUID="f9d2fedb-163e-44bf-9fb8-248856ed9ef7" May 8 00:40:35.936545 containerd[1491]: time="2025-05-08T00:40:35.936444588Z" level=error msg="Failed to destroy network for sandbox \"d693bf4de41fc7c31dff976fede29b497274b1d20374fa4b7732b73a692fa7be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:35.936851 containerd[1491]: 
time="2025-05-08T00:40:35.936828096Z" level=error msg="encountered an error cleaning up failed sandbox \"d693bf4de41fc7c31dff976fede29b497274b1d20374fa4b7732b73a692fa7be\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:35.937161 containerd[1491]: time="2025-05-08T00:40:35.937142512Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zddgj,Uid:4a17ca9d-8804-4e7c-a9df-ca043ad979cf,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"d693bf4de41fc7c31dff976fede29b497274b1d20374fa4b7732b73a692fa7be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:35.937779 kubelet[2621]: E0508 00:40:35.937384 2621 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d693bf4de41fc7c31dff976fede29b497274b1d20374fa4b7732b73a692fa7be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:35.937779 kubelet[2621]: E0508 00:40:35.937447 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d693bf4de41fc7c31dff976fede29b497274b1d20374fa4b7732b73a692fa7be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zddgj" May 8 00:40:35.937779 kubelet[2621]: E0508 00:40:35.937468 2621 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d693bf4de41fc7c31dff976fede29b497274b1d20374fa4b7732b73a692fa7be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zddgj" May 8 00:40:35.937872 kubelet[2621]: E0508 00:40:35.937514 2621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zddgj_calico-system(4a17ca9d-8804-4e7c-a9df-ca043ad979cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zddgj_calico-system(4a17ca9d-8804-4e7c-a9df-ca043ad979cf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d693bf4de41fc7c31dff976fede29b497274b1d20374fa4b7732b73a692fa7be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zddgj" podUID="4a17ca9d-8804-4e7c-a9df-ca043ad979cf" May 8 00:40:36.139162 containerd[1491]: time="2025-05-08T00:40:36.138423871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:36.139738 containerd[1491]: time="2025-05-08T00:40:36.139660112Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 8 00:40:36.140278 containerd[1491]: time="2025-05-08T00:40:36.140219266Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:36.141707 containerd[1491]: time="2025-05-08T00:40:36.141685404Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:36.142452 containerd[1491]: time="2025-05-08T00:40:36.142305205Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 4.676005014s" May 8 00:40:36.142452 containerd[1491]: time="2025-05-08T00:40:36.142333203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 8 00:40:36.151013 containerd[1491]: time="2025-05-08T00:40:36.150979941Z" level=info msg="CreateContainer within sandbox \"2c9b94febec625a8bb3437fd7b9547c858dce531de9ff513f3bb3ade2e2fc88e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 8 00:40:36.163233 containerd[1491]: time="2025-05-08T00:40:36.163201492Z" level=info msg="CreateContainer within sandbox \"2c9b94febec625a8bb3437fd7b9547c858dce531de9ff513f3bb3ade2e2fc88e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"41dd7d11b0386b4405a6d9ce66357a8dfe653656cfc68a3c8c082e789ff861a6\"" May 8 00:40:36.163628 containerd[1491]: time="2025-05-08T00:40:36.163598738Z" level=info msg="StartContainer for \"41dd7d11b0386b4405a6d9ce66357a8dfe653656cfc68a3c8c082e789ff861a6\"" May 8 00:40:36.192906 systemd[1]: Started cri-containerd-41dd7d11b0386b4405a6d9ce66357a8dfe653656cfc68a3c8c082e789ff861a6.scope - libcontainer container 41dd7d11b0386b4405a6d9ce66357a8dfe653656cfc68a3c8c082e789ff861a6. 
May 8 00:40:36.232232 containerd[1491]: time="2025-05-08T00:40:36.232201246Z" level=info msg="StartContainer for \"41dd7d11b0386b4405a6d9ce66357a8dfe653656cfc68a3c8c082e789ff861a6\" returns successfully" May 8 00:40:36.302897 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 8 00:40:36.302982 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 8 00:40:36.561606 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-97777e45cbf40431556d89cec96a9a0ee1338d7dfab7982da81d00a42b4a222c-shm.mount: Deactivated successfully. May 8 00:40:36.561722 systemd[1]: run-netns-cni\x2dc127705e\x2d7dca\x2d9afa\x2dffea\x2df4a334835f65.mount: Deactivated successfully. May 8 00:40:36.561828 systemd[1]: run-netns-cni\x2d6f6430f8\x2deab0\x2d9e2f\x2d99a2\x2d9b1bf6a289e0.mount: Deactivated successfully. May 8 00:40:36.561901 systemd[1]: run-netns-cni\x2d67d4c079\x2dc0d7\x2de585\x2dac4b\x2da0fa1286d91f.mount: Deactivated successfully. May 8 00:40:36.561967 systemd[1]: run-netns-cni\x2d7c832b99\x2d3070\x2d9ba3\x2dca0a\x2d7e85f16ffb01.mount: Deactivated successfully. May 8 00:40:36.562032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3876459097.mount: Deactivated successfully. 
May 8 00:40:36.687034 kubelet[2621]: I0508 00:40:36.687004 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf161ceeba41ad0dd57f40f146a84c09fbb7afcb80b38d4c7e5cc2f27c797913" May 8 00:40:36.688169 containerd[1491]: time="2025-05-08T00:40:36.687807571Z" level=info msg="StopPodSandbox for \"cf161ceeba41ad0dd57f40f146a84c09fbb7afcb80b38d4c7e5cc2f27c797913\"" May 8 00:40:36.688654 containerd[1491]: time="2025-05-08T00:40:36.688629361Z" level=info msg="Ensure that sandbox cf161ceeba41ad0dd57f40f146a84c09fbb7afcb80b38d4c7e5cc2f27c797913 in task-service has been cleanup successfully" May 8 00:40:36.690681 containerd[1491]: time="2025-05-08T00:40:36.690566497Z" level=info msg="TearDown network for sandbox \"cf161ceeba41ad0dd57f40f146a84c09fbb7afcb80b38d4c7e5cc2f27c797913\" successfully" May 8 00:40:36.690681 containerd[1491]: time="2025-05-08T00:40:36.690591685Z" level=info msg="StopPodSandbox for \"cf161ceeba41ad0dd57f40f146a84c09fbb7afcb80b38d4c7e5cc2f27c797913\" returns successfully" May 8 00:40:36.690916 containerd[1491]: time="2025-05-08T00:40:36.690891912Z" level=info msg="StopPodSandbox for \"f93c4ce7b7dc46206854e36e8af1949ccc45d66e6436abd3879374357fb4df9a\"" May 8 00:40:36.690985 containerd[1491]: time="2025-05-08T00:40:36.690966504Z" level=info msg="TearDown network for sandbox \"f93c4ce7b7dc46206854e36e8af1949ccc45d66e6436abd3879374357fb4df9a\" successfully" May 8 00:40:36.691007 containerd[1491]: time="2025-05-08T00:40:36.690987300Z" level=info msg="StopPodSandbox for \"f93c4ce7b7dc46206854e36e8af1949ccc45d66e6436abd3879374357fb4df9a\" returns successfully" May 8 00:40:36.692284 containerd[1491]: time="2025-05-08T00:40:36.692265554Z" level=info msg="StopPodSandbox for \"f067a880c08339961367a5e5e32bc68183182676f19b3aa24c28dfd0dbec1cf3\"" May 8 00:40:36.692365 containerd[1491]: time="2025-05-08T00:40:36.692332954Z" level=info msg="TearDown network for sandbox 
\"f067a880c08339961367a5e5e32bc68183182676f19b3aa24c28dfd0dbec1cf3\" successfully" May 8 00:40:36.692365 containerd[1491]: time="2025-05-08T00:40:36.692349068Z" level=info msg="StopPodSandbox for \"f067a880c08339961367a5e5e32bc68183182676f19b3aa24c28dfd0dbec1cf3\" returns successfully" May 8 00:40:36.692991 containerd[1491]: time="2025-05-08T00:40:36.692887726Z" level=info msg="StopPodSandbox for \"a7b1a151b4a8f7dfb55c8eaea6094f30859a315a82c65c4b754dc7a59d5ae806\"" May 8 00:40:36.693500 containerd[1491]: time="2025-05-08T00:40:36.693254584Z" level=info msg="TearDown network for sandbox \"a7b1a151b4a8f7dfb55c8eaea6094f30859a315a82c65c4b754dc7a59d5ae806\" successfully" May 8 00:40:36.693500 containerd[1491]: time="2025-05-08T00:40:36.693297936Z" level=info msg="StopPodSandbox for \"a7b1a151b4a8f7dfb55c8eaea6094f30859a315a82c65c4b754dc7a59d5ae806\" returns successfully" May 8 00:40:36.693874 systemd[1]: run-netns-cni\x2d7c5dd289\x2d1b03\x2dd4cb\x2d13c0\x2d4b5987be0db5.mount: Deactivated successfully. 
May 8 00:40:36.694815 containerd[1491]: time="2025-05-08T00:40:36.693999091Z" level=info msg="StopPodSandbox for \"465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5\"" May 8 00:40:36.694815 containerd[1491]: time="2025-05-08T00:40:36.694062699Z" level=info msg="TearDown network for sandbox \"465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5\" successfully" May 8 00:40:36.694815 containerd[1491]: time="2025-05-08T00:40:36.694071772Z" level=info msg="StopPodSandbox for \"465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5\" returns successfully" May 8 00:40:36.695783 kubelet[2621]: I0508 00:40:36.695740 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ecc4017d2a363e740725dee38eb28d76e7510f795c3602bcd1bfd4130b7522f" May 8 00:40:36.696438 containerd[1491]: time="2025-05-08T00:40:36.696308626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64dc6f8966-xqfsx,Uid:e722fa22-1636-4de9-a89f-f2a8e31c4ced,Namespace:calico-system,Attempt:5,}" May 8 00:40:36.696802 containerd[1491]: time="2025-05-08T00:40:36.696694588Z" level=info msg="StopPodSandbox for \"0ecc4017d2a363e740725dee38eb28d76e7510f795c3602bcd1bfd4130b7522f\"" May 8 00:40:36.697079 containerd[1491]: time="2025-05-08T00:40:36.697007479Z" level=info msg="Ensure that sandbox 0ecc4017d2a363e740725dee38eb28d76e7510f795c3602bcd1bfd4130b7522f in task-service has been cleanup successfully" May 8 00:40:36.698821 containerd[1491]: time="2025-05-08T00:40:36.697173918Z" level=info msg="TearDown network for sandbox \"0ecc4017d2a363e740725dee38eb28d76e7510f795c3602bcd1bfd4130b7522f\" successfully" May 8 00:40:36.698821 containerd[1491]: time="2025-05-08T00:40:36.697185851Z" level=info msg="StopPodSandbox for \"0ecc4017d2a363e740725dee38eb28d76e7510f795c3602bcd1bfd4130b7522f\" returns successfully" May 8 00:40:36.699226 containerd[1491]: time="2025-05-08T00:40:36.699167201Z" level=info msg="StopPodSandbox for 
\"358f015ad9968aec1b9e60bf388cb25b606e34fb2f1fe940ad40da30d1eadde1\"" May 8 00:40:36.699261 containerd[1491]: time="2025-05-08T00:40:36.699238642Z" level=info msg="TearDown network for sandbox \"358f015ad9968aec1b9e60bf388cb25b606e34fb2f1fe940ad40da30d1eadde1\" successfully" May 8 00:40:36.699261 containerd[1491]: time="2025-05-08T00:40:36.699247815Z" level=info msg="StopPodSandbox for \"358f015ad9968aec1b9e60bf388cb25b606e34fb2f1fe940ad40da30d1eadde1\" returns successfully" May 8 00:40:36.699995 containerd[1491]: time="2025-05-08T00:40:36.699882580Z" level=info msg="StopPodSandbox for \"a7ae4d43448e86b137c6d2d644d0c10b3be0b86b383119241e89a5f951613c56\"" May 8 00:40:36.700878 containerd[1491]: time="2025-05-08T00:40:36.700142746Z" level=info msg="TearDown network for sandbox \"a7ae4d43448e86b137c6d2d644d0c10b3be0b86b383119241e89a5f951613c56\" successfully" May 8 00:40:36.700878 containerd[1491]: time="2025-05-08T00:40:36.700163092Z" level=info msg="StopPodSandbox for \"a7ae4d43448e86b137c6d2d644d0c10b3be0b86b383119241e89a5f951613c56\" returns successfully" May 8 00:40:36.700167 systemd[1]: run-netns-cni\x2dd71ccbf6\x2d24eb\x2d7a0a\x2d48e4\x2d1edfa4cfb7e8.mount: Deactivated successfully. 
May 8 00:40:36.702509 containerd[1491]: time="2025-05-08T00:40:36.702306098Z" level=info msg="StopPodSandbox for \"ed8d903ce906915e4be59d2c548b1b12e2bfe2370b548a2e49f1a93e52d4d479\"" May 8 00:40:36.702509 containerd[1491]: time="2025-05-08T00:40:36.702392903Z" level=info msg="TearDown network for sandbox \"ed8d903ce906915e4be59d2c548b1b12e2bfe2370b548a2e49f1a93e52d4d479\" successfully" May 8 00:40:36.702509 containerd[1491]: time="2025-05-08T00:40:36.702402676Z" level=info msg="StopPodSandbox for \"ed8d903ce906915e4be59d2c548b1b12e2bfe2370b548a2e49f1a93e52d4d479\" returns successfully" May 8 00:40:36.703663 containerd[1491]: time="2025-05-08T00:40:36.703589724Z" level=info msg="StopPodSandbox for \"5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4\"" May 8 00:40:36.704289 containerd[1491]: time="2025-05-08T00:40:36.704265771Z" level=info msg="TearDown network for sandbox \"5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4\" successfully" May 8 00:40:36.704428 containerd[1491]: time="2025-05-08T00:40:36.704408623Z" level=info msg="StopPodSandbox for \"5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4\" returns successfully" May 8 00:40:36.704743 kubelet[2621]: I0508 00:40:36.704660 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d693bf4de41fc7c31dff976fede29b497274b1d20374fa4b7732b73a692fa7be" May 8 00:40:36.705565 containerd[1491]: time="2025-05-08T00:40:36.705424770Z" level=info msg="StopPodSandbox for \"d693bf4de41fc7c31dff976fede29b497274b1d20374fa4b7732b73a692fa7be\"" May 8 00:40:36.705611 kubelet[2621]: E0508 00:40:36.705515 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:36.705864 containerd[1491]: time="2025-05-08T00:40:36.705819475Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-k4bg9,Uid:100ec8f8-03d9-440e-8703-cce80bc589fd,Namespace:kube-system,Attempt:5,}" May 8 00:40:36.706176 containerd[1491]: time="2025-05-08T00:40:36.706030417Z" level=info msg="Ensure that sandbox d693bf4de41fc7c31dff976fede29b497274b1d20374fa4b7732b73a692fa7be in task-service has been cleanup successfully" May 8 00:40:36.708311 systemd[1]: run-netns-cni\x2d75001399\x2deae3\x2d65cd\x2d0948\x2ddbb411b80bfe.mount: Deactivated successfully. May 8 00:40:36.710164 containerd[1491]: time="2025-05-08T00:40:36.709087940Z" level=info msg="TearDown network for sandbox \"d693bf4de41fc7c31dff976fede29b497274b1d20374fa4b7732b73a692fa7be\" successfully" May 8 00:40:36.710164 containerd[1491]: time="2025-05-08T00:40:36.709132543Z" level=info msg="StopPodSandbox for \"d693bf4de41fc7c31dff976fede29b497274b1d20374fa4b7732b73a692fa7be\" returns successfully" May 8 00:40:36.710533 containerd[1491]: time="2025-05-08T00:40:36.710203176Z" level=info msg="StopPodSandbox for \"002f1030982970476d0aff339dbf3ae6da9692c85bec819c8ff642d9c34ef3f6\"" May 8 00:40:36.711229 containerd[1491]: time="2025-05-08T00:40:36.710966279Z" level=info msg="TearDown network for sandbox \"002f1030982970476d0aff339dbf3ae6da9692c85bec819c8ff642d9c34ef3f6\" successfully" May 8 00:40:36.711229 containerd[1491]: time="2025-05-08T00:40:36.711076461Z" level=info msg="StopPodSandbox for \"002f1030982970476d0aff339dbf3ae6da9692c85bec819c8ff642d9c34ef3f6\" returns successfully" May 8 00:40:36.711610 containerd[1491]: time="2025-05-08T00:40:36.711534976Z" level=info msg="StopPodSandbox for \"d43a6f010b5fd1738d7377a5a4fa7dbf6e3bb7a7c714a1599d9433c9ddd1269a\"" May 8 00:40:36.711746 containerd[1491]: time="2025-05-08T00:40:36.711682028Z" level=info msg="TearDown network for sandbox \"d43a6f010b5fd1738d7377a5a4fa7dbf6e3bb7a7c714a1599d9433c9ddd1269a\" successfully" May 8 00:40:36.711746 containerd[1491]: time="2025-05-08T00:40:36.711697063Z" level=info msg="StopPodSandbox for 
\"d43a6f010b5fd1738d7377a5a4fa7dbf6e3bb7a7c714a1599d9433c9ddd1269a\" returns successfully" May 8 00:40:36.713991 containerd[1491]: time="2025-05-08T00:40:36.713952812Z" level=info msg="StopPodSandbox for \"45f1bc4188b40d89deab8c14e66151ace4b21c03ffba5d31b3a90a268b77ec65\"" May 8 00:40:36.714072 containerd[1491]: time="2025-05-08T00:40:36.714037977Z" level=info msg="TearDown network for sandbox \"45f1bc4188b40d89deab8c14e66151ace4b21c03ffba5d31b3a90a268b77ec65\" successfully" May 8 00:40:36.714072 containerd[1491]: time="2025-05-08T00:40:36.714067325Z" level=info msg="StopPodSandbox for \"45f1bc4188b40d89deab8c14e66151ace4b21c03ffba5d31b3a90a268b77ec65\" returns successfully" May 8 00:40:36.716448 containerd[1491]: time="2025-05-08T00:40:36.716318203Z" level=info msg="StopPodSandbox for \"e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1\"" May 8 00:40:36.716448 containerd[1491]: time="2025-05-08T00:40:36.716404308Z" level=info msg="TearDown network for sandbox \"e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1\" successfully" May 8 00:40:36.716448 containerd[1491]: time="2025-05-08T00:40:36.716414871Z" level=info msg="StopPodSandbox for \"e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1\" returns successfully" May 8 00:40:36.720586 containerd[1491]: time="2025-05-08T00:40:36.720454202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zddgj,Uid:4a17ca9d-8804-4e7c-a9df-ca043ad979cf,Namespace:calico-system,Attempt:5,}" May 8 00:40:36.723730 kubelet[2621]: E0508 00:40:36.723395 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:36.730573 kubelet[2621]: I0508 00:40:36.730555 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f35dcf509b4c448d622d2924dcdf2133b7190231fa725da0eea9f599049520d7" May 8 
00:40:36.731402 containerd[1491]: time="2025-05-08T00:40:36.731378445Z" level=info msg="StopPodSandbox for \"f35dcf509b4c448d622d2924dcdf2133b7190231fa725da0eea9f599049520d7\"" May 8 00:40:36.731612 containerd[1491]: time="2025-05-08T00:40:36.731595668Z" level=info msg="Ensure that sandbox f35dcf509b4c448d622d2924dcdf2133b7190231fa725da0eea9f599049520d7 in task-service has been cleanup successfully" May 8 00:40:36.732403 containerd[1491]: time="2025-05-08T00:40:36.732374025Z" level=info msg="TearDown network for sandbox \"f35dcf509b4c448d622d2924dcdf2133b7190231fa725da0eea9f599049520d7\" successfully" May 8 00:40:36.732739 containerd[1491]: time="2025-05-08T00:40:36.732722987Z" level=info msg="StopPodSandbox for \"f35dcf509b4c448d622d2924dcdf2133b7190231fa725da0eea9f599049520d7\" returns successfully" May 8 00:40:36.733410 containerd[1491]: time="2025-05-08T00:40:36.733379109Z" level=info msg="StopPodSandbox for \"2fad8fc95c1f7bc5173d78cef14bae77a709fb3f7538a4f8bd917d24cd992f77\"" May 8 00:40:36.734733 containerd[1491]: time="2025-05-08T00:40:36.734714089Z" level=info msg="TearDown network for sandbox \"2fad8fc95c1f7bc5173d78cef14bae77a709fb3f7538a4f8bd917d24cd992f77\" successfully" May 8 00:40:36.735851 containerd[1491]: time="2025-05-08T00:40:36.735834557Z" level=info msg="StopPodSandbox for \"2fad8fc95c1f7bc5173d78cef14bae77a709fb3f7538a4f8bd917d24cd992f77\" returns successfully" May 8 00:40:36.736145 containerd[1491]: time="2025-05-08T00:40:36.736128943Z" level=info msg="StopPodSandbox for \"58ec37b751dbdc97a55651befa24f73c0fd968b55015f3df068e12e2aaaeb962\"" May 8 00:40:36.736288 containerd[1491]: time="2025-05-08T00:40:36.736273725Z" level=info msg="TearDown network for sandbox \"58ec37b751dbdc97a55651befa24f73c0fd968b55015f3df068e12e2aaaeb962\" successfully" May 8 00:40:36.736380 containerd[1491]: time="2025-05-08T00:40:36.736367452Z" level=info msg="StopPodSandbox for \"58ec37b751dbdc97a55651befa24f73c0fd968b55015f3df068e12e2aaaeb962\" returns successfully" 
May 8 00:40:36.736951 containerd[1491]: time="2025-05-08T00:40:36.736933177Z" level=info msg="StopPodSandbox for \"fa7065da0118e04f1a126e4be64179d0e738f9e757d1afb2770b67e71279329b\""
May 8 00:40:36.737136 containerd[1491]: time="2025-05-08T00:40:36.737120422Z" level=info msg="TearDown network for sandbox \"fa7065da0118e04f1a126e4be64179d0e738f9e757d1afb2770b67e71279329b\" successfully"
May 8 00:40:36.737229 containerd[1491]: time="2025-05-08T00:40:36.737215470Z" level=info msg="StopPodSandbox for \"fa7065da0118e04f1a126e4be64179d0e738f9e757d1afb2770b67e71279329b\" returns successfully"
May 8 00:40:36.741555 kubelet[2621]: I0508 00:40:36.741538 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97777e45cbf40431556d89cec96a9a0ee1338d7dfab7982da81d00a42b4a222c"
May 8 00:40:36.741893 containerd[1491]: time="2025-05-08T00:40:36.740353477Z" level=info msg="StopPodSandbox for \"ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b\""
May 8 00:40:36.743219 kubelet[2621]: I0508 00:40:36.743168 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qxj2d" podStartSLOduration=1.69112284 podStartE2EDuration="11.743149894s" podCreationTimestamp="2025-05-08 00:40:25 +0000 UTC" firstStartedPulling="2025-05-08 00:40:26.091078935 +0000 UTC m=+11.794663439" lastFinishedPulling="2025-05-08 00:40:36.143106 +0000 UTC m=+21.846690493" observedRunningTime="2025-05-08 00:40:36.739309422 +0000 UTC m=+22.442893915" watchObservedRunningTime="2025-05-08 00:40:36.743149894 +0000 UTC m=+22.446734387"
May 8 00:40:36.744216 containerd[1491]: time="2025-05-08T00:40:36.744111896Z" level=info msg="StopPodSandbox for \"97777e45cbf40431556d89cec96a9a0ee1338d7dfab7982da81d00a42b4a222c\""
May 8 00:40:36.746152 containerd[1491]: time="2025-05-08T00:40:36.746132606Z" level=info msg="TearDown network for sandbox \"ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b\" successfully"
May 8 00:40:36.747479 containerd[1491]: time="2025-05-08T00:40:36.747460164Z" level=info msg="StopPodSandbox for \"ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b\" returns successfully"
May 8 00:40:36.747600 containerd[1491]: time="2025-05-08T00:40:36.747275821Z" level=info msg="Ensure that sandbox 97777e45cbf40431556d89cec96a9a0ee1338d7dfab7982da81d00a42b4a222c in task-service has been cleanup successfully"
May 8 00:40:36.748881 containerd[1491]: time="2025-05-08T00:40:36.748862603Z" level=info msg="TearDown network for sandbox \"97777e45cbf40431556d89cec96a9a0ee1338d7dfab7982da81d00a42b4a222c\" successfully"
May 8 00:40:36.748950 containerd[1491]: time="2025-05-08T00:40:36.748935675Z" level=info msg="StopPodSandbox for \"97777e45cbf40431556d89cec96a9a0ee1338d7dfab7982da81d00a42b4a222c\" returns successfully"
May 8 00:40:36.749910 containerd[1491]: time="2025-05-08T00:40:36.749889814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8467bd5dfd-5dthz,Uid:f9d2fedb-163e-44bf-9fb8-248856ed9ef7,Namespace:calico-apiserver,Attempt:5,}"
May 8 00:40:36.750408 containerd[1491]: time="2025-05-08T00:40:36.750273076Z" level=info msg="StopPodSandbox for \"b40e8130042155094025467608123b8e0399b5eae48c78f44df189d00a1dead1\""
May 8 00:40:36.750408 containerd[1491]: time="2025-05-08T00:40:36.750344487Z" level=info msg="TearDown network for sandbox \"b40e8130042155094025467608123b8e0399b5eae48c78f44df189d00a1dead1\" successfully"
May 8 00:40:36.750408 containerd[1491]: time="2025-05-08T00:40:36.750353780Z" level=info msg="StopPodSandbox for \"b40e8130042155094025467608123b8e0399b5eae48c78f44df189d00a1dead1\" returns successfully"
May 8 00:40:36.753089 containerd[1491]: time="2025-05-08T00:40:36.752952379Z" level=info msg="StopPodSandbox for \"05e02e9b9d7de79e63db0a7700ab920c7215fa70a6b3ee221c18c7455000091c\""
May 8 00:40:36.753587 containerd[1491]: time="2025-05-08T00:40:36.753068302Z" level=info msg="TearDown network for sandbox \"05e02e9b9d7de79e63db0a7700ab920c7215fa70a6b3ee221c18c7455000091c\" successfully"
May 8 00:40:36.754228 containerd[1491]: time="2025-05-08T00:40:36.753750412Z" level=info msg="StopPodSandbox for \"05e02e9b9d7de79e63db0a7700ab920c7215fa70a6b3ee221c18c7455000091c\" returns successfully"
May 8 00:40:36.754427 kubelet[2621]: I0508 00:40:36.754380 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd02525965a1e7c0bd8707c4b22dc7f5aa2c392b2e078313642bd058e236b055"
May 8 00:40:36.755555 containerd[1491]: time="2025-05-08T00:40:36.755226673Z" level=info msg="StopPodSandbox for \"dd02525965a1e7c0bd8707c4b22dc7f5aa2c392b2e078313642bd058e236b055\""
May 8 00:40:36.755555 containerd[1491]: time="2025-05-08T00:40:36.755361403Z" level=info msg="Ensure that sandbox dd02525965a1e7c0bd8707c4b22dc7f5aa2c392b2e078313642bd058e236b055 in task-service has been cleanup successfully"
May 8 00:40:36.755942 containerd[1491]: time="2025-05-08T00:40:36.755925138Z" level=info msg="StopPodSandbox for \"0752e2e4c1fc3c96cdabd0d8ccfe3fd2a830515bdf5ffa83b6e6b1cd12a2dacb\""
May 8 00:40:36.756195 containerd[1491]: time="2025-05-08T00:40:36.756179852Z" level=info msg="TearDown network for sandbox \"0752e2e4c1fc3c96cdabd0d8ccfe3fd2a830515bdf5ffa83b6e6b1cd12a2dacb\" successfully"
May 8 00:40:36.756316 containerd[1491]: time="2025-05-08T00:40:36.756281072Z" level=info msg="StopPodSandbox for \"0752e2e4c1fc3c96cdabd0d8ccfe3fd2a830515bdf5ffa83b6e6b1cd12a2dacb\" returns successfully"
May 8 00:40:36.757434 containerd[1491]: time="2025-05-08T00:40:36.757417254Z" level=info msg="TearDown network for sandbox \"dd02525965a1e7c0bd8707c4b22dc7f5aa2c392b2e078313642bd058e236b055\" successfully"
May 8 00:40:36.757533 containerd[1491]: time="2025-05-08T00:40:36.757518774Z" level=info msg="StopPodSandbox for \"dd02525965a1e7c0bd8707c4b22dc7f5aa2c392b2e078313642bd058e236b055\" returns successfully"
May 8 00:40:36.757749 containerd[1491]: time="2025-05-08T00:40:36.757723303Z" level=info msg="StopPodSandbox for \"747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9\""
May 8 00:40:36.758723 containerd[1491]: time="2025-05-08T00:40:36.758689585Z" level=info msg="TearDown network for sandbox \"747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9\" successfully"
May 8 00:40:36.759306 containerd[1491]: time="2025-05-08T00:40:36.759289852Z" level=info msg="StopPodSandbox for \"747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9\" returns successfully"
May 8 00:40:36.759306 containerd[1491]: time="2025-05-08T00:40:36.759201365Z" level=info msg="StopPodSandbox for \"8c56c83ccbd99ba3d763c33352608bfdaac4bff8de56cd1e3dfcae5811061ea8\""
May 8 00:40:36.760264 containerd[1491]: time="2025-05-08T00:40:36.760224254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8467bd5dfd-82nqn,Uid:5b22028e-782d-431d-8b78-71ebd2d0dd5e,Namespace:calico-apiserver,Attempt:5,}"
May 8 00:40:36.760574 containerd[1491]: time="2025-05-08T00:40:36.760505436Z" level=info msg="TearDown network for sandbox \"8c56c83ccbd99ba3d763c33352608bfdaac4bff8de56cd1e3dfcae5811061ea8\" successfully"
May 8 00:40:36.760574 containerd[1491]: time="2025-05-08T00:40:36.760519670Z" level=info msg="StopPodSandbox for \"8c56c83ccbd99ba3d763c33352608bfdaac4bff8de56cd1e3dfcae5811061ea8\" returns successfully"
May 8 00:40:36.761524 containerd[1491]: time="2025-05-08T00:40:36.761380912Z" level=info msg="StopPodSandbox for \"56c46a472ab597cec1516d84242cba4726423739869d0cb3f172bd4df3f1cd42\""
May 8 00:40:36.761524 containerd[1491]: time="2025-05-08T00:40:36.761461326Z" level=info msg="TearDown network for sandbox \"56c46a472ab597cec1516d84242cba4726423739869d0cb3f172bd4df3f1cd42\" successfully"
May 8 00:40:36.761524 containerd[1491]: time="2025-05-08T00:40:36.761470629Z" level=info msg="StopPodSandbox for \"56c46a472ab597cec1516d84242cba4726423739869d0cb3f172bd4df3f1cd42\" returns successfully"
May 8 00:40:36.762095 containerd[1491]: time="2025-05-08T00:40:36.761955860Z" level=info msg="StopPodSandbox for \"d9981dcfd683d5350f687c37eaac4eefbad0bb6391cb85325f404dbd63fd4079\""
May 8 00:40:36.763995 containerd[1491]: time="2025-05-08T00:40:36.763977851Z" level=info msg="TearDown network for sandbox \"d9981dcfd683d5350f687c37eaac4eefbad0bb6391cb85325f404dbd63fd4079\" successfully"
May 8 00:40:36.764129 containerd[1491]: time="2025-05-08T00:40:36.764036958Z" level=info msg="StopPodSandbox for \"d9981dcfd683d5350f687c37eaac4eefbad0bb6391cb85325f404dbd63fd4079\" returns successfully"
May 8 00:40:36.764605 containerd[1491]: time="2025-05-08T00:40:36.764588069Z" level=info msg="StopPodSandbox for \"f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9\""
May 8 00:40:36.766779 containerd[1491]: time="2025-05-08T00:40:36.766695195Z" level=info msg="TearDown network for sandbox \"f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9\" successfully"
May 8 00:40:36.766779 containerd[1491]: time="2025-05-08T00:40:36.766712130Z" level=info msg="StopPodSandbox for \"f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9\" returns successfully"
May 8 00:40:36.767434 kubelet[2621]: E0508 00:40:36.767034 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:40:36.767587 containerd[1491]: time="2025-05-08T00:40:36.767556307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-brpjg,Uid:2f1333b6-854b-41fd-b0fb-3eb10c2461e2,Namespace:kube-system,Attempt:5,}"
May 8 00:40:37.119665 systemd-networkd[1407]: calib20e18c12d8: Link UP
May 8 00:40:37.120041 systemd-networkd[1407]: calib20e18c12d8: Gained carrier
May 8 00:40:37.135168 containerd[1491]: 2025-05-08 00:40:36.872 [INFO][4325] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
May 8 00:40:37.135168 containerd[1491]: 2025-05-08 00:40:36.895 [INFO][4325] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--145--87-k8s-coredns--668d6bf9bc--k4bg9-eth0 coredns-668d6bf9bc- kube-system 100ec8f8-03d9-440e-8703-cce80bc589fd 718 0 2025-05-08 00:40:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-237-145-87 coredns-668d6bf9bc-k4bg9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib20e18c12d8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c7afe4c4b6099da9d67c89961162215d6d08bca77141c0de1038b95b146a10f5" Namespace="kube-system" Pod="coredns-668d6bf9bc-k4bg9" WorkloadEndpoint="172--237--145--87-k8s-coredns--668d6bf9bc--k4bg9-"
May 8 00:40:37.135168 containerd[1491]: 2025-05-08 00:40:36.895 [INFO][4325] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c7afe4c4b6099da9d67c89961162215d6d08bca77141c0de1038b95b146a10f5" Namespace="kube-system" Pod="coredns-668d6bf9bc-k4bg9" WorkloadEndpoint="172--237--145--87-k8s-coredns--668d6bf9bc--k4bg9-eth0"
May 8 00:40:37.135168 containerd[1491]: 2025-05-08 00:40:36.968 [INFO][4410] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c7afe4c4b6099da9d67c89961162215d6d08bca77141c0de1038b95b146a10f5" HandleID="k8s-pod-network.c7afe4c4b6099da9d67c89961162215d6d08bca77141c0de1038b95b146a10f5" Workload="172--237--145--87-k8s-coredns--668d6bf9bc--k4bg9-eth0"
May 8 00:40:37.135168 containerd[1491]: 2025-05-08 00:40:36.983 [INFO][4410] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c7afe4c4b6099da9d67c89961162215d6d08bca77141c0de1038b95b146a10f5" HandleID="k8s-pod-network.c7afe4c4b6099da9d67c89961162215d6d08bca77141c0de1038b95b146a10f5" Workload="172--237--145--87-k8s-coredns--668d6bf9bc--k4bg9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051ad0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-237-145-87", "pod":"coredns-668d6bf9bc-k4bg9", "timestamp":"2025-05-08 00:40:36.968801998 +0000 UTC"}, Hostname:"172-237-145-87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 8 00:40:37.135168 containerd[1491]: 2025-05-08 00:40:36.983 [INFO][4410] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 8 00:40:37.135168 containerd[1491]: 2025-05-08 00:40:36.984 [INFO][4410] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:40:37.135168 containerd[1491]: 2025-05-08 00:40:36.984 [INFO][4410] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-145-87'
May 8 00:40:37.135168 containerd[1491]: 2025-05-08 00:40:36.995 [INFO][4410] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c7afe4c4b6099da9d67c89961162215d6d08bca77141c0de1038b95b146a10f5" host="172-237-145-87"
May 8 00:40:37.135168 containerd[1491]: 2025-05-08 00:40:37.081 [INFO][4410] ipam/ipam.go 372: Looking up existing affinities for host host="172-237-145-87"
May 8 00:40:37.135168 containerd[1491]: 2025-05-08 00:40:37.090 [INFO][4410] ipam/ipam.go 489: Trying affinity for 192.168.105.128/26 host="172-237-145-87"
May 8 00:40:37.135168 containerd[1491]: 2025-05-08 00:40:37.093 [INFO][4410] ipam/ipam.go 155: Attempting to load block cidr=192.168.105.128/26 host="172-237-145-87"
May 8 00:40:37.135168 containerd[1491]: 2025-05-08 00:40:37.095 [INFO][4410] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="172-237-145-87"
May 8 00:40:37.135168 containerd[1491]: 2025-05-08 00:40:37.095 [INFO][4410] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.c7afe4c4b6099da9d67c89961162215d6d08bca77141c0de1038b95b146a10f5" host="172-237-145-87"
May 8 00:40:37.135168 containerd[1491]: 2025-05-08 00:40:37.097 [INFO][4410] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c7afe4c4b6099da9d67c89961162215d6d08bca77141c0de1038b95b146a10f5
May 8 00:40:37.135168 containerd[1491]: 2025-05-08 00:40:37.100 [INFO][4410] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.c7afe4c4b6099da9d67c89961162215d6d08bca77141c0de1038b95b146a10f5" host="172-237-145-87"
May 8 00:40:37.135168 containerd[1491]: 2025-05-08 00:40:37.104 [INFO][4410] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.105.129/26] block=192.168.105.128/26 handle="k8s-pod-network.c7afe4c4b6099da9d67c89961162215d6d08bca77141c0de1038b95b146a10f5" host="172-237-145-87"
May 8 00:40:37.135168 containerd[1491]: 2025-05-08 00:40:37.104 [INFO][4410] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.105.129/26] handle="k8s-pod-network.c7afe4c4b6099da9d67c89961162215d6d08bca77141c0de1038b95b146a10f5" host="172-237-145-87"
May 8 00:40:37.135168 containerd[1491]: 2025-05-08 00:40:37.104 [INFO][4410] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:40:37.135168 containerd[1491]: 2025-05-08 00:40:37.104 [INFO][4410] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.129/26] IPv6=[] ContainerID="c7afe4c4b6099da9d67c89961162215d6d08bca77141c0de1038b95b146a10f5" HandleID="k8s-pod-network.c7afe4c4b6099da9d67c89961162215d6d08bca77141c0de1038b95b146a10f5" Workload="172--237--145--87-k8s-coredns--668d6bf9bc--k4bg9-eth0"
May 8 00:40:37.135942 containerd[1491]: 2025-05-08 00:40:37.110 [INFO][4325] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c7afe4c4b6099da9d67c89961162215d6d08bca77141c0de1038b95b146a10f5" Namespace="kube-system" Pod="coredns-668d6bf9bc-k4bg9" WorkloadEndpoint="172--237--145--87-k8s-coredns--668d6bf9bc--k4bg9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--87-k8s-coredns--668d6bf9bc--k4bg9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"100ec8f8-03d9-440e-8703-cce80bc589fd", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-87", ContainerID:"", Pod:"coredns-668d6bf9bc-k4bg9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib20e18c12d8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:40:37.135942 containerd[1491]: 2025-05-08 00:40:37.110 [INFO][4325] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.105.129/32] ContainerID="c7afe4c4b6099da9d67c89961162215d6d08bca77141c0de1038b95b146a10f5" Namespace="kube-system" Pod="coredns-668d6bf9bc-k4bg9" WorkloadEndpoint="172--237--145--87-k8s-coredns--668d6bf9bc--k4bg9-eth0"
May 8 00:40:37.135942 containerd[1491]: 2025-05-08 00:40:37.110 [INFO][4325] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib20e18c12d8 ContainerID="c7afe4c4b6099da9d67c89961162215d6d08bca77141c0de1038b95b146a10f5" Namespace="kube-system" Pod="coredns-668d6bf9bc-k4bg9" WorkloadEndpoint="172--237--145--87-k8s-coredns--668d6bf9bc--k4bg9-eth0"
May 8 00:40:37.135942 containerd[1491]: 2025-05-08 00:40:37.119 [INFO][4325] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c7afe4c4b6099da9d67c89961162215d6d08bca77141c0de1038b95b146a10f5" Namespace="kube-system" Pod="coredns-668d6bf9bc-k4bg9" WorkloadEndpoint="172--237--145--87-k8s-coredns--668d6bf9bc--k4bg9-eth0"
May 8 00:40:37.135942 containerd[1491]: 2025-05-08 00:40:37.120 [INFO][4325] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c7afe4c4b6099da9d67c89961162215d6d08bca77141c0de1038b95b146a10f5" Namespace="kube-system" Pod="coredns-668d6bf9bc-k4bg9" WorkloadEndpoint="172--237--145--87-k8s-coredns--668d6bf9bc--k4bg9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--87-k8s-coredns--668d6bf9bc--k4bg9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"100ec8f8-03d9-440e-8703-cce80bc589fd", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-87", ContainerID:"c7afe4c4b6099da9d67c89961162215d6d08bca77141c0de1038b95b146a10f5", Pod:"coredns-668d6bf9bc-k4bg9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib20e18c12d8", MAC:"86:d7:f4:e4:c9:16", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:40:37.135942 containerd[1491]: 2025-05-08 00:40:37.128 [INFO][4325] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c7afe4c4b6099da9d67c89961162215d6d08bca77141c0de1038b95b146a10f5" Namespace="kube-system" Pod="coredns-668d6bf9bc-k4bg9" WorkloadEndpoint="172--237--145--87-k8s-coredns--668d6bf9bc--k4bg9-eth0"
May 8 00:40:37.158481 containerd[1491]: time="2025-05-08T00:40:37.158413875Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:40:37.158712 containerd[1491]: time="2025-05-08T00:40:37.158678418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:40:37.158814 containerd[1491]: time="2025-05-08T00:40:37.158786418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:40:37.159016 containerd[1491]: time="2025-05-08T00:40:37.158986974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:40:37.181279 systemd[1]: Started cri-containerd-c7afe4c4b6099da9d67c89961162215d6d08bca77141c0de1038b95b146a10f5.scope - libcontainer container c7afe4c4b6099da9d67c89961162215d6d08bca77141c0de1038b95b146a10f5.
May 8 00:40:37.215820 systemd-networkd[1407]: cali348a279ac8a: Link UP
May 8 00:40:37.216529 systemd-networkd[1407]: cali348a279ac8a: Gained carrier
May 8 00:40:37.234719 containerd[1491]: 2025-05-08 00:40:36.831 [INFO][4368] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
May 8 00:40:37.234719 containerd[1491]: 2025-05-08 00:40:36.853 [INFO][4368] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--145--87-k8s-coredns--668d6bf9bc--brpjg-eth0 coredns-668d6bf9bc- kube-system 2f1333b6-854b-41fd-b0fb-3eb10c2461e2 712 0 2025-05-08 00:40:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-237-145-87 coredns-668d6bf9bc-brpjg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali348a279ac8a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2fbb69db0c605515d07cba2264ae2f11aebd9bac7de5e173e838bae57879b3bf" Namespace="kube-system" Pod="coredns-668d6bf9bc-brpjg" WorkloadEndpoint="172--237--145--87-k8s-coredns--668d6bf9bc--brpjg-"
May 8 00:40:37.234719 containerd[1491]: 2025-05-08 00:40:36.853 [INFO][4368] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2fbb69db0c605515d07cba2264ae2f11aebd9bac7de5e173e838bae57879b3bf" Namespace="kube-system" Pod="coredns-668d6bf9bc-brpjg" WorkloadEndpoint="172--237--145--87-k8s-coredns--668d6bf9bc--brpjg-eth0"
May 8 00:40:37.234719 containerd[1491]: 2025-05-08 00:40:36.991 [INFO][4399] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2fbb69db0c605515d07cba2264ae2f11aebd9bac7de5e173e838bae57879b3bf" HandleID="k8s-pod-network.2fbb69db0c605515d07cba2264ae2f11aebd9bac7de5e173e838bae57879b3bf" Workload="172--237--145--87-k8s-coredns--668d6bf9bc--brpjg-eth0"
May 8 00:40:37.234719 containerd[1491]: 2025-05-08 00:40:37.083 [INFO][4399] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2fbb69db0c605515d07cba2264ae2f11aebd9bac7de5e173e838bae57879b3bf" HandleID="k8s-pod-network.2fbb69db0c605515d07cba2264ae2f11aebd9bac7de5e173e838bae57879b3bf" Workload="172--237--145--87-k8s-coredns--668d6bf9bc--brpjg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030e340), Attrs:map[string]string{"namespace":"kube-system", "node":"172-237-145-87", "pod":"coredns-668d6bf9bc-brpjg", "timestamp":"2025-05-08 00:40:36.991345296 +0000 UTC"}, Hostname:"172-237-145-87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 8 00:40:37.234719 containerd[1491]: 2025-05-08 00:40:37.084 [INFO][4399] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 8 00:40:37.234719 containerd[1491]: 2025-05-08 00:40:37.105 [INFO][4399] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:40:37.234719 containerd[1491]: 2025-05-08 00:40:37.105 [INFO][4399] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-145-87'
May 8 00:40:37.234719 containerd[1491]: 2025-05-08 00:40:37.108 [INFO][4399] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2fbb69db0c605515d07cba2264ae2f11aebd9bac7de5e173e838bae57879b3bf" host="172-237-145-87"
May 8 00:40:37.234719 containerd[1491]: 2025-05-08 00:40:37.180 [INFO][4399] ipam/ipam.go 372: Looking up existing affinities for host host="172-237-145-87"
May 8 00:40:37.234719 containerd[1491]: 2025-05-08 00:40:37.187 [INFO][4399] ipam/ipam.go 489: Trying affinity for 192.168.105.128/26 host="172-237-145-87"
May 8 00:40:37.234719 containerd[1491]: 2025-05-08 00:40:37.189 [INFO][4399] ipam/ipam.go 155: Attempting to load block cidr=192.168.105.128/26 host="172-237-145-87"
May 8 00:40:37.234719 containerd[1491]: 2025-05-08 00:40:37.191 [INFO][4399] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="172-237-145-87"
May 8 00:40:37.234719 containerd[1491]: 2025-05-08 00:40:37.191 [INFO][4399] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.2fbb69db0c605515d07cba2264ae2f11aebd9bac7de5e173e838bae57879b3bf" host="172-237-145-87"
May 8 00:40:37.234719 containerd[1491]: 2025-05-08 00:40:37.192 [INFO][4399] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2fbb69db0c605515d07cba2264ae2f11aebd9bac7de5e173e838bae57879b3bf
May 8 00:40:37.234719 containerd[1491]: 2025-05-08 00:40:37.197 [INFO][4399] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.2fbb69db0c605515d07cba2264ae2f11aebd9bac7de5e173e838bae57879b3bf" host="172-237-145-87"
May 8 00:40:37.234719 containerd[1491]: 2025-05-08 00:40:37.203 [INFO][4399] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.105.130/26] block=192.168.105.128/26 handle="k8s-pod-network.2fbb69db0c605515d07cba2264ae2f11aebd9bac7de5e173e838bae57879b3bf" host="172-237-145-87"
May 8 00:40:37.234719 containerd[1491]: 2025-05-08 00:40:37.203 [INFO][4399] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.105.130/26] handle="k8s-pod-network.2fbb69db0c605515d07cba2264ae2f11aebd9bac7de5e173e838bae57879b3bf" host="172-237-145-87"
May 8 00:40:37.234719 containerd[1491]: 2025-05-08 00:40:37.203 [INFO][4399] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:40:37.234719 containerd[1491]: 2025-05-08 00:40:37.203 [INFO][4399] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.130/26] IPv6=[] ContainerID="2fbb69db0c605515d07cba2264ae2f11aebd9bac7de5e173e838bae57879b3bf" HandleID="k8s-pod-network.2fbb69db0c605515d07cba2264ae2f11aebd9bac7de5e173e838bae57879b3bf" Workload="172--237--145--87-k8s-coredns--668d6bf9bc--brpjg-eth0"
May 8 00:40:37.235349 containerd[1491]: 2025-05-08 00:40:37.208 [INFO][4368] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2fbb69db0c605515d07cba2264ae2f11aebd9bac7de5e173e838bae57879b3bf" Namespace="kube-system" Pod="coredns-668d6bf9bc-brpjg" WorkloadEndpoint="172--237--145--87-k8s-coredns--668d6bf9bc--brpjg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--87-k8s-coredns--668d6bf9bc--brpjg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2f1333b6-854b-41fd-b0fb-3eb10c2461e2", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-87", ContainerID:"", Pod:"coredns-668d6bf9bc-brpjg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali348a279ac8a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:40:37.235349 containerd[1491]: 2025-05-08 00:40:37.208 [INFO][4368] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.105.130/32] ContainerID="2fbb69db0c605515d07cba2264ae2f11aebd9bac7de5e173e838bae57879b3bf" Namespace="kube-system" Pod="coredns-668d6bf9bc-brpjg" WorkloadEndpoint="172--237--145--87-k8s-coredns--668d6bf9bc--brpjg-eth0"
May 8 00:40:37.235349 containerd[1491]: 2025-05-08 00:40:37.208 [INFO][4368] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali348a279ac8a ContainerID="2fbb69db0c605515d07cba2264ae2f11aebd9bac7de5e173e838bae57879b3bf" Namespace="kube-system" Pod="coredns-668d6bf9bc-brpjg" WorkloadEndpoint="172--237--145--87-k8s-coredns--668d6bf9bc--brpjg-eth0"
May 8 00:40:37.235349 containerd[1491]: 2025-05-08 00:40:37.216 [INFO][4368] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2fbb69db0c605515d07cba2264ae2f11aebd9bac7de5e173e838bae57879b3bf" Namespace="kube-system" Pod="coredns-668d6bf9bc-brpjg" WorkloadEndpoint="172--237--145--87-k8s-coredns--668d6bf9bc--brpjg-eth0"
May 8 00:40:37.235349 containerd[1491]: 2025-05-08 00:40:37.218 [INFO][4368] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2fbb69db0c605515d07cba2264ae2f11aebd9bac7de5e173e838bae57879b3bf" Namespace="kube-system" Pod="coredns-668d6bf9bc-brpjg" WorkloadEndpoint="172--237--145--87-k8s-coredns--668d6bf9bc--brpjg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--87-k8s-coredns--668d6bf9bc--brpjg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2f1333b6-854b-41fd-b0fb-3eb10c2461e2", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-87", ContainerID:"2fbb69db0c605515d07cba2264ae2f11aebd9bac7de5e173e838bae57879b3bf", Pod:"coredns-668d6bf9bc-brpjg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali348a279ac8a", MAC:"b6:cc:0b:f4:8f:df", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:40:37.235349 containerd[1491]: 2025-05-08 00:40:37.233 [INFO][4368] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2fbb69db0c605515d07cba2264ae2f11aebd9bac7de5e173e838bae57879b3bf" Namespace="kube-system" Pod="coredns-668d6bf9bc-brpjg" WorkloadEndpoint="172--237--145--87-k8s-coredns--668d6bf9bc--brpjg-eth0"
May 8 00:40:37.243863 containerd[1491]: time="2025-05-08T00:40:37.243625207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k4bg9,Uid:100ec8f8-03d9-440e-8703-cce80bc589fd,Namespace:kube-system,Attempt:5,} returns sandbox id \"c7afe4c4b6099da9d67c89961162215d6d08bca77141c0de1038b95b146a10f5\""
May 8 00:40:37.244828 kubelet[2621]: E0508 00:40:37.244692 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:40:37.250124 containerd[1491]: time="2025-05-08T00:40:37.250069138Z" level=info msg="CreateContainer within sandbox \"c7afe4c4b6099da9d67c89961162215d6d08bca77141c0de1038b95b146a10f5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 8 00:40:37.265979 containerd[1491]: time="2025-05-08T00:40:37.265928300Z" level=info msg="CreateContainer within sandbox \"c7afe4c4b6099da9d67c89961162215d6d08bca77141c0de1038b95b146a10f5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0664e9228c960530e06911eb74ca4203f939936c11c27cd7cd906a99eb32057c\""
May 8 00:40:37.267285 containerd[1491]: time="2025-05-08T00:40:37.265954718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:40:37.267285 containerd[1491]: time="2025-05-08T00:40:37.266005592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:40:37.267285 containerd[1491]: time="2025-05-08T00:40:37.266019446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:40:37.267285 containerd[1491]: time="2025-05-08T00:40:37.266098018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:40:37.268695 containerd[1491]: time="2025-05-08T00:40:37.268675908Z" level=info msg="StartContainer for \"0664e9228c960530e06911eb74ca4203f939936c11c27cd7cd906a99eb32057c\""
May 8 00:40:37.295985 systemd[1]: Started cri-containerd-2fbb69db0c605515d07cba2264ae2f11aebd9bac7de5e173e838bae57879b3bf.scope - libcontainer container 2fbb69db0c605515d07cba2264ae2f11aebd9bac7de5e173e838bae57879b3bf.
May 8 00:40:37.320499 systemd[1]: Started cri-containerd-0664e9228c960530e06911eb74ca4203f939936c11c27cd7cd906a99eb32057c.scope - libcontainer container 0664e9228c960530e06911eb74ca4203f939936c11c27cd7cd906a99eb32057c.
May 8 00:40:37.339191 systemd-networkd[1407]: cali1a6e4935d2e: Link UP May 8 00:40:37.342032 systemd-networkd[1407]: cali1a6e4935d2e: Gained carrier May 8 00:40:37.375503 containerd[1491]: 2025-05-08 00:40:36.883 [INFO][4340] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 8 00:40:37.375503 containerd[1491]: 2025-05-08 00:40:36.922 [INFO][4340] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--145--87-k8s-csi--node--driver--zddgj-eth0 csi-node-driver- calico-system 4a17ca9d-8804-4e7c-a9df-ca043ad979cf 638 0 2025-05-08 00:40:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-237-145-87 csi-node-driver-zddgj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1a6e4935d2e [] []}} ContainerID="24997d3e96ed863a84399ae33eb0fb9a9033dc1a37e0789593ad3e834698357e" Namespace="calico-system" Pod="csi-node-driver-zddgj" WorkloadEndpoint="172--237--145--87-k8s-csi--node--driver--zddgj-" May 8 00:40:37.375503 containerd[1491]: 2025-05-08 00:40:36.922 [INFO][4340] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="24997d3e96ed863a84399ae33eb0fb9a9033dc1a37e0789593ad3e834698357e" Namespace="calico-system" Pod="csi-node-driver-zddgj" WorkloadEndpoint="172--237--145--87-k8s-csi--node--driver--zddgj-eth0" May 8 00:40:37.375503 containerd[1491]: 2025-05-08 00:40:37.031 [INFO][4420] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24997d3e96ed863a84399ae33eb0fb9a9033dc1a37e0789593ad3e834698357e" HandleID="k8s-pod-network.24997d3e96ed863a84399ae33eb0fb9a9033dc1a37e0789593ad3e834698357e" Workload="172--237--145--87-k8s-csi--node--driver--zddgj-eth0" May 8 
00:40:37.375503 containerd[1491]: 2025-05-08 00:40:37.087 [INFO][4420] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="24997d3e96ed863a84399ae33eb0fb9a9033dc1a37e0789593ad3e834698357e" HandleID="k8s-pod-network.24997d3e96ed863a84399ae33eb0fb9a9033dc1a37e0789593ad3e834698357e" Workload="172--237--145--87-k8s-csi--node--driver--zddgj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000432290), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-145-87", "pod":"csi-node-driver-zddgj", "timestamp":"2025-05-08 00:40:37.031086242 +0000 UTC"}, Hostname:"172-237-145-87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:40:37.375503 containerd[1491]: 2025-05-08 00:40:37.088 [INFO][4420] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:37.375503 containerd[1491]: 2025-05-08 00:40:37.203 [INFO][4420] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:40:37.375503 containerd[1491]: 2025-05-08 00:40:37.203 [INFO][4420] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-145-87' May 8 00:40:37.375503 containerd[1491]: 2025-05-08 00:40:37.209 [INFO][4420] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.24997d3e96ed863a84399ae33eb0fb9a9033dc1a37e0789593ad3e834698357e" host="172-237-145-87" May 8 00:40:37.375503 containerd[1491]: 2025-05-08 00:40:37.284 [INFO][4420] ipam/ipam.go 372: Looking up existing affinities for host host="172-237-145-87" May 8 00:40:37.375503 containerd[1491]: 2025-05-08 00:40:37.294 [INFO][4420] ipam/ipam.go 489: Trying affinity for 192.168.105.128/26 host="172-237-145-87" May 8 00:40:37.375503 containerd[1491]: 2025-05-08 00:40:37.298 [INFO][4420] ipam/ipam.go 155: Attempting to load block cidr=192.168.105.128/26 host="172-237-145-87" May 8 00:40:37.375503 containerd[1491]: 2025-05-08 00:40:37.302 [INFO][4420] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="172-237-145-87" May 8 00:40:37.375503 containerd[1491]: 2025-05-08 00:40:37.302 [INFO][4420] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.24997d3e96ed863a84399ae33eb0fb9a9033dc1a37e0789593ad3e834698357e" host="172-237-145-87" May 8 00:40:37.375503 containerd[1491]: 2025-05-08 00:40:37.306 [INFO][4420] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.24997d3e96ed863a84399ae33eb0fb9a9033dc1a37e0789593ad3e834698357e May 8 00:40:37.375503 containerd[1491]: 2025-05-08 00:40:37.312 [INFO][4420] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.24997d3e96ed863a84399ae33eb0fb9a9033dc1a37e0789593ad3e834698357e" host="172-237-145-87" May 8 00:40:37.375503 containerd[1491]: 2025-05-08 00:40:37.322 [INFO][4420] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.105.131/26] block=192.168.105.128/26 
handle="k8s-pod-network.24997d3e96ed863a84399ae33eb0fb9a9033dc1a37e0789593ad3e834698357e" host="172-237-145-87" May 8 00:40:37.375503 containerd[1491]: 2025-05-08 00:40:37.322 [INFO][4420] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.105.131/26] handle="k8s-pod-network.24997d3e96ed863a84399ae33eb0fb9a9033dc1a37e0789593ad3e834698357e" host="172-237-145-87" May 8 00:40:37.375503 containerd[1491]: 2025-05-08 00:40:37.323 [INFO][4420] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:37.375503 containerd[1491]: 2025-05-08 00:40:37.323 [INFO][4420] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.131/26] IPv6=[] ContainerID="24997d3e96ed863a84399ae33eb0fb9a9033dc1a37e0789593ad3e834698357e" HandleID="k8s-pod-network.24997d3e96ed863a84399ae33eb0fb9a9033dc1a37e0789593ad3e834698357e" Workload="172--237--145--87-k8s-csi--node--driver--zddgj-eth0" May 8 00:40:37.376035 containerd[1491]: 2025-05-08 00:40:37.330 [INFO][4340] cni-plugin/k8s.go 386: Populated endpoint ContainerID="24997d3e96ed863a84399ae33eb0fb9a9033dc1a37e0789593ad3e834698357e" Namespace="calico-system" Pod="csi-node-driver-zddgj" WorkloadEndpoint="172--237--145--87-k8s-csi--node--driver--zddgj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--87-k8s-csi--node--driver--zddgj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4a17ca9d-8804-4e7c-a9df-ca043ad979cf", ResourceVersion:"638", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-87", ContainerID:"", Pod:"csi-node-driver-zddgj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1a6e4935d2e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:37.376035 containerd[1491]: 2025-05-08 00:40:37.330 [INFO][4340] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.105.131/32] ContainerID="24997d3e96ed863a84399ae33eb0fb9a9033dc1a37e0789593ad3e834698357e" Namespace="calico-system" Pod="csi-node-driver-zddgj" WorkloadEndpoint="172--237--145--87-k8s-csi--node--driver--zddgj-eth0" May 8 00:40:37.376035 containerd[1491]: 2025-05-08 00:40:37.330 [INFO][4340] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a6e4935d2e ContainerID="24997d3e96ed863a84399ae33eb0fb9a9033dc1a37e0789593ad3e834698357e" Namespace="calico-system" Pod="csi-node-driver-zddgj" WorkloadEndpoint="172--237--145--87-k8s-csi--node--driver--zddgj-eth0" May 8 00:40:37.376035 containerd[1491]: 2025-05-08 00:40:37.343 [INFO][4340] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="24997d3e96ed863a84399ae33eb0fb9a9033dc1a37e0789593ad3e834698357e" Namespace="calico-system" Pod="csi-node-driver-zddgj" WorkloadEndpoint="172--237--145--87-k8s-csi--node--driver--zddgj-eth0" May 8 00:40:37.376035 containerd[1491]: 2025-05-08 00:40:37.346 [INFO][4340] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="24997d3e96ed863a84399ae33eb0fb9a9033dc1a37e0789593ad3e834698357e" Namespace="calico-system" Pod="csi-node-driver-zddgj" WorkloadEndpoint="172--237--145--87-k8s-csi--node--driver--zddgj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--87-k8s-csi--node--driver--zddgj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4a17ca9d-8804-4e7c-a9df-ca043ad979cf", ResourceVersion:"638", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-87", ContainerID:"24997d3e96ed863a84399ae33eb0fb9a9033dc1a37e0789593ad3e834698357e", Pod:"csi-node-driver-zddgj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1a6e4935d2e", MAC:"d6:3d:f6:eb:26:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:37.376035 containerd[1491]: 2025-05-08 00:40:37.368 [INFO][4340] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="24997d3e96ed863a84399ae33eb0fb9a9033dc1a37e0789593ad3e834698357e" Namespace="calico-system" 
Pod="csi-node-driver-zddgj" WorkloadEndpoint="172--237--145--87-k8s-csi--node--driver--zddgj-eth0" May 8 00:40:37.393889 containerd[1491]: time="2025-05-08T00:40:37.392272717Z" level=info msg="StartContainer for \"0664e9228c960530e06911eb74ca4203f939936c11c27cd7cd906a99eb32057c\" returns successfully" May 8 00:40:37.414498 containerd[1491]: time="2025-05-08T00:40:37.414449685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-brpjg,Uid:2f1333b6-854b-41fd-b0fb-3eb10c2461e2,Namespace:kube-system,Attempt:5,} returns sandbox id \"2fbb69db0c605515d07cba2264ae2f11aebd9bac7de5e173e838bae57879b3bf\"" May 8 00:40:37.415403 kubelet[2621]: E0508 00:40:37.415366 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:37.421677 containerd[1491]: time="2025-05-08T00:40:37.421639713Z" level=info msg="CreateContainer within sandbox \"2fbb69db0c605515d07cba2264ae2f11aebd9bac7de5e173e838bae57879b3bf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:40:37.451872 containerd[1491]: time="2025-05-08T00:40:37.450422848Z" level=info msg="CreateContainer within sandbox \"2fbb69db0c605515d07cba2264ae2f11aebd9bac7de5e173e838bae57879b3bf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fc80730069939e70089b2436452224ce342e75d8df1782490902db5e7c57abc0\"" May 8 00:40:37.451961 containerd[1491]: time="2025-05-08T00:40:37.451926467Z" level=info msg="StartContainer for \"fc80730069939e70089b2436452224ce342e75d8df1782490902db5e7c57abc0\"" May 8 00:40:37.472844 systemd-networkd[1407]: cali9dd4991372d: Link UP May 8 00:40:37.475542 systemd-networkd[1407]: cali9dd4991372d: Gained carrier May 8 00:40:37.482234 containerd[1491]: time="2025-05-08T00:40:37.482121316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:37.487378 containerd[1491]: 2025-05-08 00:40:36.914 [INFO][4369] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 8 00:40:37.487378 containerd[1491]: 2025-05-08 00:40:36.959 [INFO][4369] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--145--87-k8s-calico--apiserver--8467bd5dfd--5dthz-eth0 calico-apiserver-8467bd5dfd- calico-apiserver f9d2fedb-163e-44bf-9fb8-248856ed9ef7 717 0 2025-05-08 00:40:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8467bd5dfd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-237-145-87 calico-apiserver-8467bd5dfd-5dthz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9dd4991372d [] []}} ContainerID="494104ff59af5439b884d25c2d0cd8d3d4e61d97f47665c1b791d030820e22db" Namespace="calico-apiserver" Pod="calico-apiserver-8467bd5dfd-5dthz" WorkloadEndpoint="172--237--145--87-k8s-calico--apiserver--8467bd5dfd--5dthz-" May 8 00:40:37.487378 containerd[1491]: 2025-05-08 00:40:36.959 [INFO][4369] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="494104ff59af5439b884d25c2d0cd8d3d4e61d97f47665c1b791d030820e22db" Namespace="calico-apiserver" Pod="calico-apiserver-8467bd5dfd-5dthz" WorkloadEndpoint="172--237--145--87-k8s-calico--apiserver--8467bd5dfd--5dthz-eth0" May 8 00:40:37.487378 containerd[1491]: 2025-05-08 00:40:37.047 [INFO][4429] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="494104ff59af5439b884d25c2d0cd8d3d4e61d97f47665c1b791d030820e22db" HandleID="k8s-pod-network.494104ff59af5439b884d25c2d0cd8d3d4e61d97f47665c1b791d030820e22db" Workload="172--237--145--87-k8s-calico--apiserver--8467bd5dfd--5dthz-eth0" May 8 00:40:37.487378 
containerd[1491]: 2025-05-08 00:40:37.088 [INFO][4429] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="494104ff59af5439b884d25c2d0cd8d3d4e61d97f47665c1b791d030820e22db" HandleID="k8s-pod-network.494104ff59af5439b884d25c2d0cd8d3d4e61d97f47665c1b791d030820e22db" Workload="172--237--145--87-k8s-calico--apiserver--8467bd5dfd--5dthz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002603c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-237-145-87", "pod":"calico-apiserver-8467bd5dfd-5dthz", "timestamp":"2025-05-08 00:40:37.047964829 +0000 UTC"}, Hostname:"172-237-145-87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:40:37.487378 containerd[1491]: 2025-05-08 00:40:37.088 [INFO][4429] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:37.487378 containerd[1491]: 2025-05-08 00:40:37.323 [INFO][4429] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:40:37.487378 containerd[1491]: 2025-05-08 00:40:37.323 [INFO][4429] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-145-87' May 8 00:40:37.487378 containerd[1491]: 2025-05-08 00:40:37.326 [INFO][4429] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.494104ff59af5439b884d25c2d0cd8d3d4e61d97f47665c1b791d030820e22db" host="172-237-145-87" May 8 00:40:37.487378 containerd[1491]: 2025-05-08 00:40:37.382 [INFO][4429] ipam/ipam.go 372: Looking up existing affinities for host host="172-237-145-87" May 8 00:40:37.487378 containerd[1491]: 2025-05-08 00:40:37.403 [INFO][4429] ipam/ipam.go 489: Trying affinity for 192.168.105.128/26 host="172-237-145-87" May 8 00:40:37.487378 containerd[1491]: 2025-05-08 00:40:37.407 [INFO][4429] ipam/ipam.go 155: Attempting to load block cidr=192.168.105.128/26 host="172-237-145-87" May 8 00:40:37.487378 containerd[1491]: 2025-05-08 00:40:37.414 [INFO][4429] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="172-237-145-87" May 8 00:40:37.487378 containerd[1491]: 2025-05-08 00:40:37.414 [INFO][4429] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.494104ff59af5439b884d25c2d0cd8d3d4e61d97f47665c1b791d030820e22db" host="172-237-145-87" May 8 00:40:37.487378 containerd[1491]: 2025-05-08 00:40:37.419 [INFO][4429] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.494104ff59af5439b884d25c2d0cd8d3d4e61d97f47665c1b791d030820e22db May 8 00:40:37.487378 containerd[1491]: 2025-05-08 00:40:37.429 [INFO][4429] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.494104ff59af5439b884d25c2d0cd8d3d4e61d97f47665c1b791d030820e22db" host="172-237-145-87" May 8 00:40:37.487378 containerd[1491]: 2025-05-08 00:40:37.442 [INFO][4429] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.105.132/26] block=192.168.105.128/26 
handle="k8s-pod-network.494104ff59af5439b884d25c2d0cd8d3d4e61d97f47665c1b791d030820e22db" host="172-237-145-87" May 8 00:40:37.487378 containerd[1491]: 2025-05-08 00:40:37.442 [INFO][4429] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.105.132/26] handle="k8s-pod-network.494104ff59af5439b884d25c2d0cd8d3d4e61d97f47665c1b791d030820e22db" host="172-237-145-87" May 8 00:40:37.487378 containerd[1491]: 2025-05-08 00:40:37.442 [INFO][4429] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:37.487378 containerd[1491]: 2025-05-08 00:40:37.442 [INFO][4429] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.132/26] IPv6=[] ContainerID="494104ff59af5439b884d25c2d0cd8d3d4e61d97f47665c1b791d030820e22db" HandleID="k8s-pod-network.494104ff59af5439b884d25c2d0cd8d3d4e61d97f47665c1b791d030820e22db" Workload="172--237--145--87-k8s-calico--apiserver--8467bd5dfd--5dthz-eth0" May 8 00:40:37.488173 containerd[1491]: 2025-05-08 00:40:37.448 [INFO][4369] cni-plugin/k8s.go 386: Populated endpoint ContainerID="494104ff59af5439b884d25c2d0cd8d3d4e61d97f47665c1b791d030820e22db" Namespace="calico-apiserver" Pod="calico-apiserver-8467bd5dfd-5dthz" WorkloadEndpoint="172--237--145--87-k8s-calico--apiserver--8467bd5dfd--5dthz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--87-k8s-calico--apiserver--8467bd5dfd--5dthz-eth0", GenerateName:"calico-apiserver-8467bd5dfd-", Namespace:"calico-apiserver", SelfLink:"", UID:"f9d2fedb-163e-44bf-9fb8-248856ed9ef7", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8467bd5dfd", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-87", ContainerID:"", Pod:"calico-apiserver-8467bd5dfd-5dthz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9dd4991372d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:37.488173 containerd[1491]: 2025-05-08 00:40:37.448 [INFO][4369] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.105.132/32] ContainerID="494104ff59af5439b884d25c2d0cd8d3d4e61d97f47665c1b791d030820e22db" Namespace="calico-apiserver" Pod="calico-apiserver-8467bd5dfd-5dthz" WorkloadEndpoint="172--237--145--87-k8s-calico--apiserver--8467bd5dfd--5dthz-eth0" May 8 00:40:37.488173 containerd[1491]: 2025-05-08 00:40:37.448 [INFO][4369] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9dd4991372d ContainerID="494104ff59af5439b884d25c2d0cd8d3d4e61d97f47665c1b791d030820e22db" Namespace="calico-apiserver" Pod="calico-apiserver-8467bd5dfd-5dthz" WorkloadEndpoint="172--237--145--87-k8s-calico--apiserver--8467bd5dfd--5dthz-eth0" May 8 00:40:37.488173 containerd[1491]: 2025-05-08 00:40:37.472 [INFO][4369] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="494104ff59af5439b884d25c2d0cd8d3d4e61d97f47665c1b791d030820e22db" Namespace="calico-apiserver" Pod="calico-apiserver-8467bd5dfd-5dthz" WorkloadEndpoint="172--237--145--87-k8s-calico--apiserver--8467bd5dfd--5dthz-eth0" May 8 00:40:37.488173 containerd[1491]: 2025-05-08 00:40:37.472 [INFO][4369] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="494104ff59af5439b884d25c2d0cd8d3d4e61d97f47665c1b791d030820e22db" Namespace="calico-apiserver" Pod="calico-apiserver-8467bd5dfd-5dthz" WorkloadEndpoint="172--237--145--87-k8s-calico--apiserver--8467bd5dfd--5dthz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--87-k8s-calico--apiserver--8467bd5dfd--5dthz-eth0", GenerateName:"calico-apiserver-8467bd5dfd-", Namespace:"calico-apiserver", SelfLink:"", UID:"f9d2fedb-163e-44bf-9fb8-248856ed9ef7", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8467bd5dfd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-87", ContainerID:"494104ff59af5439b884d25c2d0cd8d3d4e61d97f47665c1b791d030820e22db", Pod:"calico-apiserver-8467bd5dfd-5dthz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9dd4991372d", MAC:"8e:9b:06:77:38:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:37.488173 containerd[1491]: 2025-05-08 00:40:37.482 [INFO][4369] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="494104ff59af5439b884d25c2d0cd8d3d4e61d97f47665c1b791d030820e22db" Namespace="calico-apiserver" Pod="calico-apiserver-8467bd5dfd-5dthz" WorkloadEndpoint="172--237--145--87-k8s-calico--apiserver--8467bd5dfd--5dthz-eth0" May 8 00:40:37.490210 containerd[1491]: time="2025-05-08T00:40:37.489500868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:37.490210 containerd[1491]: time="2025-05-08T00:40:37.489835981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:37.490674 containerd[1491]: time="2025-05-08T00:40:37.490188550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:37.521920 systemd[1]: Started cri-containerd-24997d3e96ed863a84399ae33eb0fb9a9033dc1a37e0789593ad3e834698357e.scope - libcontainer container 24997d3e96ed863a84399ae33eb0fb9a9033dc1a37e0789593ad3e834698357e. May 8 00:40:37.543974 systemd-networkd[1407]: calic4c3a0c0ce9: Link UP May 8 00:40:37.546452 systemd-networkd[1407]: calic4c3a0c0ce9: Gained carrier May 8 00:40:37.562904 systemd[1]: Started cri-containerd-fc80730069939e70089b2436452224ce342e75d8df1782490902db5e7c57abc0.scope - libcontainer container fc80730069939e70089b2436452224ce342e75d8df1782490902db5e7c57abc0. May 8 00:40:37.583087 containerd[1491]: time="2025-05-08T00:40:37.582750157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:37.586364 containerd[1491]: time="2025-05-08T00:40:37.586260288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:37.591997 containerd[1491]: time="2025-05-08T00:40:37.590824843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:37.591997 containerd[1491]: time="2025-05-08T00:40:37.590933914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:37.593828 systemd[1]: run-netns-cni\x2d58ae0f7b\x2d00b0\x2d4d33\x2df735\x2da8c58659ed8d.mount: Deactivated successfully. May 8 00:40:37.593928 systemd[1]: run-netns-cni\x2dc4eadce4\x2d3de8\x2dc2ef\x2da84d\x2dd25bf7ee1f6a.mount: Deactivated successfully. May 8 00:40:37.594003 systemd[1]: run-netns-cni\x2d8ae023cd\x2df9da\x2ddc45\x2d7419\x2d6c27e0a7c0b6.mount: Deactivated successfully. May 8 00:40:37.594721 containerd[1491]: 2025-05-08 00:40:36.940 [INFO][4352] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 8 00:40:37.594721 containerd[1491]: 2025-05-08 00:40:36.987 [INFO][4352] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--145--87-k8s-calico--apiserver--8467bd5dfd--82nqn-eth0 calico-apiserver-8467bd5dfd- calico-apiserver 5b22028e-782d-431d-8b78-71ebd2d0dd5e 716 0 2025-05-08 00:40:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8467bd5dfd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-237-145-87 calico-apiserver-8467bd5dfd-82nqn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic4c3a0c0ce9 [] []}} ContainerID="cf6c3eb8cbaf35698a2d2270fbcc9b7b25cbd6fafb5043588b389d276ecde8ff" Namespace="calico-apiserver" Pod="calico-apiserver-8467bd5dfd-82nqn" 
WorkloadEndpoint="172--237--145--87-k8s-calico--apiserver--8467bd5dfd--82nqn-" May 8 00:40:37.594721 containerd[1491]: 2025-05-08 00:40:36.987 [INFO][4352] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cf6c3eb8cbaf35698a2d2270fbcc9b7b25cbd6fafb5043588b389d276ecde8ff" Namespace="calico-apiserver" Pod="calico-apiserver-8467bd5dfd-82nqn" WorkloadEndpoint="172--237--145--87-k8s-calico--apiserver--8467bd5dfd--82nqn-eth0" May 8 00:40:37.594721 containerd[1491]: 2025-05-08 00:40:37.053 [INFO][4434] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf6c3eb8cbaf35698a2d2270fbcc9b7b25cbd6fafb5043588b389d276ecde8ff" HandleID="k8s-pod-network.cf6c3eb8cbaf35698a2d2270fbcc9b7b25cbd6fafb5043588b389d276ecde8ff" Workload="172--237--145--87-k8s-calico--apiserver--8467bd5dfd--82nqn-eth0" May 8 00:40:37.594721 containerd[1491]: 2025-05-08 00:40:37.089 [INFO][4434] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cf6c3eb8cbaf35698a2d2270fbcc9b7b25cbd6fafb5043588b389d276ecde8ff" HandleID="k8s-pod-network.cf6c3eb8cbaf35698a2d2270fbcc9b7b25cbd6fafb5043588b389d276ecde8ff" Workload="172--237--145--87-k8s-calico--apiserver--8467bd5dfd--82nqn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000305480), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-237-145-87", "pod":"calico-apiserver-8467bd5dfd-82nqn", "timestamp":"2025-05-08 00:40:37.053685218 +0000 UTC"}, Hostname:"172-237-145-87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:40:37.594721 containerd[1491]: 2025-05-08 00:40:37.090 [INFO][4434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:37.594721 containerd[1491]: 2025-05-08 00:40:37.442 [INFO][4434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:40:37.594721 containerd[1491]: 2025-05-08 00:40:37.443 [INFO][4434] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-145-87' May 8 00:40:37.594721 containerd[1491]: 2025-05-08 00:40:37.459 [INFO][4434] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cf6c3eb8cbaf35698a2d2270fbcc9b7b25cbd6fafb5043588b389d276ecde8ff" host="172-237-145-87" May 8 00:40:37.594721 containerd[1491]: 2025-05-08 00:40:37.489 [INFO][4434] ipam/ipam.go 372: Looking up existing affinities for host host="172-237-145-87" May 8 00:40:37.594721 containerd[1491]: 2025-05-08 00:40:37.495 [INFO][4434] ipam/ipam.go 489: Trying affinity for 192.168.105.128/26 host="172-237-145-87" May 8 00:40:37.594721 containerd[1491]: 2025-05-08 00:40:37.497 [INFO][4434] ipam/ipam.go 155: Attempting to load block cidr=192.168.105.128/26 host="172-237-145-87" May 8 00:40:37.594721 containerd[1491]: 2025-05-08 00:40:37.501 [INFO][4434] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="172-237-145-87" May 8 00:40:37.594721 containerd[1491]: 2025-05-08 00:40:37.501 [INFO][4434] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.cf6c3eb8cbaf35698a2d2270fbcc9b7b25cbd6fafb5043588b389d276ecde8ff" host="172-237-145-87" May 8 00:40:37.594721 containerd[1491]: 2025-05-08 00:40:37.503 [INFO][4434] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cf6c3eb8cbaf35698a2d2270fbcc9b7b25cbd6fafb5043588b389d276ecde8ff May 8 00:40:37.594721 containerd[1491]: 2025-05-08 00:40:37.508 [INFO][4434] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.cf6c3eb8cbaf35698a2d2270fbcc9b7b25cbd6fafb5043588b389d276ecde8ff" host="172-237-145-87" May 8 00:40:37.594721 containerd[1491]: 2025-05-08 00:40:37.516 [INFO][4434] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.105.133/26] block=192.168.105.128/26 
handle="k8s-pod-network.cf6c3eb8cbaf35698a2d2270fbcc9b7b25cbd6fafb5043588b389d276ecde8ff" host="172-237-145-87" May 8 00:40:37.594721 containerd[1491]: 2025-05-08 00:40:37.516 [INFO][4434] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.105.133/26] handle="k8s-pod-network.cf6c3eb8cbaf35698a2d2270fbcc9b7b25cbd6fafb5043588b389d276ecde8ff" host="172-237-145-87" May 8 00:40:37.594721 containerd[1491]: 2025-05-08 00:40:37.516 [INFO][4434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:37.594721 containerd[1491]: 2025-05-08 00:40:37.516 [INFO][4434] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.133/26] IPv6=[] ContainerID="cf6c3eb8cbaf35698a2d2270fbcc9b7b25cbd6fafb5043588b389d276ecde8ff" HandleID="k8s-pod-network.cf6c3eb8cbaf35698a2d2270fbcc9b7b25cbd6fafb5043588b389d276ecde8ff" Workload="172--237--145--87-k8s-calico--apiserver--8467bd5dfd--82nqn-eth0" May 8 00:40:37.595737 containerd[1491]: 2025-05-08 00:40:37.526 [INFO][4352] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cf6c3eb8cbaf35698a2d2270fbcc9b7b25cbd6fafb5043588b389d276ecde8ff" Namespace="calico-apiserver" Pod="calico-apiserver-8467bd5dfd-82nqn" WorkloadEndpoint="172--237--145--87-k8s-calico--apiserver--8467bd5dfd--82nqn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--87-k8s-calico--apiserver--8467bd5dfd--82nqn-eth0", GenerateName:"calico-apiserver-8467bd5dfd-", Namespace:"calico-apiserver", SelfLink:"", UID:"5b22028e-782d-431d-8b78-71ebd2d0dd5e", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8467bd5dfd", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-87", ContainerID:"", Pod:"calico-apiserver-8467bd5dfd-82nqn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic4c3a0c0ce9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:37.595737 containerd[1491]: 2025-05-08 00:40:37.528 [INFO][4352] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.105.133/32] ContainerID="cf6c3eb8cbaf35698a2d2270fbcc9b7b25cbd6fafb5043588b389d276ecde8ff" Namespace="calico-apiserver" Pod="calico-apiserver-8467bd5dfd-82nqn" WorkloadEndpoint="172--237--145--87-k8s-calico--apiserver--8467bd5dfd--82nqn-eth0" May 8 00:40:37.595737 containerd[1491]: 2025-05-08 00:40:37.528 [INFO][4352] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4c3a0c0ce9 ContainerID="cf6c3eb8cbaf35698a2d2270fbcc9b7b25cbd6fafb5043588b389d276ecde8ff" Namespace="calico-apiserver" Pod="calico-apiserver-8467bd5dfd-82nqn" WorkloadEndpoint="172--237--145--87-k8s-calico--apiserver--8467bd5dfd--82nqn-eth0" May 8 00:40:37.595737 containerd[1491]: 2025-05-08 00:40:37.547 [INFO][4352] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf6c3eb8cbaf35698a2d2270fbcc9b7b25cbd6fafb5043588b389d276ecde8ff" Namespace="calico-apiserver" Pod="calico-apiserver-8467bd5dfd-82nqn" WorkloadEndpoint="172--237--145--87-k8s-calico--apiserver--8467bd5dfd--82nqn-eth0" May 8 00:40:37.595737 containerd[1491]: 2025-05-08 00:40:37.550 [INFO][4352] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="cf6c3eb8cbaf35698a2d2270fbcc9b7b25cbd6fafb5043588b389d276ecde8ff" Namespace="calico-apiserver" Pod="calico-apiserver-8467bd5dfd-82nqn" WorkloadEndpoint="172--237--145--87-k8s-calico--apiserver--8467bd5dfd--82nqn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--87-k8s-calico--apiserver--8467bd5dfd--82nqn-eth0", GenerateName:"calico-apiserver-8467bd5dfd-", Namespace:"calico-apiserver", SelfLink:"", UID:"5b22028e-782d-431d-8b78-71ebd2d0dd5e", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8467bd5dfd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-87", ContainerID:"cf6c3eb8cbaf35698a2d2270fbcc9b7b25cbd6fafb5043588b389d276ecde8ff", Pod:"calico-apiserver-8467bd5dfd-82nqn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic4c3a0c0ce9", MAC:"ea:ef:21:bf:a3:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:37.595737 containerd[1491]: 2025-05-08 00:40:37.585 [INFO][4352] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="cf6c3eb8cbaf35698a2d2270fbcc9b7b25cbd6fafb5043588b389d276ecde8ff" Namespace="calico-apiserver" Pod="calico-apiserver-8467bd5dfd-82nqn" WorkloadEndpoint="172--237--145--87-k8s-calico--apiserver--8467bd5dfd--82nqn-eth0" May 8 00:40:37.644934 systemd[1]: Started cri-containerd-494104ff59af5439b884d25c2d0cd8d3d4e61d97f47665c1b791d030820e22db.scope - libcontainer container 494104ff59af5439b884d25c2d0cd8d3d4e61d97f47665c1b791d030820e22db. May 8 00:40:37.667106 containerd[1491]: time="2025-05-08T00:40:37.667076162Z" level=info msg="StartContainer for \"fc80730069939e70089b2436452224ce342e75d8df1782490902db5e7c57abc0\" returns successfully" May 8 00:40:37.680917 systemd-networkd[1407]: calic5a41004ce6: Link UP May 8 00:40:37.681179 systemd-networkd[1407]: calic5a41004ce6: Gained carrier May 8 00:40:37.708994 containerd[1491]: 2025-05-08 00:40:36.784 [INFO][4310] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 8 00:40:37.708994 containerd[1491]: 2025-05-08 00:40:36.850 [INFO][4310] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--145--87-k8s-calico--kube--controllers--64dc6f8966--xqfsx-eth0 calico-kube-controllers-64dc6f8966- calico-system e722fa22-1636-4de9-a89f-f2a8e31c4ced 719 0 2025-05-08 00:40:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64dc6f8966 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-237-145-87 calico-kube-controllers-64dc6f8966-xqfsx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic5a41004ce6 [] []}} ContainerID="c62222a4be478afe7e34a61be0332c1667d8e743ab70a5a492afb0cbe0e9fc55" Namespace="calico-system" Pod="calico-kube-controllers-64dc6f8966-xqfsx" 
WorkloadEndpoint="172--237--145--87-k8s-calico--kube--controllers--64dc6f8966--xqfsx-" May 8 00:40:37.708994 containerd[1491]: 2025-05-08 00:40:36.855 [INFO][4310] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c62222a4be478afe7e34a61be0332c1667d8e743ab70a5a492afb0cbe0e9fc55" Namespace="calico-system" Pod="calico-kube-controllers-64dc6f8966-xqfsx" WorkloadEndpoint="172--237--145--87-k8s-calico--kube--controllers--64dc6f8966--xqfsx-eth0" May 8 00:40:37.708994 containerd[1491]: 2025-05-08 00:40:37.006 [INFO][4396] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c62222a4be478afe7e34a61be0332c1667d8e743ab70a5a492afb0cbe0e9fc55" HandleID="k8s-pod-network.c62222a4be478afe7e34a61be0332c1667d8e743ab70a5a492afb0cbe0e9fc55" Workload="172--237--145--87-k8s-calico--kube--controllers--64dc6f8966--xqfsx-eth0" May 8 00:40:37.708994 containerd[1491]: 2025-05-08 00:40:37.089 [INFO][4396] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c62222a4be478afe7e34a61be0332c1667d8e743ab70a5a492afb0cbe0e9fc55" HandleID="k8s-pod-network.c62222a4be478afe7e34a61be0332c1667d8e743ab70a5a492afb0cbe0e9fc55" Workload="172--237--145--87-k8s-calico--kube--controllers--64dc6f8966--xqfsx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c5aa0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-145-87", "pod":"calico-kube-controllers-64dc6f8966-xqfsx", "timestamp":"2025-05-08 00:40:37.006195907 +0000 UTC"}, Hostname:"172-237-145-87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:40:37.708994 containerd[1491]: 2025-05-08 00:40:37.090 [INFO][4396] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 8 00:40:37.708994 containerd[1491]: 2025-05-08 00:40:37.516 [INFO][4396] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:37.708994 containerd[1491]: 2025-05-08 00:40:37.516 [INFO][4396] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-145-87' May 8 00:40:37.708994 containerd[1491]: 2025-05-08 00:40:37.557 [INFO][4396] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c62222a4be478afe7e34a61be0332c1667d8e743ab70a5a492afb0cbe0e9fc55" host="172-237-145-87" May 8 00:40:37.708994 containerd[1491]: 2025-05-08 00:40:37.593 [INFO][4396] ipam/ipam.go 372: Looking up existing affinities for host host="172-237-145-87" May 8 00:40:37.708994 containerd[1491]: 2025-05-08 00:40:37.614 [INFO][4396] ipam/ipam.go 489: Trying affinity for 192.168.105.128/26 host="172-237-145-87" May 8 00:40:37.708994 containerd[1491]: 2025-05-08 00:40:37.620 [INFO][4396] ipam/ipam.go 155: Attempting to load block cidr=192.168.105.128/26 host="172-237-145-87" May 8 00:40:37.708994 containerd[1491]: 2025-05-08 00:40:37.623 [INFO][4396] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="172-237-145-87" May 8 00:40:37.708994 containerd[1491]: 2025-05-08 00:40:37.623 [INFO][4396] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.c62222a4be478afe7e34a61be0332c1667d8e743ab70a5a492afb0cbe0e9fc55" host="172-237-145-87" May 8 00:40:37.708994 containerd[1491]: 2025-05-08 00:40:37.632 [INFO][4396] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c62222a4be478afe7e34a61be0332c1667d8e743ab70a5a492afb0cbe0e9fc55 May 8 00:40:37.708994 containerd[1491]: 2025-05-08 00:40:37.645 [INFO][4396] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.c62222a4be478afe7e34a61be0332c1667d8e743ab70a5a492afb0cbe0e9fc55" host="172-237-145-87" May 8 00:40:37.708994 containerd[1491]: 2025-05-08 
00:40:37.656 [INFO][4396] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.105.134/26] block=192.168.105.128/26 handle="k8s-pod-network.c62222a4be478afe7e34a61be0332c1667d8e743ab70a5a492afb0cbe0e9fc55" host="172-237-145-87" May 8 00:40:37.708994 containerd[1491]: 2025-05-08 00:40:37.656 [INFO][4396] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.105.134/26] handle="k8s-pod-network.c62222a4be478afe7e34a61be0332c1667d8e743ab70a5a492afb0cbe0e9fc55" host="172-237-145-87" May 8 00:40:37.708994 containerd[1491]: 2025-05-08 00:40:37.656 [INFO][4396] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:37.708994 containerd[1491]: 2025-05-08 00:40:37.656 [INFO][4396] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.134/26] IPv6=[] ContainerID="c62222a4be478afe7e34a61be0332c1667d8e743ab70a5a492afb0cbe0e9fc55" HandleID="k8s-pod-network.c62222a4be478afe7e34a61be0332c1667d8e743ab70a5a492afb0cbe0e9fc55" Workload="172--237--145--87-k8s-calico--kube--controllers--64dc6f8966--xqfsx-eth0" May 8 00:40:37.710935 containerd[1491]: 2025-05-08 00:40:37.672 [INFO][4310] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c62222a4be478afe7e34a61be0332c1667d8e743ab70a5a492afb0cbe0e9fc55" Namespace="calico-system" Pod="calico-kube-controllers-64dc6f8966-xqfsx" WorkloadEndpoint="172--237--145--87-k8s-calico--kube--controllers--64dc6f8966--xqfsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--87-k8s-calico--kube--controllers--64dc6f8966--xqfsx-eth0", GenerateName:"calico-kube-controllers-64dc6f8966-", Namespace:"calico-system", SelfLink:"", UID:"e722fa22-1636-4de9-a89f-f2a8e31c4ced", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64dc6f8966", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-87", ContainerID:"", Pod:"calico-kube-controllers-64dc6f8966-xqfsx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic5a41004ce6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:37.710935 containerd[1491]: 2025-05-08 00:40:37.673 [INFO][4310] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.105.134/32] ContainerID="c62222a4be478afe7e34a61be0332c1667d8e743ab70a5a492afb0cbe0e9fc55" Namespace="calico-system" Pod="calico-kube-controllers-64dc6f8966-xqfsx" WorkloadEndpoint="172--237--145--87-k8s-calico--kube--controllers--64dc6f8966--xqfsx-eth0" May 8 00:40:37.710935 containerd[1491]: 2025-05-08 00:40:37.673 [INFO][4310] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic5a41004ce6 ContainerID="c62222a4be478afe7e34a61be0332c1667d8e743ab70a5a492afb0cbe0e9fc55" Namespace="calico-system" Pod="calico-kube-controllers-64dc6f8966-xqfsx" WorkloadEndpoint="172--237--145--87-k8s-calico--kube--controllers--64dc6f8966--xqfsx-eth0" May 8 00:40:37.710935 containerd[1491]: 2025-05-08 00:40:37.677 [INFO][4310] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c62222a4be478afe7e34a61be0332c1667d8e743ab70a5a492afb0cbe0e9fc55" Namespace="calico-system" 
Pod="calico-kube-controllers-64dc6f8966-xqfsx" WorkloadEndpoint="172--237--145--87-k8s-calico--kube--controllers--64dc6f8966--xqfsx-eth0" May 8 00:40:37.710935 containerd[1491]: 2025-05-08 00:40:37.678 [INFO][4310] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c62222a4be478afe7e34a61be0332c1667d8e743ab70a5a492afb0cbe0e9fc55" Namespace="calico-system" Pod="calico-kube-controllers-64dc6f8966-xqfsx" WorkloadEndpoint="172--237--145--87-k8s-calico--kube--controllers--64dc6f8966--xqfsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--87-k8s-calico--kube--controllers--64dc6f8966--xqfsx-eth0", GenerateName:"calico-kube-controllers-64dc6f8966-", Namespace:"calico-system", SelfLink:"", UID:"e722fa22-1636-4de9-a89f-f2a8e31c4ced", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64dc6f8966", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-87", ContainerID:"c62222a4be478afe7e34a61be0332c1667d8e743ab70a5a492afb0cbe0e9fc55", Pod:"calico-kube-controllers-64dc6f8966-xqfsx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, 
InterfaceName:"calic5a41004ce6", MAC:"c2:0f:59:e4:7c:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:37.710935 containerd[1491]: 2025-05-08 00:40:37.699 [INFO][4310] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c62222a4be478afe7e34a61be0332c1667d8e743ab70a5a492afb0cbe0e9fc55" Namespace="calico-system" Pod="calico-kube-controllers-64dc6f8966-xqfsx" WorkloadEndpoint="172--237--145--87-k8s-calico--kube--controllers--64dc6f8966--xqfsx-eth0" May 8 00:40:37.719434 containerd[1491]: time="2025-05-08T00:40:37.718687856Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:37.719434 containerd[1491]: time="2025-05-08T00:40:37.718921231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:37.719434 containerd[1491]: time="2025-05-08T00:40:37.718938125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:37.726206 containerd[1491]: time="2025-05-08T00:40:37.723115093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:37.767538 kubelet[2621]: E0508 00:40:37.767515 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:37.782512 kubelet[2621]: E0508 00:40:37.780537 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:37.785691 kubelet[2621]: I0508 00:40:37.784420 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-k4bg9" podStartSLOduration=18.784406401 podStartE2EDuration="18.784406401s" podCreationTimestamp="2025-05-08 00:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:40:37.784108837 +0000 UTC m=+23.487693330" watchObservedRunningTime="2025-05-08 00:40:37.784406401 +0000 UTC m=+23.487990904" May 8 00:40:37.787285 kubelet[2621]: E0508 00:40:37.785106 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:37.791111 containerd[1491]: time="2025-05-08T00:40:37.790872008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zddgj,Uid:4a17ca9d-8804-4e7c-a9df-ca043ad979cf,Namespace:calico-system,Attempt:5,} returns sandbox id \"24997d3e96ed863a84399ae33eb0fb9a9033dc1a37e0789593ad3e834698357e\"" May 8 00:40:37.801039 containerd[1491]: time="2025-05-08T00:40:37.800934860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 8 00:40:37.832922 systemd[1]: Started cri-containerd-cf6c3eb8cbaf35698a2d2270fbcc9b7b25cbd6fafb5043588b389d276ecde8ff.scope - libcontainer container 
cf6c3eb8cbaf35698a2d2270fbcc9b7b25cbd6fafb5043588b389d276ecde8ff. May 8 00:40:37.846333 containerd[1491]: time="2025-05-08T00:40:37.846198368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:37.846333 containerd[1491]: time="2025-05-08T00:40:37.846274549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:37.846945 containerd[1491]: time="2025-05-08T00:40:37.846289964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:37.846945 containerd[1491]: time="2025-05-08T00:40:37.846377959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:37.922116 systemd[1]: Started cri-containerd-c62222a4be478afe7e34a61be0332c1667d8e743ab70a5a492afb0cbe0e9fc55.scope - libcontainer container c62222a4be478afe7e34a61be0332c1667d8e743ab70a5a492afb0cbe0e9fc55. 
May 8 00:40:37.959427 containerd[1491]: time="2025-05-08T00:40:37.959335335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8467bd5dfd-5dthz,Uid:f9d2fedb-163e-44bf-9fb8-248856ed9ef7,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"494104ff59af5439b884d25c2d0cd8d3d4e61d97f47665c1b791d030820e22db\"" May 8 00:40:38.127280 containerd[1491]: time="2025-05-08T00:40:38.127210725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64dc6f8966-xqfsx,Uid:e722fa22-1636-4de9-a89f-f2a8e31c4ced,Namespace:calico-system,Attempt:5,} returns sandbox id \"c62222a4be478afe7e34a61be0332c1667d8e743ab70a5a492afb0cbe0e9fc55\"" May 8 00:40:38.137306 containerd[1491]: time="2025-05-08T00:40:38.137236727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8467bd5dfd-82nqn,Uid:5b22028e-782d-431d-8b78-71ebd2d0dd5e,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"cf6c3eb8cbaf35698a2d2270fbcc9b7b25cbd6fafb5043588b389d276ecde8ff\"" May 8 00:40:38.697983 systemd-networkd[1407]: calib20e18c12d8: Gained IPv6LL May 8 00:40:38.761902 systemd-networkd[1407]: cali9dd4991372d: Gained IPv6LL May 8 00:40:38.793705 kubelet[2621]: E0508 00:40:38.793474 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:38.793705 kubelet[2621]: E0508 00:40:38.793621 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:38.826146 systemd-networkd[1407]: cali1a6e4935d2e: Gained IPv6LL May 8 00:40:39.082020 systemd-networkd[1407]: cali348a279ac8a: Gained IPv6LL May 8 00:40:39.211239 systemd-networkd[1407]: calic5a41004ce6: Gained IPv6LL May 8 00:40:39.355926 containerd[1491]: time="2025-05-08T00:40:39.355833107Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:39.357297 containerd[1491]: time="2025-05-08T00:40:39.357264133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 8 00:40:39.358746 containerd[1491]: time="2025-05-08T00:40:39.357924623Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:39.359828 containerd[1491]: time="2025-05-08T00:40:39.359786189Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:39.361385 containerd[1491]: time="2025-05-08T00:40:39.361270339Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.560044179s" May 8 00:40:39.361462 containerd[1491]: time="2025-05-08T00:40:39.361447325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 8 00:40:39.363726 containerd[1491]: time="2025-05-08T00:40:39.363691820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 00:40:39.364246 containerd[1491]: time="2025-05-08T00:40:39.364220106Z" level=info msg="CreateContainer within sandbox \"24997d3e96ed863a84399ae33eb0fb9a9033dc1a37e0789593ad3e834698357e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 8 00:40:39.382543 containerd[1491]: 
time="2025-05-08T00:40:39.381456672Z" level=info msg="CreateContainer within sandbox \"24997d3e96ed863a84399ae33eb0fb9a9033dc1a37e0789593ad3e834698357e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6dabbf5c3ef6c1c2dcbc67a9b4c982a207ce5a80e74d390e5710b2464254d689\"" May 8 00:40:39.383026 containerd[1491]: time="2025-05-08T00:40:39.382981073Z" level=info msg="StartContainer for \"6dabbf5c3ef6c1c2dcbc67a9b4c982a207ce5a80e74d390e5710b2464254d689\"" May 8 00:40:39.417883 systemd[1]: Started cri-containerd-6dabbf5c3ef6c1c2dcbc67a9b4c982a207ce5a80e74d390e5710b2464254d689.scope - libcontainer container 6dabbf5c3ef6c1c2dcbc67a9b4c982a207ce5a80e74d390e5710b2464254d689. May 8 00:40:39.453677 containerd[1491]: time="2025-05-08T00:40:39.451550142Z" level=info msg="StartContainer for \"6dabbf5c3ef6c1c2dcbc67a9b4c982a207ce5a80e74d390e5710b2464254d689\" returns successfully" May 8 00:40:39.466012 systemd-networkd[1407]: calic4c3a0c0ce9: Gained IPv6LL May 8 00:40:39.799875 kubelet[2621]: E0508 00:40:39.799848 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:39.800785 kubelet[2621]: E0508 00:40:39.800686 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:40.771442 containerd[1491]: time="2025-05-08T00:40:40.771398549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:40.772283 containerd[1491]: time="2025-05-08T00:40:40.772161296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 8 00:40:40.773595 containerd[1491]: time="2025-05-08T00:40:40.772773496Z" level=info 
msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:40.774512 containerd[1491]: time="2025-05-08T00:40:40.774481966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:40.775188 containerd[1491]: time="2025-05-08T00:40:40.775154441Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 1.411135917s" May 8 00:40:40.775268 containerd[1491]: time="2025-05-08T00:40:40.775189110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 8 00:40:40.776948 containerd[1491]: time="2025-05-08T00:40:40.776914973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 8 00:40:40.778923 containerd[1491]: time="2025-05-08T00:40:40.778890979Z" level=info msg="CreateContainer within sandbox \"494104ff59af5439b884d25c2d0cd8d3d4e61d97f47665c1b791d030820e22db\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:40:40.798531 containerd[1491]: time="2025-05-08T00:40:40.798504078Z" level=info msg="CreateContainer within sandbox \"494104ff59af5439b884d25c2d0cd8d3d4e61d97f47665c1b791d030820e22db\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1c158770ea923a11005de4ede0bc5e8142b639ce63bbb85783192d10d0a448d0\"" May 8 00:40:40.799019 containerd[1491]: time="2025-05-08T00:40:40.798957679Z" 
level=info msg="StartContainer for \"1c158770ea923a11005de4ede0bc5e8142b639ce63bbb85783192d10d0a448d0\"" May 8 00:40:40.841898 systemd[1]: Started cri-containerd-1c158770ea923a11005de4ede0bc5e8142b639ce63bbb85783192d10d0a448d0.scope - libcontainer container 1c158770ea923a11005de4ede0bc5e8142b639ce63bbb85783192d10d0a448d0. May 8 00:40:40.885622 containerd[1491]: time="2025-05-08T00:40:40.885522909Z" level=info msg="StartContainer for \"1c158770ea923a11005de4ede0bc5e8142b639ce63bbb85783192d10d0a448d0\" returns successfully" May 8 00:40:41.837296 kubelet[2621]: I0508 00:40:41.837239 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-brpjg" podStartSLOduration=22.837223032 podStartE2EDuration="22.837223032s" podCreationTimestamp="2025-05-08 00:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:40:37.849514545 +0000 UTC m=+23.553099038" watchObservedRunningTime="2025-05-08 00:40:41.837223032 +0000 UTC m=+27.540807525" May 8 00:40:42.829504 kubelet[2621]: I0508 00:40:42.829476 2621 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:40:43.083456 containerd[1491]: time="2025-05-08T00:40:43.083299303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:43.084239 containerd[1491]: time="2025-05-08T00:40:43.084002916Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 8 00:40:43.085786 containerd[1491]: time="2025-05-08T00:40:43.084680763Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:43.086717 containerd[1491]: time="2025-05-08T00:40:43.086683630Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:43.087481 containerd[1491]: time="2025-05-08T00:40:43.087452058Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 2.310506307s" May 8 00:40:43.087556 containerd[1491]: time="2025-05-08T00:40:43.087540707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 8 00:40:43.088895 containerd[1491]: time="2025-05-08T00:40:43.088877328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 00:40:43.101613 containerd[1491]: time="2025-05-08T00:40:43.101590688Z" level=info msg="CreateContainer within sandbox \"c62222a4be478afe7e34a61be0332c1667d8e743ab70a5a492afb0cbe0e9fc55\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 8 00:40:43.113036 containerd[1491]: time="2025-05-08T00:40:43.113006665Z" level=info msg="CreateContainer within sandbox \"c62222a4be478afe7e34a61be0332c1667d8e743ab70a5a492afb0cbe0e9fc55\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4122976fea92ad72cbfb8809f281933f47faf3a640387dd8f386dce6c1b7d7b8\"" May 8 00:40:43.113984 containerd[1491]: time="2025-05-08T00:40:43.113935138Z" level=info msg="StartContainer for \"4122976fea92ad72cbfb8809f281933f47faf3a640387dd8f386dce6c1b7d7b8\"" May 8 00:40:43.151891 systemd[1]: Started 
cri-containerd-4122976fea92ad72cbfb8809f281933f47faf3a640387dd8f386dce6c1b7d7b8.scope - libcontainer container 4122976fea92ad72cbfb8809f281933f47faf3a640387dd8f386dce6c1b7d7b8. May 8 00:40:43.190963 containerd[1491]: time="2025-05-08T00:40:43.190917531Z" level=info msg="StartContainer for \"4122976fea92ad72cbfb8809f281933f47faf3a640387dd8f386dce6c1b7d7b8\" returns successfully" May 8 00:40:43.257957 containerd[1491]: time="2025-05-08T00:40:43.257848485Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:43.258672 containerd[1491]: time="2025-05-08T00:40:43.258632346Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 8 00:40:43.262596 containerd[1491]: time="2025-05-08T00:40:43.261704565Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 172.731217ms" May 8 00:40:43.262596 containerd[1491]: time="2025-05-08T00:40:43.262464591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 8 00:40:43.264796 containerd[1491]: time="2025-05-08T00:40:43.264544324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 8 00:40:43.265648 containerd[1491]: time="2025-05-08T00:40:43.265582020Z" level=info msg="CreateContainer within sandbox \"cf6c3eb8cbaf35698a2d2270fbcc9b7b25cbd6fafb5043588b389d276ecde8ff\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:40:43.273127 containerd[1491]: time="2025-05-08T00:40:43.273099017Z" 
level=info msg="CreateContainer within sandbox \"cf6c3eb8cbaf35698a2d2270fbcc9b7b25cbd6fafb5043588b389d276ecde8ff\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5a6e38dac252509c6af28f460801a1ab4e2e29b3dda496ab5d5b4f6d2f881926\"" May 8 00:40:43.273859 containerd[1491]: time="2025-05-08T00:40:43.273815124Z" level=info msg="StartContainer for \"5a6e38dac252509c6af28f460801a1ab4e2e29b3dda496ab5d5b4f6d2f881926\"" May 8 00:40:43.298883 systemd[1]: Started cri-containerd-5a6e38dac252509c6af28f460801a1ab4e2e29b3dda496ab5d5b4f6d2f881926.scope - libcontainer container 5a6e38dac252509c6af28f460801a1ab4e2e29b3dda496ab5d5b4f6d2f881926. May 8 00:40:43.349024 containerd[1491]: time="2025-05-08T00:40:43.348735528Z" level=info msg="StartContainer for \"5a6e38dac252509c6af28f460801a1ab4e2e29b3dda496ab5d5b4f6d2f881926\" returns successfully" May 8 00:40:43.861211 kubelet[2621]: I0508 00:40:43.859563 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8467bd5dfd-5dthz" podStartSLOduration=16.044710069 podStartE2EDuration="18.859549159s" podCreationTimestamp="2025-05-08 00:40:25 +0000 UTC" firstStartedPulling="2025-05-08 00:40:37.961263744 +0000 UTC m=+23.664848237" lastFinishedPulling="2025-05-08 00:40:40.776102834 +0000 UTC m=+26.479687327" observedRunningTime="2025-05-08 00:40:41.838286274 +0000 UTC m=+27.541870777" watchObservedRunningTime="2025-05-08 00:40:43.859549159 +0000 UTC m=+29.563133662" May 8 00:40:43.876224 kubelet[2621]: I0508 00:40:43.875648 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8467bd5dfd-82nqn" podStartSLOduration=13.751357908 podStartE2EDuration="18.875630512s" podCreationTimestamp="2025-05-08 00:40:25 +0000 UTC" firstStartedPulling="2025-05-08 00:40:38.139073508 +0000 UTC m=+23.842658001" lastFinishedPulling="2025-05-08 00:40:43.263346112 +0000 UTC m=+28.966930605" observedRunningTime="2025-05-08 
00:40:43.859839772 +0000 UTC m=+29.563424265" watchObservedRunningTime="2025-05-08 00:40:43.875630512 +0000 UTC m=+29.579215015" May 8 00:40:44.389313 containerd[1491]: time="2025-05-08T00:40:44.389277900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:44.390319 containerd[1491]: time="2025-05-08T00:40:44.390276879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 8 00:40:44.390822 containerd[1491]: time="2025-05-08T00:40:44.390467429Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:44.392304 containerd[1491]: time="2025-05-08T00:40:44.392271627Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:44.393041 containerd[1491]: time="2025-05-08T00:40:44.393009202Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 1.128421559s" May 8 00:40:44.393140 containerd[1491]: time="2025-05-08T00:40:44.393111314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 8 00:40:44.397634 containerd[1491]: time="2025-05-08T00:40:44.397555005Z" level=info msg="CreateContainer within sandbox 
\"24997d3e96ed863a84399ae33eb0fb9a9033dc1a37e0789593ad3e834698357e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 8 00:40:44.410445 containerd[1491]: time="2025-05-08T00:40:44.410413692Z" level=info msg="CreateContainer within sandbox \"24997d3e96ed863a84399ae33eb0fb9a9033dc1a37e0789593ad3e834698357e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"45599a359ade1016838f6745d60ed509c61a9abd9902ed63df618e85edd6099f\"" May 8 00:40:44.410920 containerd[1491]: time="2025-05-08T00:40:44.410819458Z" level=info msg="StartContainer for \"45599a359ade1016838f6745d60ed509c61a9abd9902ed63df618e85edd6099f\"" May 8 00:40:44.444968 systemd[1]: Started cri-containerd-45599a359ade1016838f6745d60ed509c61a9abd9902ed63df618e85edd6099f.scope - libcontainer container 45599a359ade1016838f6745d60ed509c61a9abd9902ed63df618e85edd6099f. May 8 00:40:44.489837 containerd[1491]: time="2025-05-08T00:40:44.489235826Z" level=info msg="StartContainer for \"45599a359ade1016838f6745d60ed509c61a9abd9902ed63df618e85edd6099f\" returns successfully" May 8 00:40:44.854112 kubelet[2621]: I0508 00:40:44.853141 2621 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:40:44.854112 kubelet[2621]: I0508 00:40:44.853194 2621 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:40:44.862323 kubelet[2621]: I0508 00:40:44.861826 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-64dc6f8966-xqfsx" podStartSLOduration=14.903301264 podStartE2EDuration="19.861808164s" podCreationTimestamp="2025-05-08 00:40:25 +0000 UTC" firstStartedPulling="2025-05-08 00:40:38.129712154 +0000 UTC m=+23.833296647" lastFinishedPulling="2025-05-08 00:40:43.088219054 +0000 UTC m=+28.791803547" observedRunningTime="2025-05-08 00:40:43.876170881 +0000 UTC m=+29.579755374" watchObservedRunningTime="2025-05-08 00:40:44.861808164 +0000 UTC 
m=+30.565392667" May 8 00:40:44.862323 kubelet[2621]: I0508 00:40:44.862057 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zddgj" podStartSLOduration=13.267205318 podStartE2EDuration="19.862052495s" podCreationTimestamp="2025-05-08 00:40:25 +0000 UTC" firstStartedPulling="2025-05-08 00:40:37.800184069 +0000 UTC m=+23.503768572" lastFinishedPulling="2025-05-08 00:40:44.395031256 +0000 UTC m=+30.098615749" observedRunningTime="2025-05-08 00:40:44.861562862 +0000 UTC m=+30.565147355" watchObservedRunningTime="2025-05-08 00:40:44.862052495 +0000 UTC m=+30.565636988" May 8 00:40:45.464232 kubelet[2621]: I0508 00:40:45.464167 2621 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 8 00:40:45.464232 kubelet[2621]: I0508 00:40:45.464242 2621 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 8 00:40:48.327554 kubelet[2621]: I0508 00:40:48.327216 2621 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:40:49.241969 systemd[1]: Started sshd@7-172.237.145.87:22-85.208.84.4:29660.service - OpenSSH per-connection server daemon (85.208.84.4:29660). May 8 00:40:49.909712 kubelet[2621]: I0508 00:40:49.909666 2621 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:40:50.032802 sshd[5387]: Invalid user user from 85.208.84.4 port 29660 May 8 00:40:50.183691 sshd[5387]: Connection closed by invalid user user 85.208.84.4 port 29660 [preauth] May 8 00:40:50.185161 systemd[1]: sshd@7-172.237.145.87:22-85.208.84.4:29660.service: Deactivated successfully. 
May 8 00:40:54.103645 kubelet[2621]: I0508 00:40:54.103426 2621 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:40:54.106170 kubelet[2621]: E0508 00:40:54.106137 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:54.657804 kernel: bpftool[5562]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 8 00:40:54.879569 kubelet[2621]: E0508 00:40:54.878698 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:40:54.882276 systemd-networkd[1407]: vxlan.calico: Link UP May 8 00:40:54.882285 systemd-networkd[1407]: vxlan.calico: Gained carrier May 8 00:40:55.554224 kubelet[2621]: I0508 00:40:55.554198 2621 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:40:56.298718 systemd-networkd[1407]: vxlan.calico: Gained IPv6LL May 8 00:41:07.845801 kubelet[2621]: E0508 00:41:07.845503 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 8 00:41:14.385435 containerd[1491]: time="2025-05-08T00:41:14.385083522Z" level=info msg="StopPodSandbox for \"747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9\"" May 8 00:41:14.385435 containerd[1491]: time="2025-05-08T00:41:14.385220706Z" level=info msg="TearDown network for sandbox \"747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9\" successfully" May 8 00:41:14.385435 containerd[1491]: time="2025-05-08T00:41:14.385232407Z" level=info msg="StopPodSandbox for \"747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9\" returns successfully" May 8 00:41:14.385949 containerd[1491]: 
time="2025-05-08T00:41:14.385508027Z" level=info msg="RemovePodSandbox for \"747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9\"" May 8 00:41:14.385949 containerd[1491]: time="2025-05-08T00:41:14.385526729Z" level=info msg="Forcibly stopping sandbox \"747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9\"" May 8 00:41:14.385949 containerd[1491]: time="2025-05-08T00:41:14.385584245Z" level=info msg="TearDown network for sandbox \"747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9\" successfully" May 8 00:41:14.389503 containerd[1491]: time="2025-05-08T00:41:14.389171152Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:41:14.389503 containerd[1491]: time="2025-05-08T00:41:14.389214106Z" level=info msg="RemovePodSandbox \"747e687a11d7a5392517f94be8b0b22b68db69265446a084d3d667e14af3dfc9\" returns successfully" May 8 00:41:14.389687 containerd[1491]: time="2025-05-08T00:41:14.389652083Z" level=info msg="StopPodSandbox for \"0752e2e4c1fc3c96cdabd0d8ccfe3fd2a830515bdf5ffa83b6e6b1cd12a2dacb\"" May 8 00:41:14.389742 containerd[1491]: time="2025-05-08T00:41:14.389724111Z" level=info msg="TearDown network for sandbox \"0752e2e4c1fc3c96cdabd0d8ccfe3fd2a830515bdf5ffa83b6e6b1cd12a2dacb\" successfully" May 8 00:41:14.389742 containerd[1491]: time="2025-05-08T00:41:14.389734132Z" level=info msg="StopPodSandbox for \"0752e2e4c1fc3c96cdabd0d8ccfe3fd2a830515bdf5ffa83b6e6b1cd12a2dacb\" returns successfully" May 8 00:41:14.391814 containerd[1491]: time="2025-05-08T00:41:14.390010021Z" level=info msg="RemovePodSandbox for \"0752e2e4c1fc3c96cdabd0d8ccfe3fd2a830515bdf5ffa83b6e6b1cd12a2dacb\"" May 8 00:41:14.391814 containerd[1491]: time="2025-05-08T00:41:14.390057006Z" level=info msg="Forcibly stopping sandbox 
\"0752e2e4c1fc3c96cdabd0d8ccfe3fd2a830515bdf5ffa83b6e6b1cd12a2dacb\"" May 8 00:41:14.391814 containerd[1491]: time="2025-05-08T00:41:14.390138454Z" level=info msg="TearDown network for sandbox \"0752e2e4c1fc3c96cdabd0d8ccfe3fd2a830515bdf5ffa83b6e6b1cd12a2dacb\" successfully" May 8 00:41:14.393113 containerd[1491]: time="2025-05-08T00:41:14.393067392Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0752e2e4c1fc3c96cdabd0d8ccfe3fd2a830515bdf5ffa83b6e6b1cd12a2dacb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:41:14.393113 containerd[1491]: time="2025-05-08T00:41:14.393110987Z" level=info msg="RemovePodSandbox \"0752e2e4c1fc3c96cdabd0d8ccfe3fd2a830515bdf5ffa83b6e6b1cd12a2dacb\" returns successfully" May 8 00:41:14.396637 containerd[1491]: time="2025-05-08T00:41:14.395072483Z" level=info msg="StopPodSandbox for \"05e02e9b9d7de79e63db0a7700ab920c7215fa70a6b3ee221c18c7455000091c\"" May 8 00:41:14.396637 containerd[1491]: time="2025-05-08T00:41:14.395161813Z" level=info msg="TearDown network for sandbox \"05e02e9b9d7de79e63db0a7700ab920c7215fa70a6b3ee221c18c7455000091c\" successfully" May 8 00:41:14.396637 containerd[1491]: time="2025-05-08T00:41:14.395172544Z" level=info msg="StopPodSandbox for \"05e02e9b9d7de79e63db0a7700ab920c7215fa70a6b3ee221c18c7455000091c\" returns successfully" May 8 00:41:14.396637 containerd[1491]: time="2025-05-08T00:41:14.395355193Z" level=info msg="RemovePodSandbox for \"05e02e9b9d7de79e63db0a7700ab920c7215fa70a6b3ee221c18c7455000091c\"" May 8 00:41:14.396637 containerd[1491]: time="2025-05-08T00:41:14.395370465Z" level=info msg="Forcibly stopping sandbox \"05e02e9b9d7de79e63db0a7700ab920c7215fa70a6b3ee221c18c7455000091c\"" May 8 00:41:14.396637 containerd[1491]: time="2025-05-08T00:41:14.395427071Z" level=info msg="TearDown network for sandbox \"05e02e9b9d7de79e63db0a7700ab920c7215fa70a6b3ee221c18c7455000091c\" successfully" May 8 
00:41:14.398358 containerd[1491]: time="2025-05-08T00:41:14.398247448Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"05e02e9b9d7de79e63db0a7700ab920c7215fa70a6b3ee221c18c7455000091c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:41:14.398453 containerd[1491]: time="2025-05-08T00:41:14.398419406Z" level=info msg="RemovePodSandbox \"05e02e9b9d7de79e63db0a7700ab920c7215fa70a6b3ee221c18c7455000091c\" returns successfully" May 8 00:41:14.398748 containerd[1491]: time="2025-05-08T00:41:14.398716127Z" level=info msg="StopPodSandbox for \"b40e8130042155094025467608123b8e0399b5eae48c78f44df189d00a1dead1\"" May 8 00:41:14.398924 containerd[1491]: time="2025-05-08T00:41:14.398868383Z" level=info msg="TearDown network for sandbox \"b40e8130042155094025467608123b8e0399b5eae48c78f44df189d00a1dead1\" successfully" May 8 00:41:14.398924 containerd[1491]: time="2025-05-08T00:41:14.398885105Z" level=info msg="StopPodSandbox for \"b40e8130042155094025467608123b8e0399b5eae48c78f44df189d00a1dead1\" returns successfully" May 8 00:41:14.399269 containerd[1491]: time="2025-05-08T00:41:14.399238292Z" level=info msg="RemovePodSandbox for \"b40e8130042155094025467608123b8e0399b5eae48c78f44df189d00a1dead1\"" May 8 00:41:14.399269 containerd[1491]: time="2025-05-08T00:41:14.399258894Z" level=info msg="Forcibly stopping sandbox \"b40e8130042155094025467608123b8e0399b5eae48c78f44df189d00a1dead1\"" May 8 00:41:14.399577 containerd[1491]: time="2025-05-08T00:41:14.399396818Z" level=info msg="TearDown network for sandbox \"b40e8130042155094025467608123b8e0399b5eae48c78f44df189d00a1dead1\" successfully" May 8 00:41:14.402374 containerd[1491]: time="2025-05-08T00:41:14.402346190Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b40e8130042155094025467608123b8e0399b5eae48c78f44df189d00a1dead1\": an error occurred when try to find sandbox: not found. 
Sending the event with nil podSandboxStatus." May 8 00:41:14.402447 containerd[1491]: time="2025-05-08T00:41:14.402384874Z" level=info msg="RemovePodSandbox \"b40e8130042155094025467608123b8e0399b5eae48c78f44df189d00a1dead1\" returns successfully" May 8 00:41:14.402664 containerd[1491]: time="2025-05-08T00:41:14.402645011Z" level=info msg="StopPodSandbox for \"97777e45cbf40431556d89cec96a9a0ee1338d7dfab7982da81d00a42b4a222c\"" May 8 00:41:14.402742 containerd[1491]: time="2025-05-08T00:41:14.402732650Z" level=info msg="TearDown network for sandbox \"97777e45cbf40431556d89cec96a9a0ee1338d7dfab7982da81d00a42b4a222c\" successfully" May 8 00:41:14.402807 containerd[1491]: time="2025-05-08T00:41:14.402743201Z" level=info msg="StopPodSandbox for \"97777e45cbf40431556d89cec96a9a0ee1338d7dfab7982da81d00a42b4a222c\" returns successfully" May 8 00:41:14.402987 containerd[1491]: time="2025-05-08T00:41:14.402967475Z" level=info msg="RemovePodSandbox for \"97777e45cbf40431556d89cec96a9a0ee1338d7dfab7982da81d00a42b4a222c\"" May 8 00:41:14.403038 containerd[1491]: time="2025-05-08T00:41:14.402987297Z" level=info msg="Forcibly stopping sandbox \"97777e45cbf40431556d89cec96a9a0ee1338d7dfab7982da81d00a42b4a222c\"" May 8 00:41:14.403107 containerd[1491]: time="2025-05-08T00:41:14.403045993Z" level=info msg="TearDown network for sandbox \"97777e45cbf40431556d89cec96a9a0ee1338d7dfab7982da81d00a42b4a222c\" successfully" May 8 00:41:14.405966 containerd[1491]: time="2025-05-08T00:41:14.405916286Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"97777e45cbf40431556d89cec96a9a0ee1338d7dfab7982da81d00a42b4a222c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:14.405966 containerd[1491]: time="2025-05-08T00:41:14.405958620Z" level=info msg="RemovePodSandbox \"97777e45cbf40431556d89cec96a9a0ee1338d7dfab7982da81d00a42b4a222c\" returns successfully" May 8 00:41:14.406234 containerd[1491]: time="2025-05-08T00:41:14.406209566Z" level=info msg="StopPodSandbox for \"5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4\"" May 8 00:41:14.406316 containerd[1491]: time="2025-05-08T00:41:14.406290815Z" level=info msg="TearDown network for sandbox \"5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4\" successfully" May 8 00:41:14.406316 containerd[1491]: time="2025-05-08T00:41:14.406313367Z" level=info msg="StopPodSandbox for \"5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4\" returns successfully" May 8 00:41:14.408337 containerd[1491]: time="2025-05-08T00:41:14.406607268Z" level=info msg="RemovePodSandbox for \"5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4\"" May 8 00:41:14.408337 containerd[1491]: time="2025-05-08T00:41:14.406700598Z" level=info msg="Forcibly stopping sandbox \"5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4\"" May 8 00:41:14.408337 containerd[1491]: time="2025-05-08T00:41:14.406798988Z" level=info msg="TearDown network for sandbox \"5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4\" successfully" May 8 00:41:14.409429 containerd[1491]: time="2025-05-08T00:41:14.409406802Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:14.409518 containerd[1491]: time="2025-05-08T00:41:14.409500713Z" level=info msg="RemovePodSandbox \"5126f7bb8a56eab67a5ae7e5419a625506d79a6f4fc54b2012eb30bd582a62a4\" returns successfully" May 8 00:41:14.409920 containerd[1491]: time="2025-05-08T00:41:14.409887914Z" level=info msg="StopPodSandbox for \"ed8d903ce906915e4be59d2c548b1b12e2bfe2370b548a2e49f1a93e52d4d479\"" May 8 00:41:14.410308 containerd[1491]: time="2025-05-08T00:41:14.410278015Z" level=info msg="TearDown network for sandbox \"ed8d903ce906915e4be59d2c548b1b12e2bfe2370b548a2e49f1a93e52d4d479\" successfully" May 8 00:41:14.410308 containerd[1491]: time="2025-05-08T00:41:14.410299337Z" level=info msg="StopPodSandbox for \"ed8d903ce906915e4be59d2c548b1b12e2bfe2370b548a2e49f1a93e52d4d479\" returns successfully" May 8 00:41:14.410711 containerd[1491]: time="2025-05-08T00:41:14.410690658Z" level=info msg="RemovePodSandbox for \"ed8d903ce906915e4be59d2c548b1b12e2bfe2370b548a2e49f1a93e52d4d479\"" May 8 00:41:14.410769 containerd[1491]: time="2025-05-08T00:41:14.410712070Z" level=info msg="Forcibly stopping sandbox \"ed8d903ce906915e4be59d2c548b1b12e2bfe2370b548a2e49f1a93e52d4d479\"" May 8 00:41:14.410826 containerd[1491]: time="2025-05-08T00:41:14.410789308Z" level=info msg="TearDown network for sandbox \"ed8d903ce906915e4be59d2c548b1b12e2bfe2370b548a2e49f1a93e52d4d479\" successfully" May 8 00:41:14.413466 containerd[1491]: time="2025-05-08T00:41:14.413433366Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ed8d903ce906915e4be59d2c548b1b12e2bfe2370b548a2e49f1a93e52d4d479\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:14.413697 containerd[1491]: time="2025-05-08T00:41:14.413619037Z" level=info msg="RemovePodSandbox \"ed8d903ce906915e4be59d2c548b1b12e2bfe2370b548a2e49f1a93e52d4d479\" returns successfully" May 8 00:41:14.414156 containerd[1491]: time="2025-05-08T00:41:14.414087156Z" level=info msg="StopPodSandbox for \"a7ae4d43448e86b137c6d2d644d0c10b3be0b86b383119241e89a5f951613c56\"" May 8 00:41:14.414488 containerd[1491]: time="2025-05-08T00:41:14.414335662Z" level=info msg="TearDown network for sandbox \"a7ae4d43448e86b137c6d2d644d0c10b3be0b86b383119241e89a5f951613c56\" successfully" May 8 00:41:14.414488 containerd[1491]: time="2025-05-08T00:41:14.414350283Z" level=info msg="StopPodSandbox for \"a7ae4d43448e86b137c6d2d644d0c10b3be0b86b383119241e89a5f951613c56\" returns successfully" May 8 00:41:14.414721 containerd[1491]: time="2025-05-08T00:41:14.414683618Z" level=info msg="RemovePodSandbox for \"a7ae4d43448e86b137c6d2d644d0c10b3be0b86b383119241e89a5f951613c56\"" May 8 00:41:14.415279 containerd[1491]: time="2025-05-08T00:41:14.414830854Z" level=info msg="Forcibly stopping sandbox \"a7ae4d43448e86b137c6d2d644d0c10b3be0b86b383119241e89a5f951613c56\"" May 8 00:41:14.415375 containerd[1491]: time="2025-05-08T00:41:14.414906202Z" level=info msg="TearDown network for sandbox \"a7ae4d43448e86b137c6d2d644d0c10b3be0b86b383119241e89a5f951613c56\" successfully" May 8 00:41:14.418048 containerd[1491]: time="2025-05-08T00:41:14.418023670Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a7ae4d43448e86b137c6d2d644d0c10b3be0b86b383119241e89a5f951613c56\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:14.418048 containerd[1491]: time="2025-05-08T00:41:14.418052793Z" level=info msg="RemovePodSandbox \"a7ae4d43448e86b137c6d2d644d0c10b3be0b86b383119241e89a5f951613c56\" returns successfully" May 8 00:41:14.418304 containerd[1491]: time="2025-05-08T00:41:14.418278357Z" level=info msg="StopPodSandbox for \"358f015ad9968aec1b9e60bf388cb25b606e34fb2f1fe940ad40da30d1eadde1\"" May 8 00:41:14.418385 containerd[1491]: time="2025-05-08T00:41:14.418367376Z" level=info msg="TearDown network for sandbox \"358f015ad9968aec1b9e60bf388cb25b606e34fb2f1fe940ad40da30d1eadde1\" successfully" May 8 00:41:14.418417 containerd[1491]: time="2025-05-08T00:41:14.418382138Z" level=info msg="StopPodSandbox for \"358f015ad9968aec1b9e60bf388cb25b606e34fb2f1fe940ad40da30d1eadde1\" returns successfully" May 8 00:41:14.418687 containerd[1491]: time="2025-05-08T00:41:14.418666798Z" level=info msg="RemovePodSandbox for \"358f015ad9968aec1b9e60bf388cb25b606e34fb2f1fe940ad40da30d1eadde1\"" May 8 00:41:14.420541 containerd[1491]: time="2025-05-08T00:41:14.418750697Z" level=info msg="Forcibly stopping sandbox \"358f015ad9968aec1b9e60bf388cb25b606e34fb2f1fe940ad40da30d1eadde1\"" May 8 00:41:14.420541 containerd[1491]: time="2025-05-08T00:41:14.418861518Z" level=info msg="TearDown network for sandbox \"358f015ad9968aec1b9e60bf388cb25b606e34fb2f1fe940ad40da30d1eadde1\" successfully" May 8 00:41:14.421496 containerd[1491]: time="2025-05-08T00:41:14.421475954Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"358f015ad9968aec1b9e60bf388cb25b606e34fb2f1fe940ad40da30d1eadde1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:14.421619 containerd[1491]: time="2025-05-08T00:41:14.421588246Z" level=info msg="RemovePodSandbox \"358f015ad9968aec1b9e60bf388cb25b606e34fb2f1fe940ad40da30d1eadde1\" returns successfully" May 8 00:41:14.422011 containerd[1491]: time="2025-05-08T00:41:14.421951764Z" level=info msg="StopPodSandbox for \"0ecc4017d2a363e740725dee38eb28d76e7510f795c3602bcd1bfd4130b7522f\"" May 8 00:41:14.422072 containerd[1491]: time="2025-05-08T00:41:14.422031572Z" level=info msg="TearDown network for sandbox \"0ecc4017d2a363e740725dee38eb28d76e7510f795c3602bcd1bfd4130b7522f\" successfully" May 8 00:41:14.422072 containerd[1491]: time="2025-05-08T00:41:14.422042253Z" level=info msg="StopPodSandbox for \"0ecc4017d2a363e740725dee38eb28d76e7510f795c3602bcd1bfd4130b7522f\" returns successfully" May 8 00:41:14.422452 containerd[1491]: time="2025-05-08T00:41:14.422416803Z" level=info msg="RemovePodSandbox for \"0ecc4017d2a363e740725dee38eb28d76e7510f795c3602bcd1bfd4130b7522f\"" May 8 00:41:14.422701 containerd[1491]: time="2025-05-08T00:41:14.422437705Z" level=info msg="Forcibly stopping sandbox \"0ecc4017d2a363e740725dee38eb28d76e7510f795c3602bcd1bfd4130b7522f\"" May 8 00:41:14.422701 containerd[1491]: time="2025-05-08T00:41:14.422522384Z" level=info msg="TearDown network for sandbox \"0ecc4017d2a363e740725dee38eb28d76e7510f795c3602bcd1bfd4130b7522f\" successfully" May 8 00:41:14.425477 containerd[1491]: time="2025-05-08T00:41:14.425423669Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0ecc4017d2a363e740725dee38eb28d76e7510f795c3602bcd1bfd4130b7522f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:14.425477 containerd[1491]: time="2025-05-08T00:41:14.425453543Z" level=info msg="RemovePodSandbox \"0ecc4017d2a363e740725dee38eb28d76e7510f795c3602bcd1bfd4130b7522f\" returns successfully" May 8 00:41:14.425677 containerd[1491]: time="2025-05-08T00:41:14.425656984Z" level=info msg="StopPodSandbox for \"f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9\"" May 8 00:41:14.425751 containerd[1491]: time="2025-05-08T00:41:14.425730852Z" level=info msg="TearDown network for sandbox \"f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9\" successfully" May 8 00:41:14.425751 containerd[1491]: time="2025-05-08T00:41:14.425744223Z" level=info msg="StopPodSandbox for \"f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9\" returns successfully" May 8 00:41:14.425977 containerd[1491]: time="2025-05-08T00:41:14.425958266Z" level=info msg="RemovePodSandbox for \"f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9\"" May 8 00:41:14.426023 containerd[1491]: time="2025-05-08T00:41:14.425977418Z" level=info msg="Forcibly stopping sandbox \"f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9\"" May 8 00:41:14.426056 containerd[1491]: time="2025-05-08T00:41:14.426036794Z" level=info msg="TearDown network for sandbox \"f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9\" successfully" May 8 00:41:14.428443 containerd[1491]: time="2025-05-08T00:41:14.428421535Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:14.428501 containerd[1491]: time="2025-05-08T00:41:14.428451418Z" level=info msg="RemovePodSandbox \"f535e5229a5d7695755f86fad553cb81ac4be47be145a7b384e670233cf760d9\" returns successfully" May 8 00:41:14.428687 containerd[1491]: time="2025-05-08T00:41:14.428667541Z" level=info msg="StopPodSandbox for \"d9981dcfd683d5350f687c37eaac4eefbad0bb6391cb85325f404dbd63fd4079\"" May 8 00:41:14.428773 containerd[1491]: time="2025-05-08T00:41:14.428738888Z" level=info msg="TearDown network for sandbox \"d9981dcfd683d5350f687c37eaac4eefbad0bb6391cb85325f404dbd63fd4079\" successfully" May 8 00:41:14.428808 containerd[1491]: time="2025-05-08T00:41:14.428773362Z" level=info msg="StopPodSandbox for \"d9981dcfd683d5350f687c37eaac4eefbad0bb6391cb85325f404dbd63fd4079\" returns successfully" May 8 00:41:14.429097 containerd[1491]: time="2025-05-08T00:41:14.429052471Z" level=info msg="RemovePodSandbox for \"d9981dcfd683d5350f687c37eaac4eefbad0bb6391cb85325f404dbd63fd4079\"" May 8 00:41:14.429097 containerd[1491]: time="2025-05-08T00:41:14.429092835Z" level=info msg="Forcibly stopping sandbox \"d9981dcfd683d5350f687c37eaac4eefbad0bb6391cb85325f404dbd63fd4079\"" May 8 00:41:14.429216 containerd[1491]: time="2025-05-08T00:41:14.429170313Z" level=info msg="TearDown network for sandbox \"d9981dcfd683d5350f687c37eaac4eefbad0bb6391cb85325f404dbd63fd4079\" successfully" May 8 00:41:14.431509 containerd[1491]: time="2025-05-08T00:41:14.431480667Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d9981dcfd683d5350f687c37eaac4eefbad0bb6391cb85325f404dbd63fd4079\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:14.431566 containerd[1491]: time="2025-05-08T00:41:14.431512591Z" level=info msg="RemovePodSandbox \"d9981dcfd683d5350f687c37eaac4eefbad0bb6391cb85325f404dbd63fd4079\" returns successfully" May 8 00:41:14.431802 containerd[1491]: time="2025-05-08T00:41:14.431781769Z" level=info msg="StopPodSandbox for \"56c46a472ab597cec1516d84242cba4726423739869d0cb3f172bd4df3f1cd42\"" May 8 00:41:14.431871 containerd[1491]: time="2025-05-08T00:41:14.431854237Z" level=info msg="TearDown network for sandbox \"56c46a472ab597cec1516d84242cba4726423739869d0cb3f172bd4df3f1cd42\" successfully" May 8 00:41:14.431871 containerd[1491]: time="2025-05-08T00:41:14.431866998Z" level=info msg="StopPodSandbox for \"56c46a472ab597cec1516d84242cba4726423739869d0cb3f172bd4df3f1cd42\" returns successfully" May 8 00:41:14.432187 containerd[1491]: time="2025-05-08T00:41:14.432144047Z" level=info msg="RemovePodSandbox for \"56c46a472ab597cec1516d84242cba4726423739869d0cb3f172bd4df3f1cd42\"" May 8 00:41:14.432187 containerd[1491]: time="2025-05-08T00:41:14.432179541Z" level=info msg="Forcibly stopping sandbox \"56c46a472ab597cec1516d84242cba4726423739869d0cb3f172bd4df3f1cd42\"" May 8 00:41:14.432310 containerd[1491]: time="2025-05-08T00:41:14.432269400Z" level=info msg="TearDown network for sandbox \"56c46a472ab597cec1516d84242cba4726423739869d0cb3f172bd4df3f1cd42\" successfully" May 8 00:41:14.436988 containerd[1491]: time="2025-05-08T00:41:14.436895617Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"56c46a472ab597cec1516d84242cba4726423739869d0cb3f172bd4df3f1cd42\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:14.436988 containerd[1491]: time="2025-05-08T00:41:14.436942552Z" level=info msg="RemovePodSandbox \"56c46a472ab597cec1516d84242cba4726423739869d0cb3f172bd4df3f1cd42\" returns successfully" May 8 00:41:14.437459 containerd[1491]: time="2025-05-08T00:41:14.437316231Z" level=info msg="StopPodSandbox for \"8c56c83ccbd99ba3d763c33352608bfdaac4bff8de56cd1e3dfcae5811061ea8\"" May 8 00:41:14.437459 containerd[1491]: time="2025-05-08T00:41:14.437406340Z" level=info msg="TearDown network for sandbox \"8c56c83ccbd99ba3d763c33352608bfdaac4bff8de56cd1e3dfcae5811061ea8\" successfully" May 8 00:41:14.437459 containerd[1491]: time="2025-05-08T00:41:14.437415931Z" level=info msg="StopPodSandbox for \"8c56c83ccbd99ba3d763c33352608bfdaac4bff8de56cd1e3dfcae5811061ea8\" returns successfully" May 8 00:41:14.438623 containerd[1491]: time="2025-05-08T00:41:14.438227698Z" level=info msg="RemovePodSandbox for \"8c56c83ccbd99ba3d763c33352608bfdaac4bff8de56cd1e3dfcae5811061ea8\"" May 8 00:41:14.438623 containerd[1491]: time="2025-05-08T00:41:14.438276033Z" level=info msg="Forcibly stopping sandbox \"8c56c83ccbd99ba3d763c33352608bfdaac4bff8de56cd1e3dfcae5811061ea8\"" May 8 00:41:14.438623 containerd[1491]: time="2025-05-08T00:41:14.438364342Z" level=info msg="TearDown network for sandbox \"8c56c83ccbd99ba3d763c33352608bfdaac4bff8de56cd1e3dfcae5811061ea8\" successfully" May 8 00:41:14.442199 containerd[1491]: time="2025-05-08T00:41:14.442178764Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8c56c83ccbd99ba3d763c33352608bfdaac4bff8de56cd1e3dfcae5811061ea8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:14.442374 containerd[1491]: time="2025-05-08T00:41:14.442304657Z" level=info msg="RemovePodSandbox \"8c56c83ccbd99ba3d763c33352608bfdaac4bff8de56cd1e3dfcae5811061ea8\" returns successfully" May 8 00:41:14.442705 containerd[1491]: time="2025-05-08T00:41:14.442669605Z" level=info msg="StopPodSandbox for \"dd02525965a1e7c0bd8707c4b22dc7f5aa2c392b2e078313642bd058e236b055\"" May 8 00:41:14.442806 containerd[1491]: time="2025-05-08T00:41:14.442787117Z" level=info msg="TearDown network for sandbox \"dd02525965a1e7c0bd8707c4b22dc7f5aa2c392b2e078313642bd058e236b055\" successfully" May 8 00:41:14.442806 containerd[1491]: time="2025-05-08T00:41:14.442802449Z" level=info msg="StopPodSandbox for \"dd02525965a1e7c0bd8707c4b22dc7f5aa2c392b2e078313642bd058e236b055\" returns successfully" May 8 00:41:14.444665 containerd[1491]: time="2025-05-08T00:41:14.443076218Z" level=info msg="RemovePodSandbox for \"dd02525965a1e7c0bd8707c4b22dc7f5aa2c392b2e078313642bd058e236b055\"" May 8 00:41:14.444665 containerd[1491]: time="2025-05-08T00:41:14.443097720Z" level=info msg="Forcibly stopping sandbox \"dd02525965a1e7c0bd8707c4b22dc7f5aa2c392b2e078313642bd058e236b055\"" May 8 00:41:14.444665 containerd[1491]: time="2025-05-08T00:41:14.443159206Z" level=info msg="TearDown network for sandbox \"dd02525965a1e7c0bd8707c4b22dc7f5aa2c392b2e078313642bd058e236b055\" successfully" May 8 00:41:14.447434 containerd[1491]: time="2025-05-08T00:41:14.447396032Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd02525965a1e7c0bd8707c4b22dc7f5aa2c392b2e078313642bd058e236b055\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:14.447485 containerd[1491]: time="2025-05-08T00:41:14.447456119Z" level=info msg="RemovePodSandbox \"dd02525965a1e7c0bd8707c4b22dc7f5aa2c392b2e078313642bd058e236b055\" returns successfully" May 8 00:41:14.447769 containerd[1491]: time="2025-05-08T00:41:14.447698645Z" level=info msg="StopPodSandbox for \"465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5\"" May 8 00:41:14.447868 containerd[1491]: time="2025-05-08T00:41:14.447792715Z" level=info msg="TearDown network for sandbox \"465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5\" successfully" May 8 00:41:14.447868 containerd[1491]: time="2025-05-08T00:41:14.447802826Z" level=info msg="StopPodSandbox for \"465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5\" returns successfully" May 8 00:41:14.448097 containerd[1491]: time="2025-05-08T00:41:14.448014868Z" level=info msg="RemovePodSandbox for \"465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5\"" May 8 00:41:14.448097 containerd[1491]: time="2025-05-08T00:41:14.448036530Z" level=info msg="Forcibly stopping sandbox \"465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5\"" May 8 00:41:14.448156 containerd[1491]: time="2025-05-08T00:41:14.448094476Z" level=info msg="TearDown network for sandbox \"465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5\" successfully" May 8 00:41:14.461519 containerd[1491]: time="2025-05-08T00:41:14.461237900Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:14.461519 containerd[1491]: time="2025-05-08T00:41:14.461290435Z" level=info msg="RemovePodSandbox \"465909cee8f416659d3fd2ff2fcb9d3d1416879d0166ae9d9059e9f290b8f6b5\" returns successfully" May 8 00:41:14.462026 containerd[1491]: time="2025-05-08T00:41:14.462004801Z" level=info msg="StopPodSandbox for \"a7b1a151b4a8f7dfb55c8eaea6094f30859a315a82c65c4b754dc7a59d5ae806\"" May 8 00:41:14.462466 containerd[1491]: time="2025-05-08T00:41:14.462146856Z" level=info msg="TearDown network for sandbox \"a7b1a151b4a8f7dfb55c8eaea6094f30859a315a82c65c4b754dc7a59d5ae806\" successfully" May 8 00:41:14.463834 containerd[1491]: time="2025-05-08T00:41:14.462548528Z" level=info msg="StopPodSandbox for \"a7b1a151b4a8f7dfb55c8eaea6094f30859a315a82c65c4b754dc7a59d5ae806\" returns successfully" May 8 00:41:14.464068 containerd[1491]: time="2025-05-08T00:41:14.464049726Z" level=info msg="RemovePodSandbox for \"a7b1a151b4a8f7dfb55c8eaea6094f30859a315a82c65c4b754dc7a59d5ae806\"" May 8 00:41:14.464576 containerd[1491]: time="2025-05-08T00:41:14.464517265Z" level=info msg="Forcibly stopping sandbox \"a7b1a151b4a8f7dfb55c8eaea6094f30859a315a82c65c4b754dc7a59d5ae806\"" May 8 00:41:14.464988 containerd[1491]: time="2025-05-08T00:41:14.464833198Z" level=info msg="TearDown network for sandbox \"a7b1a151b4a8f7dfb55c8eaea6094f30859a315a82c65c4b754dc7a59d5ae806\" successfully" May 8 00:41:14.470401 containerd[1491]: time="2025-05-08T00:41:14.470366501Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a7b1a151b4a8f7dfb55c8eaea6094f30859a315a82c65c4b754dc7a59d5ae806\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:14.470446 containerd[1491]: time="2025-05-08T00:41:14.470408146Z" level=info msg="RemovePodSandbox \"a7b1a151b4a8f7dfb55c8eaea6094f30859a315a82c65c4b754dc7a59d5ae806\" returns successfully" May 8 00:41:14.470690 containerd[1491]: time="2025-05-08T00:41:14.470663362Z" level=info msg="StopPodSandbox for \"f067a880c08339961367a5e5e32bc68183182676f19b3aa24c28dfd0dbec1cf3\"" May 8 00:41:14.470834 containerd[1491]: time="2025-05-08T00:41:14.470779895Z" level=info msg="TearDown network for sandbox \"f067a880c08339961367a5e5e32bc68183182676f19b3aa24c28dfd0dbec1cf3\" successfully" May 8 00:41:14.470834 containerd[1491]: time="2025-05-08T00:41:14.470800377Z" level=info msg="StopPodSandbox for \"f067a880c08339961367a5e5e32bc68183182676f19b3aa24c28dfd0dbec1cf3\" returns successfully" May 8 00:41:14.471283 containerd[1491]: time="2025-05-08T00:41:14.471249244Z" level=info msg="RemovePodSandbox for \"f067a880c08339961367a5e5e32bc68183182676f19b3aa24c28dfd0dbec1cf3\"" May 8 00:41:14.471672 containerd[1491]: time="2025-05-08T00:41:14.471653327Z" level=info msg="Forcibly stopping sandbox \"f067a880c08339961367a5e5e32bc68183182676f19b3aa24c28dfd0dbec1cf3\"" May 8 00:41:14.471960 containerd[1491]: time="2025-05-08T00:41:14.471802283Z" level=info msg="TearDown network for sandbox \"f067a880c08339961367a5e5e32bc68183182676f19b3aa24c28dfd0dbec1cf3\" successfully" May 8 00:41:14.476837 containerd[1491]: time="2025-05-08T00:41:14.476778776Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f067a880c08339961367a5e5e32bc68183182676f19b3aa24c28dfd0dbec1cf3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:14.476999 containerd[1491]: time="2025-05-08T00:41:14.476961975Z" level=info msg="RemovePodSandbox \"f067a880c08339961367a5e5e32bc68183182676f19b3aa24c28dfd0dbec1cf3\" returns successfully" May 8 00:41:14.477416 containerd[1491]: time="2025-05-08T00:41:14.477291350Z" level=info msg="StopPodSandbox for \"f93c4ce7b7dc46206854e36e8af1949ccc45d66e6436abd3879374357fb4df9a\"" May 8 00:41:14.477416 containerd[1491]: time="2025-05-08T00:41:14.477370028Z" level=info msg="TearDown network for sandbox \"f93c4ce7b7dc46206854e36e8af1949ccc45d66e6436abd3879374357fb4df9a\" successfully" May 8 00:41:14.477416 containerd[1491]: time="2025-05-08T00:41:14.477380459Z" level=info msg="StopPodSandbox for \"f93c4ce7b7dc46206854e36e8af1949ccc45d66e6436abd3879374357fb4df9a\" returns successfully" May 8 00:41:14.477911 containerd[1491]: time="2025-05-08T00:41:14.477785063Z" level=info msg="RemovePodSandbox for \"f93c4ce7b7dc46206854e36e8af1949ccc45d66e6436abd3879374357fb4df9a\"" May 8 00:41:14.477911 containerd[1491]: time="2025-05-08T00:41:14.477803765Z" level=info msg="Forcibly stopping sandbox \"f93c4ce7b7dc46206854e36e8af1949ccc45d66e6436abd3879374357fb4df9a\"" May 8 00:41:14.477911 containerd[1491]: time="2025-05-08T00:41:14.477864771Z" level=info msg="TearDown network for sandbox \"f93c4ce7b7dc46206854e36e8af1949ccc45d66e6436abd3879374357fb4df9a\" successfully" May 8 00:41:14.480463 containerd[1491]: time="2025-05-08T00:41:14.480354623Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f93c4ce7b7dc46206854e36e8af1949ccc45d66e6436abd3879374357fb4df9a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:14.480463 containerd[1491]: time="2025-05-08T00:41:14.480401598Z" level=info msg="RemovePodSandbox \"f93c4ce7b7dc46206854e36e8af1949ccc45d66e6436abd3879374357fb4df9a\" returns successfully" May 8 00:41:14.480888 containerd[1491]: time="2025-05-08T00:41:14.480705480Z" level=info msg="StopPodSandbox for \"cf161ceeba41ad0dd57f40f146a84c09fbb7afcb80b38d4c7e5cc2f27c797913\"" May 8 00:41:14.480888 containerd[1491]: time="2025-05-08T00:41:14.480801240Z" level=info msg="TearDown network for sandbox \"cf161ceeba41ad0dd57f40f146a84c09fbb7afcb80b38d4c7e5cc2f27c797913\" successfully" May 8 00:41:14.480888 containerd[1491]: time="2025-05-08T00:41:14.480810981Z" level=info msg="StopPodSandbox for \"cf161ceeba41ad0dd57f40f146a84c09fbb7afcb80b38d4c7e5cc2f27c797913\" returns successfully" May 8 00:41:14.481440 containerd[1491]: time="2025-05-08T00:41:14.481221684Z" level=info msg="RemovePodSandbox for \"cf161ceeba41ad0dd57f40f146a84c09fbb7afcb80b38d4c7e5cc2f27c797913\"" May 8 00:41:14.481440 containerd[1491]: time="2025-05-08T00:41:14.481327385Z" level=info msg="Forcibly stopping sandbox \"cf161ceeba41ad0dd57f40f146a84c09fbb7afcb80b38d4c7e5cc2f27c797913\"" May 8 00:41:14.481440 containerd[1491]: time="2025-05-08T00:41:14.481392522Z" level=info msg="TearDown network for sandbox \"cf161ceeba41ad0dd57f40f146a84c09fbb7afcb80b38d4c7e5cc2f27c797913\" successfully" May 8 00:41:14.483748 containerd[1491]: time="2025-05-08T00:41:14.483728459Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf161ceeba41ad0dd57f40f146a84c09fbb7afcb80b38d4c7e5cc2f27c797913\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:14.483940 containerd[1491]: time="2025-05-08T00:41:14.483852572Z" level=info msg="RemovePodSandbox \"cf161ceeba41ad0dd57f40f146a84c09fbb7afcb80b38d4c7e5cc2f27c797913\" returns successfully" May 8 00:41:14.484323 containerd[1491]: time="2025-05-08T00:41:14.484135921Z" level=info msg="StopPodSandbox for \"e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1\"" May 8 00:41:14.484323 containerd[1491]: time="2025-05-08T00:41:14.484204398Z" level=info msg="TearDown network for sandbox \"e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1\" successfully" May 8 00:41:14.484323 containerd[1491]: time="2025-05-08T00:41:14.484213359Z" level=info msg="StopPodSandbox for \"e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1\" returns successfully" May 8 00:41:14.484593 containerd[1491]: time="2025-05-08T00:41:14.484548444Z" level=info msg="RemovePodSandbox for \"e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1\"" May 8 00:41:14.484627 containerd[1491]: time="2025-05-08T00:41:14.484598010Z" level=info msg="Forcibly stopping sandbox \"e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1\"" May 8 00:41:14.484729 containerd[1491]: time="2025-05-08T00:41:14.484688489Z" level=info msg="TearDown network for sandbox \"e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1\" successfully" May 8 00:41:14.487311 containerd[1491]: time="2025-05-08T00:41:14.487281402Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:14.487370 containerd[1491]: time="2025-05-08T00:41:14.487323246Z" level=info msg="RemovePodSandbox \"e36847f1022454455c0537e8d607b435eb6381bb7c72e55a2b63c876d38de5d1\" returns successfully" May 8 00:41:14.487697 containerd[1491]: time="2025-05-08T00:41:14.487568523Z" level=info msg="StopPodSandbox for \"45f1bc4188b40d89deab8c14e66151ace4b21c03ffba5d31b3a90a268b77ec65\"" May 8 00:41:14.487697 containerd[1491]: time="2025-05-08T00:41:14.487644281Z" level=info msg="TearDown network for sandbox \"45f1bc4188b40d89deab8c14e66151ace4b21c03ffba5d31b3a90a268b77ec65\" successfully" May 8 00:41:14.487697 containerd[1491]: time="2025-05-08T00:41:14.487653682Z" level=info msg="StopPodSandbox for \"45f1bc4188b40d89deab8c14e66151ace4b21c03ffba5d31b3a90a268b77ec65\" returns successfully" May 8 00:41:14.487936 containerd[1491]: time="2025-05-08T00:41:14.487894407Z" level=info msg="RemovePodSandbox for \"45f1bc4188b40d89deab8c14e66151ace4b21c03ffba5d31b3a90a268b77ec65\"" May 8 00:41:14.487936 containerd[1491]: time="2025-05-08T00:41:14.487920280Z" level=info msg="Forcibly stopping sandbox \"45f1bc4188b40d89deab8c14e66151ace4b21c03ffba5d31b3a90a268b77ec65\"" May 8 00:41:14.488024 containerd[1491]: time="2025-05-08T00:41:14.487986827Z" level=info msg="TearDown network for sandbox \"45f1bc4188b40d89deab8c14e66151ace4b21c03ffba5d31b3a90a268b77ec65\" successfully" May 8 00:41:14.490688 containerd[1491]: time="2025-05-08T00:41:14.490546036Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"45f1bc4188b40d89deab8c14e66151ace4b21c03ffba5d31b3a90a268b77ec65\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:14.490688 containerd[1491]: time="2025-05-08T00:41:14.490586580Z" level=info msg="RemovePodSandbox \"45f1bc4188b40d89deab8c14e66151ace4b21c03ffba5d31b3a90a268b77ec65\" returns successfully" May 8 00:41:14.490859 containerd[1491]: time="2025-05-08T00:41:14.490824565Z" level=info msg="StopPodSandbox for \"d43a6f010b5fd1738d7377a5a4fa7dbf6e3bb7a7c714a1599d9433c9ddd1269a\"" May 8 00:41:14.490912 containerd[1491]: time="2025-05-08T00:41:14.490892222Z" level=info msg="TearDown network for sandbox \"d43a6f010b5fd1738d7377a5a4fa7dbf6e3bb7a7c714a1599d9433c9ddd1269a\" successfully" May 8 00:41:14.490912 containerd[1491]: time="2025-05-08T00:41:14.490908674Z" level=info msg="StopPodSandbox for \"d43a6f010b5fd1738d7377a5a4fa7dbf6e3bb7a7c714a1599d9433c9ddd1269a\" returns successfully" May 8 00:41:14.491138 containerd[1491]: time="2025-05-08T00:41:14.491098314Z" level=info msg="RemovePodSandbox for \"d43a6f010b5fd1738d7377a5a4fa7dbf6e3bb7a7c714a1599d9433c9ddd1269a\"" May 8 00:41:14.491138 containerd[1491]: time="2025-05-08T00:41:14.491122176Z" level=info msg="Forcibly stopping sandbox \"d43a6f010b5fd1738d7377a5a4fa7dbf6e3bb7a7c714a1599d9433c9ddd1269a\"" May 8 00:41:14.491231 containerd[1491]: time="2025-05-08T00:41:14.491180822Z" level=info msg="TearDown network for sandbox \"d43a6f010b5fd1738d7377a5a4fa7dbf6e3bb7a7c714a1599d9433c9ddd1269a\" successfully" May 8 00:41:14.493580 containerd[1491]: time="2025-05-08T00:41:14.493547312Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d43a6f010b5fd1738d7377a5a4fa7dbf6e3bb7a7c714a1599d9433c9ddd1269a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:14.493633 containerd[1491]: time="2025-05-08T00:41:14.493583366Z" level=info msg="RemovePodSandbox \"d43a6f010b5fd1738d7377a5a4fa7dbf6e3bb7a7c714a1599d9433c9ddd1269a\" returns successfully" May 8 00:41:14.493833 containerd[1491]: time="2025-05-08T00:41:14.493813430Z" level=info msg="StopPodSandbox for \"002f1030982970476d0aff339dbf3ae6da9692c85bec819c8ff642d9c34ef3f6\"" May 8 00:41:14.493904 containerd[1491]: time="2025-05-08T00:41:14.493886858Z" level=info msg="TearDown network for sandbox \"002f1030982970476d0aff339dbf3ae6da9692c85bec819c8ff642d9c34ef3f6\" successfully" May 8 00:41:14.493904 containerd[1491]: time="2025-05-08T00:41:14.493901189Z" level=info msg="StopPodSandbox for \"002f1030982970476d0aff339dbf3ae6da9692c85bec819c8ff642d9c34ef3f6\" returns successfully" May 8 00:41:14.494212 containerd[1491]: time="2025-05-08T00:41:14.494109951Z" level=info msg="RemovePodSandbox for \"002f1030982970476d0aff339dbf3ae6da9692c85bec819c8ff642d9c34ef3f6\"" May 8 00:41:14.494257 containerd[1491]: time="2025-05-08T00:41:14.494228014Z" level=info msg="Forcibly stopping sandbox \"002f1030982970476d0aff339dbf3ae6da9692c85bec819c8ff642d9c34ef3f6\"" May 8 00:41:14.494318 containerd[1491]: time="2025-05-08T00:41:14.494285440Z" level=info msg="TearDown network for sandbox \"002f1030982970476d0aff339dbf3ae6da9692c85bec819c8ff642d9c34ef3f6\" successfully" May 8 00:41:14.496688 containerd[1491]: time="2025-05-08T00:41:14.496652779Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"002f1030982970476d0aff339dbf3ae6da9692c85bec819c8ff642d9c34ef3f6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:14.496736 containerd[1491]: time="2025-05-08T00:41:14.496705434Z" level=info msg="RemovePodSandbox \"002f1030982970476d0aff339dbf3ae6da9692c85bec819c8ff642d9c34ef3f6\" returns successfully" May 8 00:41:14.497095 containerd[1491]: time="2025-05-08T00:41:14.496975683Z" level=info msg="StopPodSandbox for \"d693bf4de41fc7c31dff976fede29b497274b1d20374fa4b7732b73a692fa7be\"" May 8 00:41:14.497268 containerd[1491]: time="2025-05-08T00:41:14.497173463Z" level=info msg="TearDown network for sandbox \"d693bf4de41fc7c31dff976fede29b497274b1d20374fa4b7732b73a692fa7be\" successfully" May 8 00:41:14.497268 containerd[1491]: time="2025-05-08T00:41:14.497192065Z" level=info msg="StopPodSandbox for \"d693bf4de41fc7c31dff976fede29b497274b1d20374fa4b7732b73a692fa7be\" returns successfully" May 8 00:41:14.497476 containerd[1491]: time="2025-05-08T00:41:14.497450312Z" level=info msg="RemovePodSandbox for \"d693bf4de41fc7c31dff976fede29b497274b1d20374fa4b7732b73a692fa7be\"" May 8 00:41:14.497516 containerd[1491]: time="2025-05-08T00:41:14.497476805Z" level=info msg="Forcibly stopping sandbox \"d693bf4de41fc7c31dff976fede29b497274b1d20374fa4b7732b73a692fa7be\"" May 8 00:41:14.497616 containerd[1491]: time="2025-05-08T00:41:14.497540403Z" level=info msg="TearDown network for sandbox \"d693bf4de41fc7c31dff976fede29b497274b1d20374fa4b7732b73a692fa7be\" successfully" May 8 00:41:14.499834 containerd[1491]: time="2025-05-08T00:41:14.499801271Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d693bf4de41fc7c31dff976fede29b497274b1d20374fa4b7732b73a692fa7be\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:14.499880 containerd[1491]: time="2025-05-08T00:41:14.499836074Z" level=info msg="RemovePodSandbox \"d693bf4de41fc7c31dff976fede29b497274b1d20374fa4b7732b73a692fa7be\" returns successfully" May 8 00:41:14.500220 containerd[1491]: time="2025-05-08T00:41:14.500088041Z" level=info msg="StopPodSandbox for \"ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b\"" May 8 00:41:14.500220 containerd[1491]: time="2025-05-08T00:41:14.500167219Z" level=info msg="TearDown network for sandbox \"ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b\" successfully" May 8 00:41:14.500220 containerd[1491]: time="2025-05-08T00:41:14.500177000Z" level=info msg="StopPodSandbox for \"ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b\" returns successfully" May 8 00:41:14.500402 containerd[1491]: time="2025-05-08T00:41:14.500351608Z" level=info msg="RemovePodSandbox for \"ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b\"" May 8 00:41:14.500402 containerd[1491]: time="2025-05-08T00:41:14.500377531Z" level=info msg="Forcibly stopping sandbox \"ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b\"" May 8 00:41:14.500498 containerd[1491]: time="2025-05-08T00:41:14.500443848Z" level=info msg="TearDown network for sandbox \"ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b\" successfully" May 8 00:41:14.502821 containerd[1491]: time="2025-05-08T00:41:14.502790605Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:14.502871 containerd[1491]: time="2025-05-08T00:41:14.502834370Z" level=info msg="RemovePodSandbox \"ec3e39baa4107da07a1de046629bb977f524f24567f62ebb874be4701f1b325b\" returns successfully" May 8 00:41:14.503071 containerd[1491]: time="2025-05-08T00:41:14.503039121Z" level=info msg="StopPodSandbox for \"fa7065da0118e04f1a126e4be64179d0e738f9e757d1afb2770b67e71279329b\"" May 8 00:41:14.503133 containerd[1491]: time="2025-05-08T00:41:14.503115729Z" level=info msg="TearDown network for sandbox \"fa7065da0118e04f1a126e4be64179d0e738f9e757d1afb2770b67e71279329b\" successfully" May 8 00:41:14.503133 containerd[1491]: time="2025-05-08T00:41:14.503129951Z" level=info msg="StopPodSandbox for \"fa7065da0118e04f1a126e4be64179d0e738f9e757d1afb2770b67e71279329b\" returns successfully" May 8 00:41:14.503451 containerd[1491]: time="2025-05-08T00:41:14.503355124Z" level=info msg="RemovePodSandbox for \"fa7065da0118e04f1a126e4be64179d0e738f9e757d1afb2770b67e71279329b\"" May 8 00:41:14.503501 containerd[1491]: time="2025-05-08T00:41:14.503451964Z" level=info msg="Forcibly stopping sandbox \"fa7065da0118e04f1a126e4be64179d0e738f9e757d1afb2770b67e71279329b\"" May 8 00:41:14.503539 containerd[1491]: time="2025-05-08T00:41:14.503511081Z" level=info msg="TearDown network for sandbox \"fa7065da0118e04f1a126e4be64179d0e738f9e757d1afb2770b67e71279329b\" successfully" May 8 00:41:14.505811 containerd[1491]: time="2025-05-08T00:41:14.505780190Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fa7065da0118e04f1a126e4be64179d0e738f9e757d1afb2770b67e71279329b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:14.505865 containerd[1491]: time="2025-05-08T00:41:14.505814714Z" level=info msg="RemovePodSandbox \"fa7065da0118e04f1a126e4be64179d0e738f9e757d1afb2770b67e71279329b\" returns successfully" May 8 00:41:14.506070 containerd[1491]: time="2025-05-08T00:41:14.506034717Z" level=info msg="StopPodSandbox for \"58ec37b751dbdc97a55651befa24f73c0fd968b55015f3df068e12e2aaaeb962\"" May 8 00:41:14.506140 containerd[1491]: time="2025-05-08T00:41:14.506108395Z" level=info msg="TearDown network for sandbox \"58ec37b751dbdc97a55651befa24f73c0fd968b55015f3df068e12e2aaaeb962\" successfully" May 8 00:41:14.506140 containerd[1491]: time="2025-05-08T00:41:14.506125847Z" level=info msg="StopPodSandbox for \"58ec37b751dbdc97a55651befa24f73c0fd968b55015f3df068e12e2aaaeb962\" returns successfully" May 8 00:41:14.506420 containerd[1491]: time="2025-05-08T00:41:14.506378993Z" level=info msg="RemovePodSandbox for \"58ec37b751dbdc97a55651befa24f73c0fd968b55015f3df068e12e2aaaeb962\"" May 8 00:41:14.506420 containerd[1491]: time="2025-05-08T00:41:14.506402416Z" level=info msg="Forcibly stopping sandbox \"58ec37b751dbdc97a55651befa24f73c0fd968b55015f3df068e12e2aaaeb962\"" May 8 00:41:14.506490 containerd[1491]: time="2025-05-08T00:41:14.506463052Z" level=info msg="TearDown network for sandbox \"58ec37b751dbdc97a55651befa24f73c0fd968b55015f3df068e12e2aaaeb962\" successfully" May 8 00:41:14.508907 containerd[1491]: time="2025-05-08T00:41:14.508824930Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"58ec37b751dbdc97a55651befa24f73c0fd968b55015f3df068e12e2aaaeb962\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:14.508907 containerd[1491]: time="2025-05-08T00:41:14.508857694Z" level=info msg="RemovePodSandbox \"58ec37b751dbdc97a55651befa24f73c0fd968b55015f3df068e12e2aaaeb962\" returns successfully" May 8 00:41:14.509249 containerd[1491]: time="2025-05-08T00:41:14.509153715Z" level=info msg="StopPodSandbox for \"2fad8fc95c1f7bc5173d78cef14bae77a709fb3f7538a4f8bd917d24cd992f77\"" May 8 00:41:14.509304 containerd[1491]: time="2025-05-08T00:41:14.509285529Z" level=info msg="TearDown network for sandbox \"2fad8fc95c1f7bc5173d78cef14bae77a709fb3f7538a4f8bd917d24cd992f77\" successfully" May 8 00:41:14.509304 containerd[1491]: time="2025-05-08T00:41:14.509296960Z" level=info msg="StopPodSandbox for \"2fad8fc95c1f7bc5173d78cef14bae77a709fb3f7538a4f8bd917d24cd992f77\" returns successfully" May 8 00:41:14.511913 containerd[1491]: time="2025-05-08T00:41:14.510016166Z" level=info msg="RemovePodSandbox for \"2fad8fc95c1f7bc5173d78cef14bae77a709fb3f7538a4f8bd917d24cd992f77\"" May 8 00:41:14.511913 containerd[1491]: time="2025-05-08T00:41:14.510086594Z" level=info msg="Forcibly stopping sandbox \"2fad8fc95c1f7bc5173d78cef14bae77a709fb3f7538a4f8bd917d24cd992f77\"" May 8 00:41:14.511913 containerd[1491]: time="2025-05-08T00:41:14.510188444Z" level=info msg="TearDown network for sandbox \"2fad8fc95c1f7bc5173d78cef14bae77a709fb3f7538a4f8bd917d24cd992f77\" successfully" May 8 00:41:14.513250 containerd[1491]: time="2025-05-08T00:41:14.513228984Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2fad8fc95c1f7bc5173d78cef14bae77a709fb3f7538a4f8bd917d24cd992f77\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:14.513433 containerd[1491]: time="2025-05-08T00:41:14.513392581Z" level=info msg="RemovePodSandbox \"2fad8fc95c1f7bc5173d78cef14bae77a709fb3f7538a4f8bd917d24cd992f77\" returns successfully" May 8 00:41:14.513989 containerd[1491]: time="2025-05-08T00:41:14.513971673Z" level=info msg="StopPodSandbox for \"f35dcf509b4c448d622d2924dcdf2133b7190231fa725da0eea9f599049520d7\"" May 8 00:41:14.514117 containerd[1491]: time="2025-05-08T00:41:14.514102856Z" level=info msg="TearDown network for sandbox \"f35dcf509b4c448d622d2924dcdf2133b7190231fa725da0eea9f599049520d7\" successfully" May 8 00:41:14.514165 containerd[1491]: time="2025-05-08T00:41:14.514152852Z" level=info msg="StopPodSandbox for \"f35dcf509b4c448d622d2924dcdf2133b7190231fa725da0eea9f599049520d7\" returns successfully" May 8 00:41:14.514445 containerd[1491]: time="2025-05-08T00:41:14.514428221Z" level=info msg="RemovePodSandbox for \"f35dcf509b4c448d622d2924dcdf2133b7190231fa725da0eea9f599049520d7\"" May 8 00:41:14.514617 containerd[1491]: time="2025-05-08T00:41:14.514602639Z" level=info msg="Forcibly stopping sandbox \"f35dcf509b4c448d622d2924dcdf2133b7190231fa725da0eea9f599049520d7\"" May 8 00:41:14.514819 containerd[1491]: time="2025-05-08T00:41:14.514791229Z" level=info msg="TearDown network for sandbox \"f35dcf509b4c448d622d2924dcdf2133b7190231fa725da0eea9f599049520d7\" successfully" May 8 00:41:14.517260 containerd[1491]: time="2025-05-08T00:41:14.517225265Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f35dcf509b4c448d622d2924dcdf2133b7190231fa725da0eea9f599049520d7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:41:14.517542 containerd[1491]: time="2025-05-08T00:41:14.517345587Z" level=info msg="RemovePodSandbox \"f35dcf509b4c448d622d2924dcdf2133b7190231fa725da0eea9f599049520d7\" returns successfully"
May 8 00:41:25.618132 systemd[1]: run-containerd-runc-k8s.io-4122976fea92ad72cbfb8809f281933f47faf3a640387dd8f386dce6c1b7d7b8-runc.GDYdKv.mount: Deactivated successfully.
May 8 00:41:37.394420 kubelet[2621]: E0508 00:41:37.394379 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:41:40.395015 kubelet[2621]: E0508 00:41:40.394208 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:41:43.394881 kubelet[2621]: E0508 00:41:43.394347 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:41:47.393768 kubelet[2621]: E0508 00:41:47.393736 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:41:48.394569 kubelet[2621]: E0508 00:41:48.393717 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:41:58.395342 kubelet[2621]: E0508 00:41:58.394527 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:42:18.394623 kubelet[2621]: E0508 00:42:18.394261 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:42:19.594003 systemd[1]: Started sshd@8-172.237.145.87:22-139.178.89.65:44906.service - OpenSSH per-connection server daemon (139.178.89.65:44906).
May 8 00:42:19.925506 sshd[5860]: Accepted publickey for core from 139.178.89.65 port 44906 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA
May 8 00:42:19.927362 sshd-session[5860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:42:19.934078 systemd-logind[1468]: New session 8 of user core.
May 8 00:42:19.941016 systemd[1]: Started session-8.scope - Session 8 of User core.
May 8 00:42:20.276700 sshd[5862]: Connection closed by 139.178.89.65 port 44906
May 8 00:42:20.277275 sshd-session[5860]: pam_unix(sshd:session): session closed for user core
May 8 00:42:20.281302 systemd[1]: sshd@8-172.237.145.87:22-139.178.89.65:44906.service: Deactivated successfully.
May 8 00:42:20.283312 systemd[1]: session-8.scope: Deactivated successfully.
May 8 00:42:20.284046 systemd-logind[1468]: Session 8 logged out. Waiting for processes to exit.
May 8 00:42:20.285426 systemd-logind[1468]: Removed session 8.
May 8 00:42:23.394109 kubelet[2621]: E0508 00:42:23.394077 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:42:25.347011 systemd[1]: Started sshd@9-172.237.145.87:22-139.178.89.65:44918.service - OpenSSH per-connection server daemon (139.178.89.65:44918).
May 8 00:42:25.680167 sshd[5890]: Accepted publickey for core from 139.178.89.65 port 44918 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA
May 8 00:42:25.681619 sshd-session[5890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:42:25.686605 systemd-logind[1468]: New session 9 of user core.
May 8 00:42:25.691889 systemd[1]: Started session-9.scope - Session 9 of User core.
May 8 00:42:25.989149 sshd[5910]: Connection closed by 139.178.89.65 port 44918
May 8 00:42:25.989935 sshd-session[5890]: pam_unix(sshd:session): session closed for user core
May 8 00:42:25.993730 systemd-logind[1468]: Session 9 logged out. Waiting for processes to exit.
May 8 00:42:25.994891 systemd[1]: sshd@9-172.237.145.87:22-139.178.89.65:44918.service: Deactivated successfully.
May 8 00:42:25.997673 systemd[1]: session-9.scope: Deactivated successfully.
May 8 00:42:25.999111 systemd-logind[1468]: Removed session 9.
May 8 00:42:31.057826 systemd[1]: Started sshd@10-172.237.145.87:22-139.178.89.65:36918.service - OpenSSH per-connection server daemon (139.178.89.65:36918).
May 8 00:42:31.380795 sshd[5943]: Accepted publickey for core from 139.178.89.65 port 36918 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA
May 8 00:42:31.382127 sshd-session[5943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:42:31.386615 systemd-logind[1468]: New session 10 of user core.
May 8 00:42:31.392904 systemd[1]: Started session-10.scope - Session 10 of User core.
May 8 00:42:31.678973 sshd[5945]: Connection closed by 139.178.89.65 port 36918
May 8 00:42:31.680073 sshd-session[5943]: pam_unix(sshd:session): session closed for user core
May 8 00:42:31.684148 systemd[1]: sshd@10-172.237.145.87:22-139.178.89.65:36918.service: Deactivated successfully.
May 8 00:42:31.686310 systemd[1]: session-10.scope: Deactivated successfully.
May 8 00:42:31.687575 systemd-logind[1468]: Session 10 logged out. Waiting for processes to exit.
May 8 00:42:31.688465 systemd-logind[1468]: Removed session 10.
May 8 00:42:31.753346 systemd[1]: Started sshd@11-172.237.145.87:22-139.178.89.65:36922.service - OpenSSH per-connection server daemon (139.178.89.65:36922).
May 8 00:42:32.090104 sshd[5958]: Accepted publickey for core from 139.178.89.65 port 36922 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA
May 8 00:42:32.091704 sshd-session[5958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:42:32.097485 systemd-logind[1468]: New session 11 of user core.
May 8 00:42:32.104879 systemd[1]: Started session-11.scope - Session 11 of User core.
May 8 00:42:32.432078 sshd[5960]: Connection closed by 139.178.89.65 port 36922
May 8 00:42:32.433157 sshd-session[5958]: pam_unix(sshd:session): session closed for user core
May 8 00:42:32.437569 systemd-logind[1468]: Session 11 logged out. Waiting for processes to exit.
May 8 00:42:32.441153 systemd[1]: sshd@11-172.237.145.87:22-139.178.89.65:36922.service: Deactivated successfully.
May 8 00:42:32.445114 systemd[1]: session-11.scope: Deactivated successfully.
May 8 00:42:32.446472 systemd-logind[1468]: Removed session 11.
May 8 00:42:32.499965 systemd[1]: Started sshd@12-172.237.145.87:22-139.178.89.65:36928.service - OpenSSH per-connection server daemon (139.178.89.65:36928).
May 8 00:42:32.844798 sshd[5969]: Accepted publickey for core from 139.178.89.65 port 36928 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA
May 8 00:42:32.846794 sshd-session[5969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:42:32.851931 systemd-logind[1468]: New session 12 of user core.
May 8 00:42:32.856012 systemd[1]: Started session-12.scope - Session 12 of User core.
May 8 00:42:33.164532 sshd[5971]: Connection closed by 139.178.89.65 port 36928
May 8 00:42:33.165421 sshd-session[5969]: pam_unix(sshd:session): session closed for user core
May 8 00:42:33.170465 systemd[1]: sshd@12-172.237.145.87:22-139.178.89.65:36928.service: Deactivated successfully.
May 8 00:42:33.172845 systemd[1]: session-12.scope: Deactivated successfully.
May 8 00:42:33.174179 systemd-logind[1468]: Session 12 logged out. Waiting for processes to exit.
May 8 00:42:33.175294 systemd-logind[1468]: Removed session 12.
May 8 00:42:37.810389 systemd[1]: run-containerd-runc-k8s.io-41dd7d11b0386b4405a6d9ce66357a8dfe653656cfc68a3c8c082e789ff861a6-runc.lQ9Dk5.mount: Deactivated successfully.
May 8 00:42:38.233177 systemd[1]: Started sshd@13-172.237.145.87:22-139.178.89.65:43272.service - OpenSSH per-connection server daemon (139.178.89.65:43272).
May 8 00:42:38.552390 sshd[6006]: Accepted publickey for core from 139.178.89.65 port 43272 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA
May 8 00:42:38.553661 sshd-session[6006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:42:38.558135 systemd-logind[1468]: New session 13 of user core.
May 8 00:42:38.565876 systemd[1]: Started session-13.scope - Session 13 of User core.
May 8 00:42:38.856824 sshd[6008]: Connection closed by 139.178.89.65 port 43272
May 8 00:42:38.857916 sshd-session[6006]: pam_unix(sshd:session): session closed for user core
May 8 00:42:38.861703 systemd-logind[1468]: Session 13 logged out. Waiting for processes to exit.
May 8 00:42:38.862660 systemd[1]: sshd@13-172.237.145.87:22-139.178.89.65:43272.service: Deactivated successfully.
May 8 00:42:38.865286 systemd[1]: session-13.scope: Deactivated successfully.
May 8 00:42:38.866173 systemd-logind[1468]: Removed session 13.
May 8 00:42:38.922941 systemd[1]: Started sshd@14-172.237.145.87:22-139.178.89.65:43282.service - OpenSSH per-connection server daemon (139.178.89.65:43282).
May 8 00:42:39.251575 sshd[6020]: Accepted publickey for core from 139.178.89.65 port 43282 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA
May 8 00:42:39.252981 sshd-session[6020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:42:39.257390 systemd-logind[1468]: New session 14 of user core.
May 8 00:42:39.262894 systemd[1]: Started session-14.scope - Session 14 of User core.
May 8 00:42:39.924539 sshd[6022]: Connection closed by 139.178.89.65 port 43282
May 8 00:42:39.925093 sshd-session[6020]: pam_unix(sshd:session): session closed for user core
May 8 00:42:39.929741 systemd[1]: sshd@14-172.237.145.87:22-139.178.89.65:43282.service: Deactivated successfully.
May 8 00:42:39.932086 systemd[1]: session-14.scope: Deactivated successfully.
May 8 00:42:39.932871 systemd-logind[1468]: Session 14 logged out. Waiting for processes to exit.
May 8 00:42:39.934455 systemd-logind[1468]: Removed session 14.
May 8 00:42:39.988982 systemd[1]: Started sshd@15-172.237.145.87:22-139.178.89.65:43298.service - OpenSSH per-connection server daemon (139.178.89.65:43298).
May 8 00:42:40.310218 sshd[6032]: Accepted publickey for core from 139.178.89.65 port 43298 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA
May 8 00:42:40.311706 sshd-session[6032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:42:40.316552 systemd-logind[1468]: New session 15 of user core.
May 8 00:42:40.324904 systemd[1]: Started session-15.scope - Session 15 of User core.
May 8 00:42:41.113597 sshd[6034]: Connection closed by 139.178.89.65 port 43298
May 8 00:42:41.114558 sshd-session[6032]: pam_unix(sshd:session): session closed for user core
May 8 00:42:41.117851 systemd[1]: sshd@15-172.237.145.87:22-139.178.89.65:43298.service: Deactivated successfully.
May 8 00:42:41.120272 systemd[1]: session-15.scope: Deactivated successfully.
May 8 00:42:41.122029 systemd-logind[1468]: Session 15 logged out. Waiting for processes to exit.
May 8 00:42:41.123628 systemd-logind[1468]: Removed session 15.
May 8 00:42:41.176976 systemd[1]: Started sshd@16-172.237.145.87:22-139.178.89.65:43312.service - OpenSSH per-connection server daemon (139.178.89.65:43312).
May 8 00:42:41.502844 sshd[6051]: Accepted publickey for core from 139.178.89.65 port 43312 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA
May 8 00:42:41.503899 sshd-session[6051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:42:41.508745 systemd-logind[1468]: New session 16 of user core.
May 8 00:42:41.513974 systemd[1]: Started session-16.scope - Session 16 of User core.
May 8 00:42:41.904199 sshd[6053]: Connection closed by 139.178.89.65 port 43312
May 8 00:42:41.905079 sshd-session[6051]: pam_unix(sshd:session): session closed for user core
May 8 00:42:41.909567 systemd-logind[1468]: Session 16 logged out. Waiting for processes to exit.
May 8 00:42:41.910492 systemd[1]: sshd@16-172.237.145.87:22-139.178.89.65:43312.service: Deactivated successfully.
May 8 00:42:41.913221 systemd[1]: session-16.scope: Deactivated successfully.
May 8 00:42:41.914297 systemd-logind[1468]: Removed session 16.
May 8 00:42:41.967991 systemd[1]: Started sshd@17-172.237.145.87:22-139.178.89.65:43318.service - OpenSSH per-connection server daemon (139.178.89.65:43318).
May 8 00:42:42.289849 sshd[6063]: Accepted publickey for core from 139.178.89.65 port 43318 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA
May 8 00:42:42.291501 sshd-session[6063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:42:42.296625 systemd-logind[1468]: New session 17 of user core.
May 8 00:42:42.304888 systemd[1]: Started session-17.scope - Session 17 of User core.
May 8 00:42:42.585474 sshd[6065]: Connection closed by 139.178.89.65 port 43318
May 8 00:42:42.586450 sshd-session[6063]: pam_unix(sshd:session): session closed for user core
May 8 00:42:42.591289 systemd[1]: sshd@17-172.237.145.87:22-139.178.89.65:43318.service: Deactivated successfully.
May 8 00:42:42.594305 systemd[1]: session-17.scope: Deactivated successfully.
May 8 00:42:42.595444 systemd-logind[1468]: Session 17 logged out. Waiting for processes to exit.
May 8 00:42:42.596458 systemd-logind[1468]: Removed session 17.
May 8 00:42:47.652115 systemd[1]: Started sshd@18-172.237.145.87:22-139.178.89.65:46330.service - OpenSSH per-connection server daemon (139.178.89.65:46330).
May 8 00:42:48.006227 sshd[6078]: Accepted publickey for core from 139.178.89.65 port 46330 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA
May 8 00:42:48.008124 sshd-session[6078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:42:48.012554 systemd-logind[1468]: New session 18 of user core.
May 8 00:42:48.016892 systemd[1]: Started session-18.scope - Session 18 of User core.
May 8 00:42:48.314288 sshd[6080]: Connection closed by 139.178.89.65 port 46330
May 8 00:42:48.314861 sshd-session[6078]: pam_unix(sshd:session): session closed for user core
May 8 00:42:48.318665 systemd-logind[1468]: Session 18 logged out. Waiting for processes to exit.
May 8 00:42:48.319577 systemd[1]: sshd@18-172.237.145.87:22-139.178.89.65:46330.service: Deactivated successfully.
May 8 00:42:48.322052 systemd[1]: session-18.scope: Deactivated successfully.
May 8 00:42:48.323126 systemd-logind[1468]: Removed session 18.
May 8 00:42:53.379986 systemd[1]: Started sshd@19-172.237.145.87:22-139.178.89.65:46338.service - OpenSSH per-connection server daemon (139.178.89.65:46338).
May 8 00:42:53.395160 kubelet[2621]: E0508 00:42:53.394405 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:42:53.540378 systemd[1]: run-containerd-runc-k8s.io-4122976fea92ad72cbfb8809f281933f47faf3a640387dd8f386dce6c1b7d7b8-runc.ho4gGw.mount: Deactivated successfully.
May 8 00:42:53.713939 sshd[6095]: Accepted publickey for core from 139.178.89.65 port 46338 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA
May 8 00:42:53.715654 sshd-session[6095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:42:53.720564 systemd-logind[1468]: New session 19 of user core.
May 8 00:42:53.724883 systemd[1]: Started session-19.scope - Session 19 of User core.
May 8 00:42:54.012975 sshd[6116]: Connection closed by 139.178.89.65 port 46338
May 8 00:42:54.013711 sshd-session[6095]: pam_unix(sshd:session): session closed for user core
May 8 00:42:54.018029 systemd-logind[1468]: Session 19 logged out. Waiting for processes to exit.
May 8 00:42:54.019269 systemd[1]: sshd@19-172.237.145.87:22-139.178.89.65:46338.service: Deactivated successfully.
May 8 00:42:54.021264 systemd[1]: session-19.scope: Deactivated successfully.
May 8 00:42:54.022227 systemd-logind[1468]: Removed session 19.
May 8 00:42:55.616730 systemd[1]: run-containerd-runc-k8s.io-4122976fea92ad72cbfb8809f281933f47faf3a640387dd8f386dce6c1b7d7b8-runc.6VJWM7.mount: Deactivated successfully.
May 8 00:42:56.394703 kubelet[2621]: E0508 00:42:56.394389 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:42:59.076967 systemd[1]: Started sshd@20-172.237.145.87:22-139.178.89.65:38028.service - OpenSSH per-connection server daemon (139.178.89.65:38028).
May 8 00:42:59.407287 sshd[6147]: Accepted publickey for core from 139.178.89.65 port 38028 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA
May 8 00:42:59.408698 sshd-session[6147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:42:59.413595 systemd-logind[1468]: New session 20 of user core.
May 8 00:42:59.419885 systemd[1]: Started session-20.scope - Session 20 of User core.
May 8 00:42:59.704923 sshd[6149]: Connection closed by 139.178.89.65 port 38028
May 8 00:42:59.706017 sshd-session[6147]: pam_unix(sshd:session): session closed for user core
May 8 00:42:59.710921 systemd-logind[1468]: Session 20 logged out. Waiting for processes to exit.
May 8 00:42:59.711916 systemd[1]: sshd@20-172.237.145.87:22-139.178.89.65:38028.service: Deactivated successfully.
May 8 00:42:59.714191 systemd[1]: session-20.scope: Deactivated successfully.
May 8 00:42:59.715271 systemd-logind[1468]: Removed session 20.
May 8 00:43:04.394163 kubelet[2621]: E0508 00:43:04.393957 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:43:04.772126 systemd[1]: Started sshd@21-172.237.145.87:22-139.178.89.65:38044.service - OpenSSH per-connection server daemon (139.178.89.65:38044).
May 8 00:43:05.103709 sshd[6161]: Accepted publickey for core from 139.178.89.65 port 38044 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA
May 8 00:43:05.105140 sshd-session[6161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:43:05.109963 systemd-logind[1468]: New session 21 of user core.
May 8 00:43:05.114881 systemd[1]: Started session-21.scope - Session 21 of User core.
May 8 00:43:05.405600 sshd[6163]: Connection closed by 139.178.89.65 port 38044
May 8 00:43:05.406485 sshd-session[6161]: pam_unix(sshd:session): session closed for user core
May 8 00:43:05.410670 systemd[1]: sshd@21-172.237.145.87:22-139.178.89.65:38044.service: Deactivated successfully.
May 8 00:43:05.413327 systemd[1]: session-21.scope: Deactivated successfully.
May 8 00:43:05.414296 systemd-logind[1468]: Session 21 logged out. Waiting for processes to exit.
May 8 00:43:05.415131 systemd-logind[1468]: Removed session 21.
May 8 00:43:06.394306 kubelet[2621]: E0508 00:43:06.393989 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:43:07.813029 systemd[1]: run-containerd-runc-k8s.io-41dd7d11b0386b4405a6d9ce66357a8dfe653656cfc68a3c8c082e789ff861a6-runc.89Axux.mount: Deactivated successfully.
May 8 00:43:08.394726 kubelet[2621]: E0508 00:43:08.393999 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 8 00:43:10.469954 systemd[1]: Started sshd@22-172.237.145.87:22-139.178.89.65:37228.service - OpenSSH per-connection server daemon (139.178.89.65:37228).
May 8 00:43:10.810016 sshd[6197]: Accepted publickey for core from 139.178.89.65 port 37228 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA
May 8 00:43:10.811557 sshd-session[6197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:43:10.816589 systemd-logind[1468]: New session 22 of user core.
May 8 00:43:10.819013 systemd[1]: Started session-22.scope - Session 22 of User core.
May 8 00:43:11.118956 sshd[6199]: Connection closed by 139.178.89.65 port 37228
May 8 00:43:11.119814 sshd-session[6197]: pam_unix(sshd:session): session closed for user core
May 8 00:43:11.124033 systemd[1]: sshd@22-172.237.145.87:22-139.178.89.65:37228.service: Deactivated successfully.
May 8 00:43:11.127393 systemd[1]: session-22.scope: Deactivated successfully.
May 8 00:43:11.128218 systemd-logind[1468]: Session 22 logged out. Waiting for processes to exit.
May 8 00:43:11.129355 systemd-logind[1468]: Removed session 22.
May 8 00:43:16.183002 systemd[1]: Started sshd@23-172.237.145.87:22-139.178.89.65:37230.service - OpenSSH per-connection server daemon (139.178.89.65:37230).
May 8 00:43:16.524254 sshd[6214]: Accepted publickey for core from 139.178.89.65 port 37230 ssh2: RSA SHA256:kUHV1ZiXTJd09dq8lE1DqQ3ajymPQRcbe3cwUy3iBHA
May 8 00:43:16.525872 sshd-session[6214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:43:16.530312 systemd-logind[1468]: New session 23 of user core.
May 8 00:43:16.535888 systemd[1]: Started session-23.scope - Session 23 of User core.
May 8 00:43:16.830031 sshd[6216]: Connection closed by 139.178.89.65 port 37230
May 8 00:43:16.831549 sshd-session[6214]: pam_unix(sshd:session): session closed for user core
May 8 00:43:16.835316 systemd[1]: sshd@23-172.237.145.87:22-139.178.89.65:37230.service: Deactivated successfully.
May 8 00:43:16.837732 systemd[1]: session-23.scope: Deactivated successfully.
May 8 00:43:16.838677 systemd-logind[1468]: Session 23 logged out. Waiting for processes to exit.
May 8 00:43:16.839825 systemd-logind[1468]: Removed session 23.
May 8 00:43:19.393808 kubelet[2621]: E0508 00:43:19.393779 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"