Dec 13 13:27:31.867158 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 13 11:52:04 -00 2024
Dec 13 13:27:31.867180 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:27:31.867192 kernel: BIOS-provided physical RAM map:
Dec 13 13:27:31.867199 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 13:27:31.867205 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 13:27:31.867212 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 13:27:31.867220 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Dec 13 13:27:31.867226 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Dec 13 13:27:31.867233 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 13:27:31.867242 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 13:27:31.867248 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 13:27:31.867255 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 13:27:31.867261 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 13 13:27:31.867268 kernel: NX (Execute Disable) protection: active
Dec 13 13:27:31.867276 kernel: APIC: Static calls initialized
Dec 13 13:27:31.867285 kernel: SMBIOS 2.8 present.
Dec 13 13:27:31.867320 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Dec 13 13:27:31.867330 kernel: Hypervisor detected: KVM
Dec 13 13:27:31.867338 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 13:27:31.867345 kernel: kvm-clock: using sched offset of 2295513737 cycles
Dec 13 13:27:31.867352 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 13:27:31.867360 kernel: tsc: Detected 2794.748 MHz processor
Dec 13 13:27:31.867367 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 13:27:31.867375 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 13:27:31.867382 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Dec 13 13:27:31.867393 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 13:27:31.867400 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 13:27:31.867408 kernel: Using GB pages for direct mapping
Dec 13 13:27:31.867415 kernel: ACPI: Early table checksum verification disabled
Dec 13 13:27:31.867422 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Dec 13 13:27:31.867430 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:27:31.867437 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:27:31.867444 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:27:31.867452 kernel: ACPI: FACS 0x000000009CFE0000 000040
Dec 13 13:27:31.867461 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:27:31.867468 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:27:31.867476 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:27:31.867483 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:27:31.867490 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Dec 13 13:27:31.867498 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Dec 13 13:27:31.867508 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Dec 13 13:27:31.867526 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Dec 13 13:27:31.867534 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Dec 13 13:27:31.867541 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Dec 13 13:27:31.867549 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Dec 13 13:27:31.867556 kernel: No NUMA configuration found
Dec 13 13:27:31.867563 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Dec 13 13:27:31.867571 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Dec 13 13:27:31.867581 kernel: Zone ranges:
Dec 13 13:27:31.867588 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 13:27:31.867595 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Dec 13 13:27:31.867603 kernel: Normal empty
Dec 13 13:27:31.867610 kernel: Movable zone start for each node
Dec 13 13:27:31.867618 kernel: Early memory node ranges
Dec 13 13:27:31.867625 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 13:27:31.867632 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Dec 13 13:27:31.867640 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Dec 13 13:27:31.867649 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 13:27:31.867657 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 13:27:31.867664 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Dec 13 13:27:31.867672 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 13:27:31.867679 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 13:27:31.867687 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 13:27:31.867694 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 13:27:31.867702 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 13:27:31.867709 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 13:27:31.867716 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 13:27:31.867726 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 13:27:31.867734 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 13:27:31.867741 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 13:27:31.867748 kernel: TSC deadline timer available
Dec 13 13:27:31.867756 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Dec 13 13:27:31.867763 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 13:27:31.867771 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 13 13:27:31.867778 kernel: kvm-guest: setup PV sched yield
Dec 13 13:27:31.867786 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 13:27:31.867795 kernel: Booting paravirtualized kernel on KVM
Dec 13 13:27:31.867803 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 13:27:31.867811 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 13 13:27:31.867818 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Dec 13 13:27:31.867826 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Dec 13 13:27:31.867833 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 13 13:27:31.867840 kernel: kvm-guest: PV spinlocks enabled
Dec 13 13:27:31.867848 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 13:27:31.867856 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:27:31.867867 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 13:27:31.867874 kernel: random: crng init done
Dec 13 13:27:31.867882 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 13:27:31.867889 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 13:27:31.867897 kernel: Fallback order for Node 0: 0
Dec 13 13:27:31.867904 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Dec 13 13:27:31.867912 kernel: Policy zone: DMA32
Dec 13 13:27:31.867919 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 13:27:31.867929 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43328K init, 1748K bss, 138948K reserved, 0K cma-reserved)
Dec 13 13:27:31.867937 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 13:27:31.867944 kernel: ftrace: allocating 37874 entries in 148 pages
Dec 13 13:27:31.867951 kernel: ftrace: allocated 148 pages with 3 groups
Dec 13 13:27:31.867959 kernel: Dynamic Preempt: voluntary
Dec 13 13:27:31.867966 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 13:27:31.867977 kernel: rcu: RCU event tracing is enabled.
Dec 13 13:27:31.867985 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 13:27:31.867993 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 13:27:31.868002 kernel: Rude variant of Tasks RCU enabled.
Dec 13 13:27:31.868010 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 13:27:31.868018 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 13:27:31.868025 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 13:27:31.868032 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 13 13:27:31.868040 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 13:27:31.868047 kernel: Console: colour VGA+ 80x25
Dec 13 13:27:31.868055 kernel: printk: console [ttyS0] enabled
Dec 13 13:27:31.868062 kernel: ACPI: Core revision 20230628
Dec 13 13:27:31.868072 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 13:27:31.868080 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 13:27:31.868087 kernel: x2apic enabled
Dec 13 13:27:31.868094 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 13:27:31.868102 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 13 13:27:31.868110 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 13 13:27:31.868117 kernel: kvm-guest: setup PV IPIs
Dec 13 13:27:31.868134 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 13:27:31.868142 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 13:27:31.868150 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 13 13:27:31.868158 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 13:27:31.868165 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 13:27:31.868175 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 13:27:31.868183 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 13:27:31.868191 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 13:27:31.868199 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 13:27:31.868206 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 13:27:31.868216 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 13:27:31.868224 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 13:27:31.868232 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 13:27:31.868250 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 13:27:31.868276 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 13:27:31.868325 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 13:27:31.868337 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 13:27:31.868347 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 13:27:31.868361 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 13:27:31.868372 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 13:27:31.868382 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 13:27:31.868395 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 13:27:31.868406 kernel: Freeing SMP alternatives memory: 32K
Dec 13 13:27:31.868416 kernel: pid_max: default: 32768 minimum: 301
Dec 13 13:27:31.868425 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 13:27:31.868433 kernel: landlock: Up and running.
Dec 13 13:27:31.868440 kernel: SELinux: Initializing.
Dec 13 13:27:31.868451 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 13:27:31.868459 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 13:27:31.868467 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 13:27:31.868475 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 13:27:31.868483 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 13:27:31.868491 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 13:27:31.868498 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 13:27:31.868506 kernel: ... version:                0
Dec 13 13:27:31.868520 kernel: ... bit width:              48
Dec 13 13:27:31.868531 kernel: ... generic registers:      6
Dec 13 13:27:31.868540 kernel: ... value mask:             0000ffffffffffff
Dec 13 13:27:31.868549 kernel: ... max period:             00007fffffffffff
Dec 13 13:27:31.868558 kernel: ... fixed-purpose events:   0
Dec 13 13:27:31.868567 kernel: ... event mask:             000000000000003f
Dec 13 13:27:31.868574 kernel: signal: max sigframe size: 1776
Dec 13 13:27:31.868582 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 13:27:31.868590 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 13:27:31.868598 kernel: smp: Bringing up secondary CPUs ...
Dec 13 13:27:31.868608 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 13:27:31.868615 kernel: .... node #0, CPUs: #1 #2 #3
Dec 13 13:27:31.868623 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 13:27:31.868631 kernel: smpboot: Max logical packages: 1
Dec 13 13:27:31.868638 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 13 13:27:31.868646 kernel: devtmpfs: initialized
Dec 13 13:27:31.868654 kernel: x86/mm: Memory block size: 128MB
Dec 13 13:27:31.868662 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 13:27:31.868669 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 13:27:31.868679 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 13:27:31.868687 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 13:27:31.868695 kernel: audit: initializing netlink subsys (disabled)
Dec 13 13:27:31.868703 kernel: audit: type=2000 audit(1734096451.823:1): state=initialized audit_enabled=0 res=1
Dec 13 13:27:31.868710 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 13:27:31.868718 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 13:27:31.868726 kernel: cpuidle: using governor menu
Dec 13 13:27:31.868733 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 13:27:31.868741 kernel: dca service started, version 1.12.1
Dec 13 13:27:31.868751 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 13:27:31.868759 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 13 13:27:31.868767 kernel: PCI: Using configuration type 1 for base access
Dec 13 13:27:31.868775 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 13:27:31.868783 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 13:27:31.868790 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 13:27:31.868798 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 13:27:31.868806 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 13:27:31.868813 kernel: ACPI: Added _OSI(Module Device)
Dec 13 13:27:31.868823 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 13:27:31.868831 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 13:27:31.868839 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 13:27:31.868847 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 13:27:31.868854 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 13:27:31.868862 kernel: ACPI: Interpreter enabled
Dec 13 13:27:31.868870 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 13:27:31.868877 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 13:27:31.868885 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 13:27:31.868895 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 13:27:31.868903 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 13:27:31.868911 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 13:27:31.869088 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 13:27:31.869281 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 13:27:31.869431 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 13:27:31.869443 kernel: PCI host bridge to bus 0000:00
Dec 13 13:27:31.869579 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 13:27:31.869691 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 13:27:31.869800 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 13:27:31.869908 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Dec 13 13:27:31.870017 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 13:27:31.870125 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Dec 13 13:27:31.870233 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 13:27:31.870393 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 13:27:31.870532 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Dec 13 13:27:31.870655 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Dec 13 13:27:31.870776 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Dec 13 13:27:31.870897 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Dec 13 13:27:31.871015 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 13:27:31.871149 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 13:27:31.871276 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Dec 13 13:27:31.871445 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Dec 13 13:27:31.871611 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Dec 13 13:27:31.871774 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Dec 13 13:27:31.871929 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 13:27:31.872083 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Dec 13 13:27:31.872236 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Dec 13 13:27:31.872443 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 13:27:31.872621 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Dec 13 13:27:31.872775 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Dec 13 13:27:31.872930 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Dec 13 13:27:31.873084 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Dec 13 13:27:31.873244 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 13:27:31.873430 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 13:27:31.873607 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 13:27:31.873763 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Dec 13 13:27:31.873915 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Dec 13 13:27:31.874078 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 13:27:31.874231 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 13:27:31.874246 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 13:27:31.874262 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 13:27:31.874273 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 13:27:31.874284 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 13:27:31.874309 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 13:27:31.874320 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 13:27:31.874330 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 13:27:31.874341 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 13:27:31.874353 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 13:27:31.874364 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 13:27:31.874378 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 13:27:31.874390 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 13:27:31.874401 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 13:27:31.874412 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 13:27:31.874423 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 13:27:31.874433 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 13:27:31.874445 kernel: iommu: Default domain type: Translated
Dec 13 13:27:31.874456 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 13:27:31.874467 kernel: PCI: Using ACPI for IRQ routing
Dec 13 13:27:31.874481 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 13:27:31.874491 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 13:27:31.874503 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Dec 13 13:27:31.874671 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 13:27:31.874825 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 13:27:31.874978 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 13:27:31.874993 kernel: vgaarb: loaded
Dec 13 13:27:31.875005 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 13:27:31.875020 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 13:27:31.875031 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 13:27:31.875043 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 13:27:31.875054 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 13:27:31.875065 kernel: pnp: PnP ACPI init
Dec 13 13:27:31.875228 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 13:27:31.875245 kernel: pnp: PnP ACPI: found 6 devices
Dec 13 13:27:31.875256 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 13:27:31.875271 kernel: NET: Registered PF_INET protocol family
Dec 13 13:27:31.875282 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 13:27:31.875307 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 13:27:31.875319 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 13:27:31.875330 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 13:27:31.875342 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 13:27:31.875353 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 13:27:31.875364 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 13:27:31.875375 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 13:27:31.875390 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 13:27:31.875401 kernel: NET: Registered PF_XDP protocol family
Dec 13 13:27:31.875556 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 13:27:31.875698 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 13:27:31.875838 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 13:27:31.875977 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Dec 13 13:27:31.876116 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 13:27:31.876254 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Dec 13 13:27:31.876273 kernel: PCI: CLS 0 bytes, default 64
Dec 13 13:27:31.876284 kernel: Initialise system trusted keyrings
Dec 13 13:27:31.876321 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 13:27:31.876333 kernel: Key type asymmetric registered
Dec 13 13:27:31.876344 kernel: Asymmetric key parser 'x509' registered
Dec 13 13:27:31.876355 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 13:27:31.876366 kernel: io scheduler mq-deadline registered
Dec 13 13:27:31.876377 kernel: io scheduler kyber registered
Dec 13 13:27:31.876388 kernel: io scheduler bfq registered
Dec 13 13:27:31.876399 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 13:27:31.876414 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 13:27:31.876426 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 13:27:31.876437 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 13 13:27:31.876448 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 13:27:31.876459 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 13:27:31.876470 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 13:27:31.876481 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 13:27:31.876492 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 13:27:31.876667 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 13:27:31.876688 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 13:27:31.876830 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 13:27:31.876972 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T13:27:31 UTC (1734096451)
Dec 13 13:27:31.877114 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 13 13:27:31.877128 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 13:27:31.877139 kernel: NET: Registered PF_INET6 protocol family
Dec 13 13:27:31.877150 kernel: Segment Routing with IPv6
Dec 13 13:27:31.877166 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 13:27:31.877177 kernel: NET: Registered PF_PACKET protocol family
Dec 13 13:27:31.877188 kernel: Key type dns_resolver registered
Dec 13 13:27:31.877199 kernel: IPI shorthand broadcast: enabled
Dec 13 13:27:31.877210 kernel: sched_clock: Marking stable (520002277, 107032496)->(669346182, -42311409)
Dec 13 13:27:31.877221 kernel: registered taskstats version 1
Dec 13 13:27:31.877232 kernel: Loading compiled-in X.509 certificates
Dec 13 13:27:31.877243 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 87a680e70013684f1bdd04e047addefc714bd162'
Dec 13 13:27:31.877255 kernel: Key type .fscrypt registered
Dec 13 13:27:31.877266 kernel: Key type fscrypt-provisioning registered
Dec 13 13:27:31.877280 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 13:27:31.877381 kernel: ima: Allocated hash algorithm: sha1
Dec 13 13:27:31.877395 kernel: ima: No architecture policies found
Dec 13 13:27:31.877406 kernel: clk: Disabling unused clocks
Dec 13 13:27:31.877418 kernel: Freeing unused kernel image (initmem) memory: 43328K
Dec 13 13:27:31.877429 kernel: Write protecting the kernel read-only data: 38912k
Dec 13 13:27:31.877440 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Dec 13 13:27:31.877451 kernel: Run /init as init process
Dec 13 13:27:31.877467 kernel: with arguments:
Dec 13 13:27:31.877478 kernel: /init
Dec 13 13:27:31.877489 kernel: with environment:
Dec 13 13:27:31.877499 kernel: HOME=/
Dec 13 13:27:31.877510 kernel: TERM=linux
Dec 13 13:27:31.877528 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 13:27:31.877542 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 13:27:31.877556 systemd[1]: Detected virtualization kvm.
Dec 13 13:27:31.877571 systemd[1]: Detected architecture x86-64.
Dec 13 13:27:31.877583 systemd[1]: Running in initrd.
Dec 13 13:27:31.877594 systemd[1]: No hostname configured, using default hostname.
Dec 13 13:27:31.877606 systemd[1]: Hostname set to .
Dec 13 13:27:31.877618 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 13:27:31.877630 systemd[1]: Queued start job for default target initrd.target.
Dec 13 13:27:31.877642 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:27:31.877654 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:27:31.877670 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 13:27:31.877696 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 13:27:31.877711 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 13:27:31.877723 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 13:27:31.877738 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 13:27:31.877753 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 13:27:31.877766 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:27:31.877778 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:27:31.877790 systemd[1]: Reached target paths.target - Path Units.
Dec 13 13:27:31.877803 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 13:27:31.877815 systemd[1]: Reached target swap.target - Swaps.
Dec 13 13:27:31.877827 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 13:27:31.877840 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:27:31.877855 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:27:31.877867 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 13:27:31.877879 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 13:27:31.877891 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:27:31.877904 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:27:31.877916 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:27:31.877928 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:27:31.877940 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 13:27:31.877952 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 13:27:31.877967 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 13:27:31.877980 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 13:27:31.877992 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 13:27:31.878004 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 13:27:31.878016 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:27:31.878028 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 13:27:31.878040 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:27:31.878051 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 13:27:31.878108 systemd-journald[194]: Collecting audit messages is disabled. Dec 13 13:27:31.878145 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 13:27:31.878161 systemd-journald[194]: Journal started Dec 13 13:27:31.878190 systemd-journald[194]: Runtime Journal (/run/log/journal/3a1218673aa14bb985c8fd0dcf707e05) is 6.0M, max 48.3M, 42.3M free. Dec 13 13:27:31.867135 systemd-modules-load[195]: Inserted module 'overlay' Dec 13 13:27:31.903101 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 13:27:31.903131 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 13:27:31.903144 kernel: Bridge firewalling registered Dec 13 13:27:31.894929 systemd-modules-load[195]: Inserted module 'br_netfilter' Dec 13 13:27:31.914680 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Dec 13 13:27:31.915376 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:27:31.920229 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:27:31.922173 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:27:31.925765 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 13:27:31.930691 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:27:31.934934 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 13:27:31.935719 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:27:31.942409 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:27:31.944667 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 13:27:31.946090 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:27:31.949932 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 13:27:31.955223 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:27:31.964857 dracut-cmdline[230]: dracut-dracut-053 Dec 13 13:27:31.968493 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4 Dec 13 13:27:31.983365 systemd-resolved[228]: Positive Trust Anchors: Dec 13 13:27:31.983383 systemd-resolved[228]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:27:31.983421 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:27:31.986276 systemd-resolved[228]: Defaulting to hostname 'linux'. Dec 13 13:27:31.987338 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 13:27:31.992790 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:27:32.062335 kernel: SCSI subsystem initialized Dec 13 13:27:32.071322 kernel: Loading iSCSI transport class v2.0-870. Dec 13 13:27:32.082327 kernel: iscsi: registered transport (tcp) Dec 13 13:27:32.103425 kernel: iscsi: registered transport (qla4xxx) Dec 13 13:27:32.103475 kernel: QLogic iSCSI HBA Driver Dec 13 13:27:32.145907 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 13:27:32.159431 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 13:27:32.182367 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 13 13:27:32.182428 kernel: device-mapper: uevent: version 1.0.3 Dec 13 13:27:32.183406 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 13:27:32.223316 kernel: raid6: avx2x4 gen() 30430 MB/s Dec 13 13:27:32.240315 kernel: raid6: avx2x2 gen() 30862 MB/s Dec 13 13:27:32.257390 kernel: raid6: avx2x1 gen() 25999 MB/s Dec 13 13:27:32.257408 kernel: raid6: using algorithm avx2x2 gen() 30862 MB/s Dec 13 13:27:32.275394 kernel: raid6: .... xor() 19998 MB/s, rmw enabled Dec 13 13:27:32.275411 kernel: raid6: using avx2x2 recovery algorithm Dec 13 13:27:32.295314 kernel: xor: automatically using best checksumming function avx Dec 13 13:27:32.438323 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 13:27:32.449130 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:27:32.457455 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:27:32.468985 systemd-udevd[414]: Using default interface naming scheme 'v255'. Dec 13 13:27:32.473194 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:27:32.487638 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 13:27:32.499070 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation Dec 13 13:27:32.527163 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:27:32.542433 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 13:27:32.603836 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:27:32.617701 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 13:27:32.627144 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 13:27:32.630709 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Dec 13 13:27:32.635261 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:27:32.641373 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 13:27:32.635713 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 13:27:32.644337 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Dec 13 13:27:32.670088 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 13:27:32.670236 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 13:27:32.670248 kernel: AES CTR mode by8 optimization enabled Dec 13 13:27:32.670259 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 13:27:32.670275 kernel: GPT:9289727 != 19775487 Dec 13 13:27:32.670286 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 13:27:32.670309 kernel: GPT:9289727 != 19775487 Dec 13 13:27:32.670319 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 13:27:32.670330 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 13:27:32.648464 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 13:27:32.655121 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 13:27:32.655254 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:27:32.657494 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:27:32.659938 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:27:32.660068 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:27:32.661437 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:27:32.683729 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:27:32.687428 kernel: libata version 3.00 loaded. Dec 13 13:27:32.687280 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Dec 13 13:27:32.698337 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 13:27:32.723844 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 13:27:32.723874 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 13:27:32.724062 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 13:27:32.724238 kernel: scsi host0: ahci Dec 13 13:27:32.724451 kernel: scsi host1: ahci Dec 13 13:27:32.724640 kernel: scsi host2: ahci Dec 13 13:27:32.724832 kernel: scsi host3: ahci Dec 13 13:27:32.725014 kernel: scsi host4: ahci Dec 13 13:27:32.725190 kernel: BTRFS: device fsid 79c74448-2326-4c98-b9ff-09542b30ea52 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (469) Dec 13 13:27:32.725206 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (463) Dec 13 13:27:32.725220 kernel: scsi host5: ahci Dec 13 13:27:32.725425 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Dec 13 13:27:32.725440 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Dec 13 13:27:32.725454 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Dec 13 13:27:32.725468 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Dec 13 13:27:32.725494 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Dec 13 13:27:32.725509 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Dec 13 13:27:32.729821 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 13:27:32.754519 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:27:32.759847 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 13 13:27:32.768637 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Dec 13 13:27:32.772795 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 13:27:32.773194 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 13 13:27:32.786452 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 13:27:32.788838 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:27:32.798253 disk-uuid[567]: Primary Header is updated. Dec 13 13:27:32.798253 disk-uuid[567]: Secondary Entries is updated. Dec 13 13:27:32.798253 disk-uuid[567]: Secondary Header is updated. Dec 13 13:27:32.802341 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 13:27:32.807321 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 13:27:32.812751 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 13:27:33.031321 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 13:27:33.031407 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 13:27:33.032335 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 13:27:33.033317 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 13:27:33.034318 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 13:27:33.035323 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 13:27:33.035337 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 13:27:33.036753 kernel: ata3.00: applying bridge limits Dec 13 13:27:33.036768 kernel: ata3.00: configured for UDMA/100 Dec 13 13:27:33.037321 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 13:27:33.080862 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 13:27:33.092873 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 13:27:33.092886 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 13 13:27:33.808206 disk-uuid[569]: The operation has completed successfully. Dec 13 13:27:33.809568 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 13:27:33.837975 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 13:27:33.838092 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 13:27:33.858459 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 13:27:33.861362 sh[592]: Success Dec 13 13:27:33.872342 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 13 13:27:33.904328 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 13:27:33.917636 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 13:27:33.919852 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 13 13:27:33.931753 kernel: BTRFS info (device dm-0): first mount of filesystem 79c74448-2326-4c98-b9ff-09542b30ea52 Dec 13 13:27:33.931782 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:27:33.931793 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 13:27:33.932766 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 13:27:33.933499 kernel: BTRFS info (device dm-0): using free space tree Dec 13 13:27:33.937833 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 13:27:33.938746 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 13:27:33.948415 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 13:27:33.950488 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 13:27:33.959757 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:27:33.959785 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:27:33.959797 kernel: BTRFS info (device vda6): using free space tree Dec 13 13:27:33.963347 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 13:27:33.971309 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 13:27:33.974312 kernel: BTRFS info (device vda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:27:33.983572 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 13:27:33.993492 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Dec 13 13:27:34.042851 ignition[693]: Ignition 2.20.0 Dec 13 13:27:34.042862 ignition[693]: Stage: fetch-offline Dec 13 13:27:34.042900 ignition[693]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:27:34.042909 ignition[693]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:27:34.042988 ignition[693]: parsed url from cmdline: "" Dec 13 13:27:34.042992 ignition[693]: no config URL provided Dec 13 13:27:34.042997 ignition[693]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 13:27:34.043005 ignition[693]: no config at "/usr/lib/ignition/user.ign" Dec 13 13:27:34.043031 ignition[693]: op(1): [started] loading QEMU firmware config module Dec 13 13:27:34.043036 ignition[693]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 13:27:34.055899 ignition[693]: op(1): [finished] loading QEMU firmware config module Dec 13 13:27:34.058320 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:27:34.069428 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 13:27:34.091452 systemd-networkd[781]: lo: Link UP Dec 13 13:27:34.091461 systemd-networkd[781]: lo: Gained carrier Dec 13 13:27:34.093365 systemd-networkd[781]: Enumeration completed Dec 13 13:27:34.093645 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:27:34.093846 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:27:34.093851 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:27:34.095456 systemd-networkd[781]: eth0: Link UP Dec 13 13:27:34.095460 systemd-networkd[781]: eth0: Gained carrier Dec 13 13:27:34.095469 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 13:27:34.096154 systemd[1]: Reached target network.target - Network. Dec 13 13:27:34.107892 ignition[693]: parsing config with SHA512: 0e746118b021e8d667308537a699f63238f7625d9556f7043b607d2bc5b7ebc0b686ecdd3bb7f36e28ec04c95218707f94d074d7612a9e74e6bb5bbf25b5e53c Dec 13 13:27:34.111877 unknown[693]: fetched base config from "system" Dec 13 13:27:34.111887 unknown[693]: fetched user config from "qemu" Dec 13 13:27:34.112289 ignition[693]: fetch-offline: fetch-offline passed Dec 13 13:27:34.112482 ignition[693]: Ignition finished successfully Dec 13 13:27:34.114696 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:27:34.116050 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 13:27:34.120349 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.95/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 13:27:34.123487 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 13:27:34.136047 ignition[784]: Ignition 2.20.0 Dec 13 13:27:34.136057 ignition[784]: Stage: kargs Dec 13 13:27:34.136207 ignition[784]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:27:34.136217 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:27:34.136967 ignition[784]: kargs: kargs passed Dec 13 13:27:34.137008 ignition[784]: Ignition finished successfully Dec 13 13:27:34.141914 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 13:27:34.159493 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Dec 13 13:27:34.170056 ignition[795]: Ignition 2.20.0 Dec 13 13:27:34.170065 ignition[795]: Stage: disks Dec 13 13:27:34.170222 ignition[795]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:27:34.170233 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:27:34.171198 ignition[795]: disks: disks passed Dec 13 13:27:34.171246 ignition[795]: Ignition finished successfully Dec 13 13:27:34.176425 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 13:27:34.177084 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 13:27:34.178752 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 13:27:34.180896 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 13:27:34.183171 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:27:34.185016 systemd[1]: Reached target basic.target - Basic System. Dec 13 13:27:34.197470 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 13:27:34.209692 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 13:27:34.216323 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 13:27:34.230391 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 13:27:34.311328 kernel: EXT4-fs (vda9): mounted filesystem 8801d4fe-2f40-4e12-9140-c192f2e7d668 r/w with ordered data mode. Quota mode: none. Dec 13 13:27:34.312072 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 13:27:34.313067 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 13:27:34.319369 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 13:27:34.321242 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 13:27:34.322739 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Dec 13 13:27:34.322788 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 13:27:34.334395 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (813) Dec 13 13:27:34.334429 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:27:34.334444 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:27:34.334458 kernel: BTRFS info (device vda6): using free space tree Dec 13 13:27:34.322814 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:27:34.329213 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 13:27:34.339284 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 13:27:34.335291 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 13:27:34.339744 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 13:27:34.370356 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 13:27:34.373961 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Dec 13 13:27:34.377643 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 13:27:34.381001 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 13:27:34.454461 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 13:27:34.462433 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 13:27:34.465534 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Dec 13 13:27:34.470328 kernel: BTRFS info (device vda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:27:34.490096 ignition[926]: INFO : Ignition 2.20.0 Dec 13 13:27:34.490096 ignition[926]: INFO : Stage: mount Dec 13 13:27:34.492634 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:27:34.492634 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:27:34.490537 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 13:27:34.496237 ignition[926]: INFO : mount: mount passed Dec 13 13:27:34.496987 ignition[926]: INFO : Ignition finished successfully Dec 13 13:27:34.499532 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 13:27:34.507406 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 13:27:34.931129 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 13:27:34.943493 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 13:27:34.950324 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (940) Dec 13 13:27:34.950362 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:27:34.952039 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:27:34.952060 kernel: BTRFS info (device vda6): using free space tree Dec 13 13:27:34.955384 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 13:27:34.956635 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 13:27:34.975262 ignition[957]: INFO : Ignition 2.20.0 Dec 13 13:27:34.975262 ignition[957]: INFO : Stage: files Dec 13 13:27:34.977258 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:27:34.977258 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:27:34.977258 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Dec 13 13:27:34.977258 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 13:27:34.977258 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 13:27:34.983920 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 13:27:34.983920 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 13:27:34.983920 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 13:27:34.983920 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 13:27:34.983920 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 13:27:34.979628 unknown[957]: wrote ssh authorized keys file for user: core Dec 13 13:27:35.017338 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 13:27:35.094822 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 13:27:35.097028 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 13:27:35.097028 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 
13:27:35.097028 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 13:27:35.097028 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 13:27:35.097028 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 13:27:35.097028 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 13:27:35.097028 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 13:27:35.097028 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 13:27:35.097028 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:27:35.097028 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:27:35.097028 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 13:27:35.097028 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 13:27:35.097028 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 13:27:35.097028 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Dec 13 13:27:35.598639 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 13:27:35.856430 systemd-networkd[781]: eth0: Gained IPv6LL Dec 13 13:27:35.965631 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 13:27:35.965631 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 13:27:35.969122 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 13:27:35.971291 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 13:27:35.971291 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 13:27:35.971291 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 13 13:27:35.975691 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 13:27:35.975691 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 13:27:35.975691 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 13 13:27:35.975691 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 13:27:35.997460 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 13:27:36.001827 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 
13:27:36.003464 ignition[957]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 13:27:36.003464 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 13 13:27:36.006251 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 13:27:36.007692 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:27:36.009458 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:27:36.009458 ignition[957]: INFO : files: files passed Dec 13 13:27:36.011892 ignition[957]: INFO : Ignition finished successfully Dec 13 13:27:36.013853 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 13:27:36.020476 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 13:27:36.022794 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 13:27:36.024081 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 13:27:36.024204 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 13:27:36.037745 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory Dec 13 13:27:36.041440 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:27:36.041440 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:27:36.044514 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:27:36.048109 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Dec 13 13:27:36.048718 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 13:27:36.057438 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 13:27:36.079499 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 13:27:36.079614 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 13:27:36.080115 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 13:27:36.083180 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 13:27:36.083710 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 13:27:36.084444 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 13:27:36.102655 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:27:36.117475 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 13:27:36.129430 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:27:36.129912 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:27:36.130245 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 13:27:36.130764 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 13:27:36.130862 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:27:36.136018 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 13:27:36.136372 systemd[1]: Stopped target basic.target - Basic System. Dec 13 13:27:36.136848 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 13:27:36.137176 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:27:36.137675 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Dec 13 13:27:36.138006 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 13:27:36.138344 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 13:27:36.138849 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 13:27:36.139170 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 13:27:36.139669 systemd[1]: Stopped target swap.target - Swaps. Dec 13 13:27:36.139971 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 13:27:36.140072 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:27:36.140800 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:27:36.141161 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:27:36.141622 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 13:27:36.141714 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:27:36.141963 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 13:27:36.142063 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 13:27:36.165864 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 13:27:36.165967 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:27:36.166267 systemd[1]: Stopped target paths.target - Path Units. Dec 13 13:27:36.166692 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 13:27:36.168699 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:27:36.172311 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 13:27:36.172784 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 13:27:36.173114 systemd[1]: iscsid.socket: Deactivated successfully. 
Dec 13 13:27:36.173200 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:27:36.177823 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 13:27:36.177903 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 13:27:36.180113 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 13:27:36.180217 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:27:36.181857 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 13:27:36.181957 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 13:27:36.197425 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 13:27:36.199383 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 13:27:36.199797 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 13:27:36.199906 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:27:36.200195 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 13:27:36.200286 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:27:36.208266 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 13:27:36.208805 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 13:27:36.211452 ignition[1013]: INFO : Ignition 2.20.0 Dec 13 13:27:36.211452 ignition[1013]: INFO : Stage: umount Dec 13 13:27:36.211452 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:27:36.211452 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:27:36.211452 ignition[1013]: INFO : umount: umount passed Dec 13 13:27:36.211452 ignition[1013]: INFO : Ignition finished successfully Dec 13 13:27:36.211910 systemd[1]: ignition-mount.service: Deactivated successfully. 
Dec 13 13:27:36.212037 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 13:27:36.212835 systemd[1]: Stopped target network.target - Network. Dec 13 13:27:36.214223 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 13:27:36.214274 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 13:27:36.216241 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 13:27:36.216333 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 13:27:36.218894 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 13:27:36.218943 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 13:27:36.219209 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 13:27:36.219253 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 13:27:36.236524 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 13:27:36.237095 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 13:27:36.240954 systemd-networkd[781]: eth0: DHCPv6 lease lost Dec 13 13:27:36.243676 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 13:27:36.243826 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 13:27:36.246541 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 13:27:36.246663 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 13:27:36.250043 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 13:27:36.250131 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:27:36.258391 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 13:27:36.259479 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 13:27:36.259543 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Dec 13 13:27:36.262236 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 13:27:36.262309 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:27:36.264881 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 13:27:36.264937 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 13:27:36.267676 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 13:27:36.267726 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:27:36.269367 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:27:36.273282 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 13:27:36.282095 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 13:27:36.282241 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 13:27:36.287086 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 13:27:36.287261 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:27:36.289604 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 13:27:36.289654 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 13:27:36.291569 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 13:27:36.291605 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:27:36.293549 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 13:27:36.293597 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:27:36.295812 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 13:27:36.295859 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 13:27:36.297739 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Dec 13 13:27:36.297785 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:27:36.310441 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 13:27:36.311652 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 13:27:36.311705 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:27:36.314378 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 13:27:36.314423 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:27:36.315846 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 13:27:36.315891 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:27:36.318444 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:27:36.318490 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:27:36.321084 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 13:27:36.321184 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 13:27:36.449429 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 13:27:36.449550 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 13:27:36.451573 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 13:27:36.453220 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 13:27:36.453270 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 13:27:36.466610 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 13:27:36.473853 systemd[1]: Switching root. Dec 13 13:27:36.506926 systemd-journald[194]: Journal stopped Dec 13 13:27:37.688005 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
Dec 13 13:27:37.688074 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 13:27:37.688102 kernel: SELinux: policy capability open_perms=1 Dec 13 13:27:37.688114 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 13:27:37.688127 kernel: SELinux: policy capability always_check_network=0 Dec 13 13:27:37.688138 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 13:27:37.688155 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 13:27:37.688167 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 13:27:37.688178 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 13:27:37.688189 kernel: audit: type=1403 audit(1734096456.991:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 13:27:37.688207 systemd[1]: Successfully loaded SELinux policy in 39.544ms. Dec 13 13:27:37.688225 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.117ms. Dec 13 13:27:37.688238 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 13:27:37.688250 systemd[1]: Detected virtualization kvm. Dec 13 13:27:37.688262 systemd[1]: Detected architecture x86-64. Dec 13 13:27:37.688274 systemd[1]: Detected first boot. Dec 13 13:27:37.688286 systemd[1]: Initializing machine ID from VM UUID. Dec 13 13:27:37.688315 zram_generator::config[1058]: No configuration found. Dec 13 13:27:37.688329 systemd[1]: Populated /etc with preset unit settings. Dec 13 13:27:37.688344 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 13:27:37.688357 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Dec 13 13:27:37.688369 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 13:27:37.688382 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 13:27:37.688394 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 13:27:37.688406 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 13:27:37.688419 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 13:27:37.688431 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 13:27:37.688443 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 13:27:37.688458 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 13:27:37.688470 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 13:27:37.688482 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:27:37.688494 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:27:37.688506 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 13:27:37.688519 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 13:27:37.688531 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 13:27:37.688548 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 13:27:37.688560 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 13:27:37.688575 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:27:37.688587 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Dec 13 13:27:37.688598 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 13:27:37.688611 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 13:27:37.688623 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 13:27:37.688635 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:27:37.688647 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 13:27:37.688666 systemd[1]: Reached target slices.target - Slice Units. Dec 13 13:27:37.688678 systemd[1]: Reached target swap.target - Swaps. Dec 13 13:27:37.688691 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 13:27:37.688703 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 13:27:37.688715 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:27:37.688728 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 13:27:37.688739 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:27:37.688752 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 13:27:37.688764 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 13:27:37.688776 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 13:27:37.688790 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 13:27:37.688803 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:27:37.688815 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 13:27:37.688827 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 13:27:37.688839 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Dec 13 13:27:37.688852 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 13:27:37.688864 systemd[1]: Reached target machines.target - Containers. Dec 13 13:27:37.688876 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 13:27:37.688890 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:27:37.688903 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 13:27:37.688915 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 13:27:37.688927 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:27:37.688938 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 13:27:37.688952 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:27:37.688963 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 13:27:37.688976 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:27:37.688990 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 13:27:37.689003 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 13:27:37.689014 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 13:27:37.689026 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 13:27:37.689038 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 13:27:37.689050 kernel: loop: module loaded Dec 13 13:27:37.689061 kernel: fuse: init (API version 7.39) Dec 13 13:27:37.689073 systemd[1]: Starting systemd-journald.service - Journal Service... 
Dec 13 13:27:37.689086 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 13:27:37.689098 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 13:27:37.689112 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 13:27:37.689141 systemd-journald[1128]: Collecting audit messages is disabled. Dec 13 13:27:37.689163 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 13:27:37.689175 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 13:27:37.689187 systemd-journald[1128]: Journal started Dec 13 13:27:37.689212 systemd-journald[1128]: Runtime Journal (/run/log/journal/3a1218673aa14bb985c8fd0dcf707e05) is 6.0M, max 48.3M, 42.3M free. Dec 13 13:27:37.474761 systemd[1]: Queued start job for default target multi-user.target. Dec 13 13:27:37.491093 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 13:27:37.491538 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 13:27:37.690508 systemd[1]: Stopped verity-setup.service. Dec 13 13:27:37.694742 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:27:37.696535 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 13:27:37.697269 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 13:27:37.698586 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 13:27:37.699876 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 13:27:37.701315 kernel: ACPI: bus type drm_connector registered Dec 13 13:27:37.701564 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 13:27:37.702844 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Dec 13 13:27:37.704064 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 13:27:37.705310 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 13:27:37.706827 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:27:37.708375 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 13:27:37.708541 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 13:27:37.710023 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:27:37.710186 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:27:37.711637 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 13:27:37.711803 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 13:27:37.713501 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:27:37.713666 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:27:37.715258 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 13:27:37.715438 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 13:27:37.716812 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:27:37.716974 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:27:37.718362 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 13:27:37.719759 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 13:27:37.721276 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 13:27:37.736189 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 13:27:37.748396 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Dec 13 13:27:37.750590 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 13:27:37.751761 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 13:27:37.751794 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 13:27:37.753722 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 13:27:37.755971 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 13:27:37.760423 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 13:27:37.762253 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:27:37.764434 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 13:27:37.768208 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 13:27:37.769597 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 13:27:37.771507 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 13:27:37.772789 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 13:27:37.776445 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:27:37.782417 systemd-journald[1128]: Time spent on flushing to /var/log/journal/3a1218673aa14bb985c8fd0dcf707e05 is 16.373ms for 950 entries. Dec 13 13:27:37.782417 systemd-journald[1128]: System Journal (/var/log/journal/3a1218673aa14bb985c8fd0dcf707e05) is 8.0M, max 195.6M, 187.6M free. Dec 13 13:27:37.815542 systemd-journald[1128]: Received client request to flush runtime journal. 
Dec 13 13:27:37.815576 kernel: loop0: detected capacity change from 0 to 205544 Dec 13 13:27:37.781171 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 13:27:37.784596 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 13:27:37.789820 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:27:37.793781 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 13:27:37.795329 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 13:27:37.797206 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 13:27:37.798819 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 13:27:37.806165 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 13:27:37.813904 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 13:27:37.819820 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 13:27:37.821898 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 13:27:37.824478 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:27:37.834180 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 13:27:37.837651 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Dec 13 13:27:37.837667 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Dec 13 13:27:37.841411 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 13:27:37.844920 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Dec 13 13:27:37.845629 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:27:37.847453 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 13:27:37.857499 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 13:27:37.872317 kernel: loop1: detected capacity change from 0 to 138184 Dec 13 13:27:37.883678 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 13:27:37.891601 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 13:27:37.908350 kernel: loop2: detected capacity change from 0 to 141000 Dec 13 13:27:37.908467 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Dec 13 13:27:37.908487 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Dec 13 13:27:37.914970 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:27:37.937323 kernel: loop3: detected capacity change from 0 to 205544 Dec 13 13:27:37.946317 kernel: loop4: detected capacity change from 0 to 138184 Dec 13 13:27:37.959318 kernel: loop5: detected capacity change from 0 to 141000 Dec 13 13:27:37.972152 (sd-merge)[1201]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 13 13:27:37.972773 (sd-merge)[1201]: Merged extensions into '/usr'. Dec 13 13:27:37.978135 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 13:27:37.978151 systemd[1]: Reloading... Dec 13 13:27:38.029511 zram_generator::config[1226]: No configuration found. Dec 13 13:27:38.095833 ldconfig[1167]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Dec 13 13:27:38.161291 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:27:38.209661 systemd[1]: Reloading finished in 230 ms. Dec 13 13:27:38.248134 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 13:27:38.249694 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 13:27:38.262479 systemd[1]: Starting ensure-sysext.service... Dec 13 13:27:38.264631 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 13:27:38.272043 systemd[1]: Reloading requested from client PID 1264 ('systemctl') (unit ensure-sysext.service)... Dec 13 13:27:38.272059 systemd[1]: Reloading... Dec 13 13:27:38.292095 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 13:27:38.292801 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 13:27:38.293877 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 13:27:38.294232 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Dec 13 13:27:38.294398 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Dec 13 13:27:38.324413 zram_generator::config[1292]: No configuration found. Dec 13 13:27:38.338885 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 13:27:38.338900 systemd-tmpfiles[1265]: Skipping /boot Dec 13 13:27:38.350899 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot. 
Dec 13 13:27:38.350916 systemd-tmpfiles[1265]: Skipping /boot Dec 13 13:27:38.431502 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:27:38.480897 systemd[1]: Reloading finished in 208 ms. Dec 13 13:27:38.501445 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 13:27:38.515767 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:27:38.524790 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 13:27:38.527249 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 13:27:38.529674 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 13:27:38.534336 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 13:27:38.538489 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:27:38.541559 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 13:27:38.545443 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:27:38.545614 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:27:38.552337 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:27:38.563977 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:27:38.567872 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:27:38.569178 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Dec 13 13:27:38.571578 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 13:27:38.572147 systemd-udevd[1336]: Using default interface naming scheme 'v255'.
Dec 13 13:27:38.572672 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:27:38.573992 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 13:27:38.575736 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:27:38.575894 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:27:38.577686 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:27:38.577848 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:27:38.579829 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:27:38.579993 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:27:38.591947 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 13:27:38.592900 augenrules[1364]: No rules
Dec 13 13:27:38.594031 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 13:27:38.594250 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 13:27:38.598695 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:27:38.598881 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:27:38.610646 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:27:38.614538 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:27:38.616871 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:27:38.618456 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:27:38.620730 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 13:27:38.621788 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:27:38.622673 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:27:38.624749 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 13:27:38.627528 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 13:27:38.629935 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:27:38.632447 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:27:38.634095 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:27:38.634278 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:27:38.636779 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:27:38.636954 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:27:38.639764 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 13:27:38.657329 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1394)
Dec 13 13:27:38.661321 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1394)
Dec 13 13:27:38.665881 systemd[1]: Finished ensure-sysext.service.
Dec 13 13:27:38.671632 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 13:27:38.675403 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1396)
Dec 13 13:27:38.679976 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:27:38.687474 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 13:27:38.688791 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:27:38.690176 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:27:38.693273 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 13:27:38.695719 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:27:38.705462 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:27:38.707506 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:27:38.710468 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 13:27:38.714936 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 13:27:38.716788 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 13:27:38.716815 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:27:38.717458 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:27:38.719350 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:27:38.721021 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 13:27:38.721176 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 13:27:38.722759 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:27:38.722926 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:27:38.724697 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:27:38.724851 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:27:38.734352 augenrules[1412]: /sbin/augenrules: No change
Dec 13 13:27:38.742436 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Dec 13 13:27:38.742605 augenrules[1441]: No rules
Dec 13 13:27:38.747832 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 13:27:38.748266 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 13:27:38.752154 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:27:38.752232 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 13:27:38.766317 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Dec 13 13:27:38.772311 kernel: ACPI: button: Power Button [PWRF]
Dec 13 13:27:38.788941 systemd-resolved[1334]: Positive Trust Anchors:
Dec 13 13:27:38.788959 systemd-resolved[1334]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 13:27:38.788992 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 13:27:38.807086 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 13:27:38.807412 systemd-resolved[1334]: Defaulting to hostname 'linux'.
Dec 13 13:27:38.809103 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 13:27:38.810364 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:27:38.837458 systemd-networkd[1424]: lo: Link UP
Dec 13 13:27:38.837469 systemd-networkd[1424]: lo: Gained carrier
Dec 13 13:27:38.839025 systemd-networkd[1424]: Enumeration completed
Dec 13 13:27:38.848594 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 13:27:38.850675 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 13:27:38.852123 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 13:27:38.856413 systemd-networkd[1424]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:27:38.856543 systemd-networkd[1424]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 13:27:38.858435 systemd-networkd[1424]: eth0: Link UP
Dec 13 13:27:38.858536 systemd-networkd[1424]: eth0: Gained carrier
Dec 13 13:27:38.858632 systemd-networkd[1424]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:27:38.859027 systemd[1]: Reached target network.target - Network.
Dec 13 13:27:38.860727 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 13:27:38.873386 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 13 13:27:38.873774 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 13:27:38.874878 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 13:27:38.875020 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 13:27:38.873235 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 13:27:38.875864 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 13:27:38.882343 kernel: kvm_amd: TSC scaling supported
Dec 13 13:27:38.882370 kernel: kvm_amd: Nested Virtualization enabled
Dec 13 13:27:38.882384 kernel: kvm_amd: Nested Paging enabled
Dec 13 13:27:38.882396 kernel: kvm_amd: LBR virtualization supported
Dec 13 13:27:38.884404 systemd-networkd[1424]: eth0: DHCPv4 address 10.0.0.95/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 13:27:38.884552 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:27:38.885870 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Dec 13 13:27:38.885894 kernel: kvm_amd: Virtual GIF supported
Dec 13 13:27:38.886091 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection.
Dec 13 13:27:39.765040 systemd-resolved[1334]: Clock change detected. Flushing caches.
Dec 13 13:27:39.765135 systemd-timesyncd[1425]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 13 13:27:39.765210 systemd-timesyncd[1425]: Initial clock synchronization to Fri 2024-12-13 13:27:39.764970 UTC.
Dec 13 13:27:39.783892 kernel: EDAC MC: Ver: 3.0.0
Dec 13 13:27:39.833304 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 13:27:39.847956 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:27:39.862059 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 13:27:39.870249 lvm[1463]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 13:27:39.900823 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 13:27:39.902339 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:27:39.903463 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 13:27:39.904603 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 13:27:39.905848 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 13:27:39.907252 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 13:27:39.908395 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 13:27:39.909615 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 13:27:39.910817 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 13:27:39.910853 systemd[1]: Reached target paths.target - Path Units.
Dec 13 13:27:39.911736 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 13:27:39.913451 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 13:27:39.916022 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 13:27:39.924978 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 13:27:39.927206 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 13:27:39.928754 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 13:27:39.929912 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 13:27:39.930870 systemd[1]: Reached target basic.target - Basic System.
Dec 13 13:27:39.931816 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 13:27:39.931864 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 13:27:39.932787 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 13:27:39.935030 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 13:27:39.939251 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 13:27:39.943034 lvm[1467]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 13:27:39.943176 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 13:27:39.944276 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 13:27:39.946767 jq[1470]: false
Dec 13 13:27:39.947367 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 13:27:39.950371 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 13:27:39.956014 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 13:27:39.960626 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 13:27:39.966151 extend-filesystems[1471]: Found loop3
Dec 13 13:27:39.971230 extend-filesystems[1471]: Found loop4
Dec 13 13:27:39.971230 extend-filesystems[1471]: Found loop5
Dec 13 13:27:39.971230 extend-filesystems[1471]: Found sr0
Dec 13 13:27:39.971230 extend-filesystems[1471]: Found vda
Dec 13 13:27:39.971230 extend-filesystems[1471]: Found vda1
Dec 13 13:27:39.971230 extend-filesystems[1471]: Found vda2
Dec 13 13:27:39.971230 extend-filesystems[1471]: Found vda3
Dec 13 13:27:39.971230 extend-filesystems[1471]: Found usr
Dec 13 13:27:39.971230 extend-filesystems[1471]: Found vda4
Dec 13 13:27:39.971230 extend-filesystems[1471]: Found vda6
Dec 13 13:27:39.971230 extend-filesystems[1471]: Found vda7
Dec 13 13:27:39.971230 extend-filesystems[1471]: Found vda9
Dec 13 13:27:39.971230 extend-filesystems[1471]: Checking size of /dev/vda9
Dec 13 13:27:39.969667 dbus-daemon[1469]: [system] SELinux support is enabled
Dec 13 13:27:39.975029 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 13:27:39.976637 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 13:27:39.977310 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 13:27:39.981016 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 13:27:39.983821 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 13:27:39.985996 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 13:27:39.990794 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 13:27:39.993313 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 13:27:39.995759 extend-filesystems[1471]: Resized partition /dev/vda9
Dec 13 13:27:39.993565 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 13:27:39.994256 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 13:27:39.994493 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 13:27:39.998478 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 13:27:39.998727 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 13:27:40.000399 extend-filesystems[1493]: resize2fs 1.47.1 (20-May-2024)
Dec 13 13:27:40.006427 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Dec 13 13:27:40.007415 jq[1490]: true
Dec 13 13:27:40.011847 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1374)
Dec 13 13:27:40.015670 (ntainerd)[1498]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 13:27:40.018400 update_engine[1487]: I20241213 13:27:40.017120 1487 main.cc:92] Flatcar Update Engine starting
Dec 13 13:27:40.021050 update_engine[1487]: I20241213 13:27:40.020574 1487 update_check_scheduler.cc:74] Next update check in 11m43s
Dec 13 13:27:40.024614 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 13:27:40.024653 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 13:27:40.025165 jq[1499]: true
Dec 13 13:27:40.026441 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 13:27:40.026467 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 13:27:40.030814 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 13:27:40.037499 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 13:27:40.039228 systemd-logind[1484]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 13:27:40.039258 systemd-logind[1484]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 13:27:40.043157 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Dec 13 13:27:40.042617 systemd-logind[1484]: New seat seat0.
Dec 13 13:27:40.068628 tar[1494]: linux-amd64/helm
Dec 13 13:27:40.043475 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 13:27:40.071064 extend-filesystems[1493]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 13:27:40.071064 extend-filesystems[1493]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 13:27:40.071064 extend-filesystems[1493]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Dec 13 13:27:40.090569 extend-filesystems[1471]: Resized filesystem in /dev/vda9
Dec 13 13:27:40.091943 sshd_keygen[1491]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 13:27:40.072879 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 13:27:40.092137 bash[1523]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 13:27:40.073094 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 13:27:40.094208 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 13:27:40.097710 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 13 13:27:40.101506 locksmithd[1508]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 13:27:40.104957 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 13:27:40.115100 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 13:27:40.122188 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 13:27:40.122436 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 13:27:40.131150 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 13:27:40.141077 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 13:27:40.149168 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 13:27:40.151816 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 13 13:27:40.153488 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 13:27:40.226381 containerd[1498]: time="2024-12-13T13:27:40.226307587Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Dec 13 13:27:40.248353 containerd[1498]: time="2024-12-13T13:27:40.248245516Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:27:40.249955 containerd[1498]: time="2024-12-13T13:27:40.249918835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:27:40.249955 containerd[1498]: time="2024-12-13T13:27:40.249945194Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 13:27:40.250009 containerd[1498]: time="2024-12-13T13:27:40.249968819Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 13:27:40.250160 containerd[1498]: time="2024-12-13T13:27:40.250131995Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 13:27:40.250160 containerd[1498]: time="2024-12-13T13:27:40.250152824Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 13:27:40.250244 containerd[1498]: time="2024-12-13T13:27:40.250217194Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:27:40.250244 containerd[1498]: time="2024-12-13T13:27:40.250239606Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:27:40.250444 containerd[1498]: time="2024-12-13T13:27:40.250413923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:27:40.250444 containerd[1498]: time="2024-12-13T13:27:40.250432408Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 13:27:40.250485 containerd[1498]: time="2024-12-13T13:27:40.250445863Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:27:40.250485 containerd[1498]: time="2024-12-13T13:27:40.250456373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 13:27:40.250565 containerd[1498]: time="2024-12-13T13:27:40.250544909Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:27:40.250821 containerd[1498]: time="2024-12-13T13:27:40.250791732Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:27:40.250948 containerd[1498]: time="2024-12-13T13:27:40.250919492Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:27:40.250948 containerd[1498]: time="2024-12-13T13:27:40.250937095Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 13:27:40.251056 containerd[1498]: time="2024-12-13T13:27:40.251029197Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 13:27:40.251103 containerd[1498]: time="2024-12-13T13:27:40.251087537Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 13:27:40.257381 containerd[1498]: time="2024-12-13T13:27:40.257309828Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 13:27:40.257381 containerd[1498]: time="2024-12-13T13:27:40.257358900Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 13:27:40.257381 containerd[1498]: time="2024-12-13T13:27:40.257374359Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 13:27:40.257490 containerd[1498]: time="2024-12-13T13:27:40.257389227Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 13:27:40.257490 containerd[1498]: time="2024-12-13T13:27:40.257403153Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 13:27:40.257544 containerd[1498]: time="2024-12-13T13:27:40.257529190Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 13:27:40.257743 containerd[1498]: time="2024-12-13T13:27:40.257728313Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 13:27:40.257869 containerd[1498]: time="2024-12-13T13:27:40.257854340Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 13:27:40.257889 containerd[1498]: time="2024-12-13T13:27:40.257872904Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 13:27:40.257909 containerd[1498]: time="2024-12-13T13:27:40.257886750Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 13:27:40.257909 containerd[1498]: time="2024-12-13T13:27:40.257900065Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 13:27:40.257944 containerd[1498]: time="2024-12-13T13:27:40.257912398Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 13:27:40.257944 containerd[1498]: time="2024-12-13T13:27:40.257923990Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 13:27:40.257989 containerd[1498]: time="2024-12-13T13:27:40.257955289Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 13:27:40.257989 containerd[1498]: time="2024-12-13T13:27:40.257969455Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 13:27:40.257989 containerd[1498]: time="2024-12-13T13:27:40.257981508Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 13:27:40.258040 containerd[1498]: time="2024-12-13T13:27:40.257993581Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 13:27:40.258040 containerd[1498]: time="2024-12-13T13:27:40.258004692Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 13:27:40.258040 containerd[1498]: time="2024-12-13T13:27:40.258023447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 13:27:40.258040 containerd[1498]: time="2024-12-13T13:27:40.258036000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 13:27:40.258113 containerd[1498]: time="2024-12-13T13:27:40.258050678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 13:27:40.258113 containerd[1498]: time="2024-12-13T13:27:40.258063301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 13:27:40.258113 containerd[1498]: time="2024-12-13T13:27:40.258074813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 13:27:40.258113 containerd[1498]: time="2024-12-13T13:27:40.258087286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 13:27:40.258113 containerd[1498]: time="2024-12-13T13:27:40.258098558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 13:27:40.258113 containerd[1498]: time="2024-12-13T13:27:40.258110310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 13:27:40.258221 containerd[1498]: time="2024-12-13T13:27:40.258122803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 13:27:40.258221 containerd[1498]: time="2024-12-13T13:27:40.258137721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 13:27:40.258221 containerd[1498]: time="2024-12-13T13:27:40.258148902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 13:27:40.258221 containerd[1498]: time="2024-12-13T13:27:40.258159422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 13:27:40.258221 containerd[1498]: time="2024-12-13T13:27:40.258175562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 13:27:40.258221 containerd[1498]: time="2024-12-13T13:27:40.258189658Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 13:27:40.258221 containerd[1498]: time="2024-12-13T13:27:40.258207682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 13:27:40.258221 containerd[1498]: time="2024-12-13T13:27:40.258219815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 13:27:40.258360 containerd[1498]: time="2024-12-13T13:27:40.258230415Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 13:27:40.258360 containerd[1498]: time="2024-12-13T13:27:40.258274207Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 13:27:40.258360 containerd[1498]: time="2024-12-13T13:27:40.258289075Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 13:27:40.258360 containerd[1498]: time="2024-12-13T13:27:40.258298763Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 13:27:40.258360 containerd[1498]: time="2024-12-13T13:27:40.258309844Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 13:27:40.258360 containerd[1498]: time="2024-12-13T13:27:40.258319292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 13:27:40.258360 containerd[1498]: time="2024-12-13T13:27:40.258331725Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 13:27:40.258360 containerd[1498]: time="2024-12-13T13:27:40.258341684Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 13:27:40.258360 containerd[1498]: time="2024-12-13T13:27:40.258351923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 13:27:40.258643 containerd[1498]: time="2024-12-13T13:27:40.258607863Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 13:27:40.258808 containerd[1498]: time="2024-12-13T13:27:40.258649711Z" level=info msg="Connect containerd service" Dec 13 13:27:40.258808 containerd[1498]: time="2024-12-13T13:27:40.258679117Z" level=info msg="using legacy CRI server" Dec 13 13:27:40.258808 containerd[1498]: time="2024-12-13T13:27:40.258685979Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 13:27:40.258808 containerd[1498]: time="2024-12-13T13:27:40.258787971Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 13:27:40.259374 containerd[1498]: time="2024-12-13T13:27:40.259350887Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:27:40.259609 containerd[1498]: time="2024-12-13T13:27:40.259512991Z" level=info msg="Start subscribing containerd event" Dec 13 13:27:40.259609 containerd[1498]: time="2024-12-13T13:27:40.259557674Z" level=info msg="Start recovering state" Dec 13 13:27:40.259667 containerd[1498]: time="2024-12-13T13:27:40.259642554Z" level=info msg="Start event monitor" Dec 13 13:27:40.259667 containerd[1498]: time="2024-12-13T13:27:40.259654426Z" level=info msg="Start 
snapshots syncer" Dec 13 13:27:40.259667 containerd[1498]: time="2024-12-13T13:27:40.259662361Z" level=info msg="Start cni network conf syncer for default" Dec 13 13:27:40.259722 containerd[1498]: time="2024-12-13T13:27:40.259670737Z" level=info msg="Start streaming server" Dec 13 13:27:40.259936 containerd[1498]: time="2024-12-13T13:27:40.259845935Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 13:27:40.259936 containerd[1498]: time="2024-12-13T13:27:40.259900147Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 13:27:40.260043 containerd[1498]: time="2024-12-13T13:27:40.260027977Z" level=info msg="containerd successfully booted in 0.034796s" Dec 13 13:27:40.260109 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 13:27:40.409547 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 13:27:40.411890 systemd[1]: Started sshd@0-10.0.0.95:22-10.0.0.1:47146.service - OpenSSH per-connection server daemon (10.0.0.1:47146). Dec 13 13:27:40.416991 tar[1494]: linux-amd64/LICENSE Dec 13 13:27:40.417063 tar[1494]: linux-amd64/README.md Dec 13 13:27:40.429881 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 13:27:40.459471 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 47146 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:27:40.461338 sshd-session[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:27:40.470436 systemd-logind[1484]: New session 1 of user core. Dec 13 13:27:40.471735 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 13:27:40.481178 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 13:27:40.494255 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 13:27:40.518095 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Dec 13 13:27:40.522082 (systemd)[1566]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 13:27:40.639175 systemd[1566]: Queued start job for default target default.target.
Dec 13 13:27:40.660233 systemd[1566]: Created slice app.slice - User Application Slice.
Dec 13 13:27:40.660259 systemd[1566]: Reached target paths.target - Paths.
Dec 13 13:27:40.660273 systemd[1566]: Reached target timers.target - Timers.
Dec 13 13:27:40.661878 systemd[1566]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 13:27:40.674121 systemd[1566]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 13:27:40.674285 systemd[1566]: Reached target sockets.target - Sockets.
Dec 13 13:27:40.674304 systemd[1566]: Reached target basic.target - Basic System.
Dec 13 13:27:40.674353 systemd[1566]: Reached target default.target - Main User Target.
Dec 13 13:27:40.674393 systemd[1566]: Startup finished in 145ms.
Dec 13 13:27:40.674865 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 13:27:40.695048 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 13:27:40.760119 systemd[1]: Started sshd@1-10.0.0.95:22-10.0.0.1:47152.service - OpenSSH per-connection server daemon (10.0.0.1:47152).
Dec 13 13:27:40.798589 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 47152 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:27:40.799740 sshd-session[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:27:40.803508 systemd-logind[1484]: New session 2 of user core.
Dec 13 13:27:40.811939 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 13:27:40.864552 sshd[1579]: Connection closed by 10.0.0.1 port 47152
Dec 13 13:27:40.864913 sshd-session[1577]: pam_unix(sshd:session): session closed for user core
Dec 13 13:27:40.878206 systemd[1]: sshd@1-10.0.0.95:22-10.0.0.1:47152.service: Deactivated successfully.
Dec 13 13:27:40.879672 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 13:27:40.880850 systemd-logind[1484]: Session 2 logged out. Waiting for processes to exit.
Dec 13 13:27:40.881986 systemd[1]: Started sshd@2-10.0.0.95:22-10.0.0.1:47162.service - OpenSSH per-connection server daemon (10.0.0.1:47162).
Dec 13 13:27:40.884032 systemd-logind[1484]: Removed session 2.
Dec 13 13:27:40.918247 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 47162 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:27:40.919446 sshd-session[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:27:40.922904 systemd-logind[1484]: New session 3 of user core.
Dec 13 13:27:40.929933 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 13:27:40.956973 systemd-networkd[1424]: eth0: Gained IPv6LL
Dec 13 13:27:40.960006 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 13:27:40.961714 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 13:27:40.975046 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Dec 13 13:27:40.977794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:27:40.979901 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 13:27:40.986862 sshd[1586]: Connection closed by 10.0.0.1 port 47162
Dec 13 13:27:40.987230 sshd-session[1584]: pam_unix(sshd:session): session closed for user core
Dec 13 13:27:40.990160 systemd-logind[1484]: Session 3 logged out. Waiting for processes to exit.
Dec 13 13:27:40.990361 systemd[1]: sshd@2-10.0.0.95:22-10.0.0.1:47162.service: Deactivated successfully.
Dec 13 13:27:40.991968 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 13:27:40.995569 systemd-logind[1484]: Removed session 3.
Dec 13 13:27:41.000535 systemd[1]: coreos-metadata.service: Deactivated successfully.
Dec 13 13:27:41.000758 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Dec 13 13:27:41.002304 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 13:27:41.003547 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 13:27:41.582525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:27:41.584157 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 13:27:41.585408 systemd[1]: Startup finished in 650ms (kernel) + 5.300s (initrd) + 3.756s (userspace) = 9.707s.
Dec 13 13:27:41.594456 agetty[1553]: failed to open credentials directory
Dec 13 13:27:41.608123 (kubelet)[1612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:27:41.615996 agetty[1551]: failed to open credentials directory
Dec 13 13:27:41.992805 kubelet[1612]: E1213 13:27:41.992666 1612 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:27:41.996548 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:27:41.996802 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:27:51.000764 systemd[1]: Started sshd@3-10.0.0.95:22-10.0.0.1:47976.service - OpenSSH per-connection server daemon (10.0.0.1:47976).
Dec 13 13:27:51.038258 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 47976 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:27:51.039635 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:27:51.043197 systemd-logind[1484]: New session 4 of user core.
Dec 13 13:27:51.052941 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 13:27:51.106175 sshd[1627]: Connection closed by 10.0.0.1 port 47976
Dec 13 13:27:51.106625 sshd-session[1625]: pam_unix(sshd:session): session closed for user core
Dec 13 13:27:51.121330 systemd[1]: sshd@3-10.0.0.95:22-10.0.0.1:47976.service: Deactivated successfully.
Dec 13 13:27:51.123125 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 13:27:51.124533 systemd-logind[1484]: Session 4 logged out. Waiting for processes to exit.
Dec 13 13:27:51.125805 systemd[1]: Started sshd@4-10.0.0.95:22-10.0.0.1:47980.service - OpenSSH per-connection server daemon (10.0.0.1:47980).
Dec 13 13:27:51.126484 systemd-logind[1484]: Removed session 4.
Dec 13 13:27:51.162371 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 47980 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:27:51.163788 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:27:51.167467 systemd-logind[1484]: New session 5 of user core.
Dec 13 13:27:51.178993 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 13:27:51.231116 sshd[1634]: Connection closed by 10.0.0.1 port 47980
Dec 13 13:27:51.231648 sshd-session[1632]: pam_unix(sshd:session): session closed for user core
Dec 13 13:27:51.242850 systemd[1]: sshd@4-10.0.0.95:22-10.0.0.1:47980.service: Deactivated successfully.
Dec 13 13:27:51.244407 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 13:27:51.246063 systemd-logind[1484]: Session 5 logged out. Waiting for processes to exit.
Dec 13 13:27:51.247583 systemd[1]: Started sshd@5-10.0.0.95:22-10.0.0.1:47992.service - OpenSSH per-connection server daemon (10.0.0.1:47992).
Dec 13 13:27:51.248428 systemd-logind[1484]: Removed session 5.
Dec 13 13:27:51.287310 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 47992 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:27:51.289260 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:27:51.294783 systemd-logind[1484]: New session 6 of user core.
Dec 13 13:27:51.308052 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 13:27:51.365785 sshd[1641]: Connection closed by 10.0.0.1 port 47992
Dec 13 13:27:51.366155 sshd-session[1639]: pam_unix(sshd:session): session closed for user core
Dec 13 13:27:51.388207 systemd[1]: sshd@5-10.0.0.95:22-10.0.0.1:47992.service: Deactivated successfully.
Dec 13 13:27:51.390082 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 13:27:51.391733 systemd-logind[1484]: Session 6 logged out. Waiting for processes to exit.
Dec 13 13:27:51.401463 systemd[1]: Started sshd@6-10.0.0.95:22-10.0.0.1:47994.service - OpenSSH per-connection server daemon (10.0.0.1:47994).
Dec 13 13:27:51.402672 systemd-logind[1484]: Removed session 6.
Dec 13 13:27:51.436353 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 47994 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:27:51.437795 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:27:51.441871 systemd-logind[1484]: New session 7 of user core.
Dec 13 13:27:51.453959 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 13:27:51.584976 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 13:27:51.585412 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:27:51.599675 sudo[1649]: pam_unix(sudo:session): session closed for user root
Dec 13 13:27:51.601524 sshd[1648]: Connection closed by 10.0.0.1 port 47994
Dec 13 13:27:51.601918 sshd-session[1646]: pam_unix(sshd:session): session closed for user core
Dec 13 13:27:51.620522 systemd[1]: sshd@6-10.0.0.95:22-10.0.0.1:47994.service: Deactivated successfully.
Dec 13 13:27:51.622904 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 13:27:51.624506 systemd-logind[1484]: Session 7 logged out. Waiting for processes to exit.
Dec 13 13:27:51.626144 systemd[1]: Started sshd@7-10.0.0.95:22-10.0.0.1:48008.service - OpenSSH per-connection server daemon (10.0.0.1:48008).
Dec 13 13:27:51.627003 systemd-logind[1484]: Removed session 7.
Dec 13 13:27:51.665075 sshd[1654]: Accepted publickey for core from 10.0.0.1 port 48008 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:27:51.667283 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:27:51.671636 systemd-logind[1484]: New session 8 of user core.
Dec 13 13:27:51.687014 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 13 13:27:51.742707 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 13:27:51.743155 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:27:51.747171 sudo[1658]: pam_unix(sudo:session): session closed for user root
Dec 13 13:27:51.753534 sudo[1657]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 13 13:27:51.753892 sudo[1657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:27:51.777152 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 13:27:51.812407 augenrules[1680]: No rules
Dec 13 13:27:51.814453 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 13:27:51.814695 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 13:27:51.816204 sudo[1657]: pam_unix(sudo:session): session closed for user root
Dec 13 13:27:51.818107 sshd[1656]: Connection closed by 10.0.0.1 port 48008
Dec 13 13:27:51.818519 sshd-session[1654]: pam_unix(sshd:session): session closed for user core
Dec 13 13:27:51.833609 systemd[1]: sshd@7-10.0.0.95:22-10.0.0.1:48008.service: Deactivated successfully.
Dec 13 13:27:51.835802 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 13:27:51.837871 systemd-logind[1484]: Session 8 logged out. Waiting for processes to exit.
Dec 13 13:27:51.839345 systemd[1]: Started sshd@8-10.0.0.95:22-10.0.0.1:48022.service - OpenSSH per-connection server daemon (10.0.0.1:48022).
Dec 13 13:27:51.840250 systemd-logind[1484]: Removed session 8.
Dec 13 13:27:51.877611 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 48022 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:27:51.879137 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:27:51.883707 systemd-logind[1484]: New session 9 of user core.
Dec 13 13:27:51.892988 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 13:27:51.946223 sudo[1691]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 13:27:51.946567 sudo[1691]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:27:52.209448 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 13:27:52.221105 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 13:27:52.221199 (dockerd)[1711]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 13:27:52.222379 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:27:52.380222 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:27:52.385357 (kubelet)[1725]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:27:52.420560 kubelet[1725]: E1213 13:27:52.420514 1725 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:27:52.426650 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:27:52.426862 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:27:52.462505 dockerd[1711]: time="2024-12-13T13:27:52.462377817Z" level=info msg="Starting up"
Dec 13 13:27:53.476414 dockerd[1711]: time="2024-12-13T13:27:53.476227089Z" level=info msg="Loading containers: start."
Dec 13 13:27:53.693862 kernel: Initializing XFRM netlink socket
Dec 13 13:27:53.796622 systemd-networkd[1424]: docker0: Link UP
Dec 13 13:27:53.864944 dockerd[1711]: time="2024-12-13T13:27:53.864883868Z" level=info msg="Loading containers: done."
Dec 13 13:27:53.889329 dockerd[1711]: time="2024-12-13T13:27:53.889235013Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 13:27:53.889569 dockerd[1711]: time="2024-12-13T13:27:53.889383612Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Dec 13 13:27:53.889569 dockerd[1711]: time="2024-12-13T13:27:53.889556096Z" level=info msg="Daemon has completed initialization"
Dec 13 13:27:53.931866 dockerd[1711]: time="2024-12-13T13:27:53.931787530Z" level=info msg="API listen on /run/docker.sock"
Dec 13 13:27:53.932041 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 13 13:27:54.704849 containerd[1498]: time="2024-12-13T13:27:54.704775154Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\""
Dec 13 13:27:55.647997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1650030881.mount: Deactivated successfully.
Dec 13 13:27:56.721747 containerd[1498]: time="2024-12-13T13:27:56.721670262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:27:56.722465 containerd[1498]: time="2024-12-13T13:27:56.722383079Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=27975483"
Dec 13 13:27:56.724243 containerd[1498]: time="2024-12-13T13:27:56.724193175Z" level=info msg="ImageCreate event name:\"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:27:56.727460 containerd[1498]: time="2024-12-13T13:27:56.727423263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:27:56.728934 containerd[1498]: time="2024-12-13T13:27:56.728895625Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"27972283\" in 2.02406643s"
Dec 13 13:27:56.728999 containerd[1498]: time="2024-12-13T13:27:56.728936221Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\""
Dec 13 13:27:56.730678 containerd[1498]: time="2024-12-13T13:27:56.730645006Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\""
Dec 13 13:27:59.782879 containerd[1498]: time="2024-12-13T13:27:59.782789407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:27:59.783893 containerd[1498]: time="2024-12-13T13:27:59.783802888Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=24702157"
Dec 13 13:27:59.785503 containerd[1498]: time="2024-12-13T13:27:59.785453233Z" level=info msg="ImageCreate event name:\"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:27:59.789807 containerd[1498]: time="2024-12-13T13:27:59.789758509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:27:59.790802 containerd[1498]: time="2024-12-13T13:27:59.790755609Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"26147269\" in 3.060062913s"
Dec 13 13:27:59.790873 containerd[1498]: time="2024-12-13T13:27:59.790801145Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\""
Dec 13 13:27:59.791375 containerd[1498]: time="2024-12-13T13:27:59.791345145Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\""
Dec 13 13:28:01.126537 containerd[1498]: time="2024-12-13T13:28:01.126469998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:01.127311 containerd[1498]: time="2024-12-13T13:28:01.127243579Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=18652067"
Dec 13 13:28:01.128573 containerd[1498]: time="2024-12-13T13:28:01.128506578Z" level=info msg="ImageCreate event name:\"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:01.131641 containerd[1498]: time="2024-12-13T13:28:01.131597976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:01.132704 containerd[1498]: time="2024-12-13T13:28:01.132677752Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"20097197\" in 1.34129694s"
Dec 13 13:28:01.132752 containerd[1498]: time="2024-12-13T13:28:01.132705664Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\""
Dec 13 13:28:01.133200 containerd[1498]: time="2024-12-13T13:28:01.133172119Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\""
Dec 13 13:28:02.574620 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 13:28:02.581065 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:28:02.740518 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:28:02.745048 (kubelet)[2002]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:28:02.785871 kubelet[2002]: E1213 13:28:02.785802 2002 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:28:02.789046 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:28:02.789281 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:28:02.894757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3820416170.mount: Deactivated successfully.
Dec 13 13:28:04.250538 containerd[1498]: time="2024-12-13T13:28:04.250468676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:04.251299 containerd[1498]: time="2024-12-13T13:28:04.251257235Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230243"
Dec 13 13:28:04.252385 containerd[1498]: time="2024-12-13T13:28:04.252354614Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:04.254345 containerd[1498]: time="2024-12-13T13:28:04.254311544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:04.255084 containerd[1498]: time="2024-12-13T13:28:04.255055179Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 3.121852883s"
Dec 13 13:28:04.255084 containerd[1498]: time="2024-12-13T13:28:04.255082540Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\""
Dec 13 13:28:04.255537 containerd[1498]: time="2024-12-13T13:28:04.255514230Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 13:28:04.823334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2044616627.mount: Deactivated successfully.
Dec 13 13:28:05.950930 containerd[1498]: time="2024-12-13T13:28:05.950853522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:05.992511 containerd[1498]: time="2024-12-13T13:28:05.992426591Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Dec 13 13:28:06.054048 containerd[1498]: time="2024-12-13T13:28:06.053975379Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:06.114077 containerd[1498]: time="2024-12-13T13:28:06.113989258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:06.115008 containerd[1498]: time="2024-12-13T13:28:06.114950892Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.859391938s"
Dec 13 13:28:06.115055 containerd[1498]: time="2024-12-13T13:28:06.115008089Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 13:28:06.115855 containerd[1498]: time="2024-12-13T13:28:06.115608585Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 13 13:28:06.705144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3686680496.mount: Deactivated successfully.
Dec 13 13:28:06.710733 containerd[1498]: time="2024-12-13T13:28:06.710695659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:06.711453 containerd[1498]: time="2024-12-13T13:28:06.711414888Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Dec 13 13:28:06.712469 containerd[1498]: time="2024-12-13T13:28:06.712427066Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:06.714413 containerd[1498]: time="2024-12-13T13:28:06.714387594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:06.715092 containerd[1498]: time="2024-12-13T13:28:06.715066908Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 599.420561ms"
Dec 13 13:28:06.715140 containerd[1498]: time="2024-12-13T13:28:06.715092736Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Dec 13 13:28:06.715569 containerd[1498]: time="2024-12-13T13:28:06.715541168Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Dec 13 13:28:07.569091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2829560712.mount: Deactivated successfully.
Dec 13 13:28:11.714656 containerd[1498]: time="2024-12-13T13:28:11.714565749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:11.715344 containerd[1498]: time="2024-12-13T13:28:11.715258829Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973"
Dec 13 13:28:11.716743 containerd[1498]: time="2024-12-13T13:28:11.716677249Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:11.720872 containerd[1498]: time="2024-12-13T13:28:11.720807907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:11.721914 containerd[1498]: time="2024-12-13T13:28:11.721847627Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 5.006255694s"
Dec 13
13:28:11.721914 containerd[1498]: time="2024-12-13T13:28:11.721903763Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Dec 13 13:28:13.039618 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 13:28:13.047986 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:28:13.185670 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:28:13.190202 (kubelet)[2147]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:28:13.224398 kubelet[2147]: E1213 13:28:13.224339 2147 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:28:13.228312 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:28:13.228539 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:28:13.593384 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:28:13.604068 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:28:13.628669 systemd[1]: Reloading requested from client PID 2163 ('systemctl') (unit session-9.scope)... Dec 13 13:28:13.628684 systemd[1]: Reloading... Dec 13 13:28:13.712219 zram_generator::config[2208]: No configuration found. Dec 13 13:28:14.352388 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
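As an aside on the containerd entries above: each "Pulled image … in …" line reports the image's byte size and the wall-clock pull duration, so effective throughput can be derived directly from the log. A small sketch (the helper name is mine, not part of containerd) using the figures logged above:

```python
# Effective pull throughput for the image pulls logged above.
# Sizes (bytes) and durations (seconds) are copied from the
# containerd log lines; the helper itself is illustrative.

def pull_throughput_mib_s(size_bytes: int, seconds: float) -> float:
    """Bytes over wall-clock seconds, expressed in MiB/s."""
    return size_bytes / seconds / (1024 * 1024)

pulls = {
    "kube-proxy:v1.31.4": (30229262, 3.121852883),
    "coredns:v1.11.1":    (18182961, 1.859391938),
    "pause:3.10":         (320368,   0.599420561),
    "etcd:3.5.15-0":      (56909194, 5.006255694),
}

for image, (size, dur) in pulls.items():
    print(f"{image}: {pull_throughput_mib_s(size, dur):.1f} MiB/s")
```

The etcd pull, the largest image in this boot, comes out noticeably slower per byte than the tiny pause image, whose duration is dominated by fixed registry round-trip overhead rather than transfer time.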
Dec 13 13:28:14.461607 systemd[1]: Reloading finished in 832 ms. Dec 13 13:28:14.526741 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:28:14.530945 (kubelet)[2241]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:28:14.531991 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:28:14.532417 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 13:28:14.532751 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:28:14.546318 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:28:14.686317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:28:14.691098 (kubelet)[2253]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:28:14.729662 kubelet[2253]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:28:14.729662 kubelet[2253]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:28:14.729662 kubelet[2253]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 13:28:14.730107 kubelet[2253]: I1213 13:28:14.729715 2253 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:28:15.010582 kubelet[2253]: I1213 13:28:15.010461 2253 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 13:28:15.010582 kubelet[2253]: I1213 13:28:15.010499 2253 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:28:15.010874 kubelet[2253]: I1213 13:28:15.010809 2253 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 13:28:15.030231 kubelet[2253]: I1213 13:28:15.030178 2253 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:28:15.031074 kubelet[2253]: E1213 13:28:15.030973 2253 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.95:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" Dec 13 13:28:15.039738 kubelet[2253]: E1213 13:28:15.039701 2253 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 13:28:15.039738 kubelet[2253]: I1213 13:28:15.039728 2253 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 13:28:15.045475 kubelet[2253]: I1213 13:28:15.045440 2253 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 13:28:15.046417 kubelet[2253]: I1213 13:28:15.046394 2253 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 13:28:15.046597 kubelet[2253]: I1213 13:28:15.046560 2253 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:28:15.046762 kubelet[2253]: I1213 13:28:15.046591 2253 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Dec 13 13:28:15.046888 kubelet[2253]: I1213 13:28:15.046762 2253 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:28:15.046888 kubelet[2253]: I1213 13:28:15.046771 2253 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 13:28:15.046935 kubelet[2253]: I1213 13:28:15.046899 2253 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:28:15.048195 kubelet[2253]: I1213 13:28:15.048164 2253 kubelet.go:408] "Attempting to sync node with API server" Dec 13 13:28:15.048195 kubelet[2253]: I1213 13:28:15.048187 2253 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:28:15.048298 kubelet[2253]: I1213 13:28:15.048223 2253 kubelet.go:314] "Adding apiserver pod source" Dec 13 13:28:15.048298 kubelet[2253]: I1213 13:28:15.048239 2253 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:28:15.051849 kubelet[2253]: W1213 13:28:15.051411 2253 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Dec 13 13:28:15.051849 kubelet[2253]: E1213 13:28:15.051462 2253 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" Dec 13 13:28:15.054513 kubelet[2253]: I1213 13:28:15.053126 2253 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:28:15.054850 kubelet[2253]: I1213 13:28:15.054812 2253 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" 
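The NodeConfig dump a few entries above lists the default hard eviction thresholds (memory.available < 100Mi as an absolute quantity; nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5% as fractions of capacity). A minimal sketch of how such a threshold is evaluated, assuming either an absolute quantity or a percentage is set (helper names are mine, not kubelet source):

```python
# Sketch of hard-eviction threshold evaluation, mirroring the
# HardEvictionThresholds entries in the NodeConfig logged above.
# Illustrative only; not kubelet's actual implementation.

MI = 1024 * 1024

def breaches(available, capacity, quantity, percentage):
    """True when 'available' falls below the threshold, which is either
    an absolute quantity (bytes) or a fraction of total capacity."""
    threshold = quantity if quantity is not None else capacity * percentage
    return available < threshold

# memory.available < 100Mi (absolute quantity): 80Mi free breaches it.
print(breaches(available=80 * MI, capacity=4096 * MI,
               quantity=100 * MI, percentage=0))

# nodefs.available < 10% of capacity: 12% free does not breach.
print(breaches(available=12 * 1024 * MI, capacity=100 * 1024 * MI,
               quantity=None, percentage=0.1))
```

Each entry in the log also carries GracePeriod 0, which is what makes these *hard* thresholds: a breach triggers eviction immediately rather than after a soft-eviction grace window.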
Dec 13 13:28:15.055083 kubelet[2253]: W1213 13:28:15.055046 2253 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Dec 13 13:28:15.055130 kubelet[2253]: E1213 13:28:15.055089 2253 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" Dec 13 13:28:15.055584 kubelet[2253]: W1213 13:28:15.055559 2253 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 13:28:15.056188 kubelet[2253]: I1213 13:28:15.056176 2253 server.go:1269] "Started kubelet" Dec 13 13:28:15.056317 kubelet[2253]: I1213 13:28:15.056256 2253 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:28:15.058639 kubelet[2253]: I1213 13:28:15.056618 2253 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:28:15.058639 kubelet[2253]: I1213 13:28:15.056756 2253 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:28:15.058639 kubelet[2253]: I1213 13:28:15.057657 2253 server.go:460] "Adding debug handlers to kubelet server" Dec 13 13:28:15.058639 kubelet[2253]: I1213 13:28:15.058460 2253 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:28:15.058639 kubelet[2253]: I1213 13:28:15.058586 2253 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 13:28:15.058819 kubelet[2253]: I1213 13:28:15.058662 
2253 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 13:28:15.058819 kubelet[2253]: I1213 13:28:15.058732 2253 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 13:28:15.058819 kubelet[2253]: I1213 13:28:15.058778 2253 reconciler.go:26] "Reconciler: start to sync state" Dec 13 13:28:15.059042 kubelet[2253]: W1213 13:28:15.059003 2253 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Dec 13 13:28:15.059094 kubelet[2253]: E1213 13:28:15.059046 2253 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" Dec 13 13:28:15.059214 kubelet[2253]: E1213 13:28:15.059194 2253 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:28:15.059368 kubelet[2253]: E1213 13:28:15.059343 2253 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="200ms" Dec 13 13:28:15.061461 kubelet[2253]: E1213 13:28:15.059805 2253 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.95:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.95:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810bf931a12625f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 13:28:15.056151135 +0000 UTC m=+0.361059815,LastTimestamp:2024-12-13 13:28:15.056151135 +0000 UTC m=+0.361059815,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 13:28:15.062370 kubelet[2253]: E1213 13:28:15.062353 2253 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:28:15.062640 kubelet[2253]: I1213 13:28:15.062621 2253 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:28:15.063429 kubelet[2253]: I1213 13:28:15.063412 2253 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:28:15.063429 kubelet[2253]: I1213 13:28:15.063426 2253 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:28:15.077660 kubelet[2253]: I1213 13:28:15.077633 2253 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:28:15.077660 kubelet[2253]: I1213 13:28:15.077653 2253 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:28:15.077805 kubelet[2253]: I1213 13:28:15.077671 2253 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:28:15.080568 kubelet[2253]: I1213 13:28:15.080527 2253 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:28:15.082277 kubelet[2253]: I1213 13:28:15.082258 2253 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 13:28:15.082324 kubelet[2253]: I1213 13:28:15.082297 2253 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:28:15.082324 kubelet[2253]: I1213 13:28:15.082318 2253 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 13:28:15.082390 kubelet[2253]: E1213 13:28:15.082365 2253 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:28:15.082795 kubelet[2253]: W1213 13:28:15.082772 2253 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Dec 13 13:28:15.082871 kubelet[2253]: E1213 13:28:15.082811 2253 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" Dec 13 13:28:15.159585 kubelet[2253]: E1213 13:28:15.159549 2253 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:28:15.182954 kubelet[2253]: E1213 13:28:15.182872 2253 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 13:28:15.260533 kubelet[2253]: E1213 13:28:15.260473 2253 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:28:15.261116 kubelet[2253]: E1213 13:28:15.260951 2253 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: 
connection refused" interval="400ms" Dec 13 13:28:15.361528 kubelet[2253]: E1213 13:28:15.361458 2253 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:28:15.383192 kubelet[2253]: E1213 13:28:15.383071 2253 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 13:28:15.462212 kubelet[2253]: E1213 13:28:15.462121 2253 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:28:15.563184 kubelet[2253]: E1213 13:28:15.563000 2253 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:28:15.609968 kubelet[2253]: I1213 13:28:15.609885 2253 policy_none.go:49] "None policy: Start" Dec 13 13:28:15.610993 kubelet[2253]: I1213 13:28:15.610943 2253 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:28:15.610993 kubelet[2253]: I1213 13:28:15.610997 2253 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:28:15.656150 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 13:28:15.671358 kubelet[2253]: E1213 13:28:15.662258 2253 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="800ms" Dec 13 13:28:15.671358 kubelet[2253]: E1213 13:28:15.663252 2253 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:28:15.676848 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 13:28:15.680840 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 13 13:28:15.692721 kubelet[2253]: I1213 13:28:15.692685 2253 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:28:15.692939 kubelet[2253]: I1213 13:28:15.692913 2253 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 13:28:15.692972 kubelet[2253]: I1213 13:28:15.692932 2253 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 13:28:15.693576 kubelet[2253]: I1213 13:28:15.693144 2253 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:28:15.694773 kubelet[2253]: E1213 13:28:15.694756 2253 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 13:28:15.791500 systemd[1]: Created slice kubepods-burstable-pod705af2701a90f757438d0bc6ffe927c8.slice - libcontainer container kubepods-burstable-pod705af2701a90f757438d0bc6ffe927c8.slice. Dec 13 13:28:15.793846 kubelet[2253]: I1213 13:28:15.793805 2253 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 13:28:15.794241 kubelet[2253]: E1213 13:28:15.794194 2253 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Dec 13 13:28:15.802822 systemd[1]: Created slice kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice - libcontainer container kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice. Dec 13 13:28:15.816726 systemd[1]: Created slice kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice - libcontainer container kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice. 
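The "Created slice kubepods-burstable-pod<uid>.slice" entries above show how the kubelet's systemd cgroup driver (CgroupDriver "systemd" in the NodeConfig earlier) names per-pod slices: the QoS class parent plus the pod UID. The static-pod UIDs here are config hashes with no dashes; this sketch covers only that case and is an illustration of the naming pattern seen in this log, not kubelet source:

```python
# Compose the systemd slice name for a pod cgroup, matching the
# "Created slice kubepods-burstable-pod<uid>.slice" lines above.
# Guaranteed pods sit directly under kubepods.slice; burstable and
# besteffort pods get a QoS-class intermediate slice.

def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    parent = "kubepods" if qos_class == "guaranteed" else f"kubepods-{qos_class}"
    return f"{parent}-pod{pod_uid}.slice"

print(pod_slice_name("burstable", "705af2701a90f757438d0bc6ffe927c8"))
# -> kubepods-burstable-pod705af2701a90f757438d0bc6ffe927c8.slice
```

The three parent slices created earlier in the log (kubepods.slice, kubepods-burstable.slice, kubepods-besteffort.slice) are exactly the containers these per-pod slices nest under.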
Dec 13 13:28:15.865880 kubelet[2253]: I1213 13:28:15.865810 2253 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/705af2701a90f757438d0bc6ffe927c8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"705af2701a90f757438d0bc6ffe927c8\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:28:15.865880 kubelet[2253]: I1213 13:28:15.865873 2253 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/705af2701a90f757438d0bc6ffe927c8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"705af2701a90f757438d0bc6ffe927c8\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:28:15.865959 kubelet[2253]: I1213 13:28:15.865898 2253 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/705af2701a90f757438d0bc6ffe927c8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"705af2701a90f757438d0bc6ffe927c8\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:28:15.865959 kubelet[2253]: I1213 13:28:15.865922 2253 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:28:15.865959 kubelet[2253]: I1213 13:28:15.865940 2253 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 
13:28:15.866304 kubelet[2253]: I1213 13:28:15.866272 2253 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:28:15.866304 kubelet[2253]: I1213 13:28:15.866300 2253 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost" Dec 13 13:28:15.866358 kubelet[2253]: I1213 13:28:15.866318 2253 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:28:15.866358 kubelet[2253]: I1213 13:28:15.866338 2253 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:28:15.896480 kubelet[2253]: W1213 13:28:15.896414 2253 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Dec 13 13:28:15.896545 kubelet[2253]: E1213 13:28:15.896498 2253 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" Dec 13 13:28:15.996242 kubelet[2253]: I1213 13:28:15.996204 2253 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 13:28:15.996641 kubelet[2253]: E1213 13:28:15.996597 2253 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Dec 13 13:28:16.030101 kubelet[2253]: W1213 13:28:16.030059 2253 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Dec 13 13:28:16.030169 kubelet[2253]: E1213 13:28:16.030103 2253 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" Dec 13 13:28:16.100581 kubelet[2253]: E1213 13:28:16.100457 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:16.100997 containerd[1498]: time="2024-12-13T13:28:16.100956749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:705af2701a90f757438d0bc6ffe927c8,Namespace:kube-system,Attempt:0,}" Dec 13 13:28:16.115251 kubelet[2253]: E1213 13:28:16.115225 2253 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:16.115657 containerd[1498]: time="2024-12-13T13:28:16.115612087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,}" Dec 13 13:28:16.119814 kubelet[2253]: E1213 13:28:16.119794 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:16.120122 containerd[1498]: time="2024-12-13T13:28:16.120089740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,}" Dec 13 13:28:16.271644 kubelet[2253]: W1213 13:28:16.271563 2253 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Dec 13 13:28:16.271644 kubelet[2253]: E1213 13:28:16.271642 2253 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" Dec 13 13:28:16.333495 kubelet[2253]: W1213 13:28:16.333424 2253 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Dec 13 13:28:16.333592 kubelet[2253]: E1213 13:28:16.333505 2253 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" Dec 13 13:28:16.398509 kubelet[2253]: I1213 13:28:16.398393 2253 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 13:28:16.398796 kubelet[2253]: E1213 13:28:16.398767 2253 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Dec 13 13:28:16.463621 kubelet[2253]: E1213 13:28:16.463557 2253 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="1.6s" Dec 13 13:28:16.682543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3036492744.mount: Deactivated successfully. 
Dec 13 13:28:16.687992 containerd[1498]: time="2024-12-13T13:28:16.687961075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:28:16.692700 containerd[1498]: time="2024-12-13T13:28:16.692631159Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 13:28:16.693725 containerd[1498]: time="2024-12-13T13:28:16.693693554Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:28:16.695524 containerd[1498]: time="2024-12-13T13:28:16.695493916Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:28:16.696256 containerd[1498]: time="2024-12-13T13:28:16.696193039Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:28:16.697088 containerd[1498]: time="2024-12-13T13:28:16.697057150Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:28:16.697908 containerd[1498]: time="2024-12-13T13:28:16.697880583Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:28:16.699188 containerd[1498]: time="2024-12-13T13:28:16.699150901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:28:16.699978 
containerd[1498]: time="2024-12-13T13:28:16.699943574Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 598.809683ms" Dec 13 13:28:16.703494 containerd[1498]: time="2024-12-13T13:28:16.703463433Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 587.755359ms" Dec 13 13:28:16.704090 containerd[1498]: time="2024-12-13T13:28:16.704059095Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 583.8834ms" Dec 13 13:28:16.821474 containerd[1498]: time="2024-12-13T13:28:16.821252050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:28:16.821474 containerd[1498]: time="2024-12-13T13:28:16.821295584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:28:16.821474 containerd[1498]: time="2024-12-13T13:28:16.821305484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:16.821474 containerd[1498]: time="2024-12-13T13:28:16.821371461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:16.822183 containerd[1498]: time="2024-12-13T13:28:16.820567586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:28:16.822183 containerd[1498]: time="2024-12-13T13:28:16.822144917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:28:16.822183 containerd[1498]: time="2024-12-13T13:28:16.822155458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:16.822271 containerd[1498]: time="2024-12-13T13:28:16.822222788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:16.823182 containerd[1498]: time="2024-12-13T13:28:16.823125715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:28:16.823182 containerd[1498]: time="2024-12-13T13:28:16.823169389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:28:16.823267 containerd[1498]: time="2024-12-13T13:28:16.823183746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:16.823308 containerd[1498]: time="2024-12-13T13:28:16.823251037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:16.845988 systemd[1]: Started cri-containerd-43a95236499b81f9a952bd4e5a96e6de430c88234d409dafcb29c3b2d5d1590f.scope - libcontainer container 43a95236499b81f9a952bd4e5a96e6de430c88234d409dafcb29c3b2d5d1590f. 
Dec 13 13:28:16.847863 systemd[1]: Started cri-containerd-98fe6f1a58128891a087ad71da4b7ed828ce73ec5603820653000aa4a3c79fc6.scope - libcontainer container 98fe6f1a58128891a087ad71da4b7ed828ce73ec5603820653000aa4a3c79fc6. Dec 13 13:28:16.851431 systemd[1]: Started cri-containerd-19b499c66c21ec315cdf265c403a381b81cd1ba13d8844ecf1aeecfbee5086c9.scope - libcontainer container 19b499c66c21ec315cdf265c403a381b81cd1ba13d8844ecf1aeecfbee5086c9. Dec 13 13:28:16.888801 containerd[1498]: time="2024-12-13T13:28:16.888760176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:705af2701a90f757438d0bc6ffe927c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"98fe6f1a58128891a087ad71da4b7ed828ce73ec5603820653000aa4a3c79fc6\"" Dec 13 13:28:16.894306 kubelet[2253]: E1213 13:28:16.894274 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:16.896135 containerd[1498]: time="2024-12-13T13:28:16.895892993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,} returns sandbox id \"19b499c66c21ec315cdf265c403a381b81cd1ba13d8844ecf1aeecfbee5086c9\"" Dec 13 13:28:16.897454 kubelet[2253]: E1213 13:28:16.897428 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:16.898402 containerd[1498]: time="2024-12-13T13:28:16.898372600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,} returns sandbox id \"43a95236499b81f9a952bd4e5a96e6de430c88234d409dafcb29c3b2d5d1590f\"" Dec 13 13:28:16.899430 kubelet[2253]: E1213 13:28:16.899405 2253 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:16.899677 containerd[1498]: time="2024-12-13T13:28:16.899640884Z" level=info msg="CreateContainer within sandbox \"98fe6f1a58128891a087ad71da4b7ed828ce73ec5603820653000aa4a3c79fc6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 13:28:16.899789 containerd[1498]: time="2024-12-13T13:28:16.899766687Z" level=info msg="CreateContainer within sandbox \"19b499c66c21ec315cdf265c403a381b81cd1ba13d8844ecf1aeecfbee5086c9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 13:28:16.900936 containerd[1498]: time="2024-12-13T13:28:16.900907944Z" level=info msg="CreateContainer within sandbox \"43a95236499b81f9a952bd4e5a96e6de430c88234d409dafcb29c3b2d5d1590f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 13:28:17.200784 kubelet[2253]: I1213 13:28:17.200759 2253 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 13:28:17.201190 kubelet[2253]: E1213 13:28:17.201139 2253 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Dec 13 13:28:17.202746 kubelet[2253]: E1213 13:28:17.202714 2253 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.95:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" Dec 13 13:28:17.915039 containerd[1498]: time="2024-12-13T13:28:17.914976578Z" level=info msg="CreateContainer within sandbox \"98fe6f1a58128891a087ad71da4b7ed828ce73ec5603820653000aa4a3c79fc6\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7e4c79f67f603b7824187d40eb57827996b927b525d1a0c458477e6cf7c5f638\"" Dec 13 13:28:17.915644 containerd[1498]: time="2024-12-13T13:28:17.915593599Z" level=info msg="StartContainer for \"7e4c79f67f603b7824187d40eb57827996b927b525d1a0c458477e6cf7c5f638\"" Dec 13 13:28:17.948028 systemd[1]: Started cri-containerd-7e4c79f67f603b7824187d40eb57827996b927b525d1a0c458477e6cf7c5f638.scope - libcontainer container 7e4c79f67f603b7824187d40eb57827996b927b525d1a0c458477e6cf7c5f638. Dec 13 13:28:17.962011 containerd[1498]: time="2024-12-13T13:28:17.961968175Z" level=info msg="CreateContainer within sandbox \"43a95236499b81f9a952bd4e5a96e6de430c88234d409dafcb29c3b2d5d1590f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"de8217e1e5e74db1e9c9f86700c3944c19f6a392a689e43bde903a59369e8597\"" Dec 13 13:28:17.963554 containerd[1498]: time="2024-12-13T13:28:17.962550600Z" level=info msg="StartContainer for \"de8217e1e5e74db1e9c9f86700c3944c19f6a392a689e43bde903a59369e8597\"" Dec 13 13:28:17.990952 systemd[1]: Started cri-containerd-de8217e1e5e74db1e9c9f86700c3944c19f6a392a689e43bde903a59369e8597.scope - libcontainer container de8217e1e5e74db1e9c9f86700c3944c19f6a392a689e43bde903a59369e8597. 
Dec 13 13:28:18.021174 containerd[1498]: time="2024-12-13T13:28:18.021129027Z" level=info msg="CreateContainer within sandbox \"19b499c66c21ec315cdf265c403a381b81cd1ba13d8844ecf1aeecfbee5086c9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c87f7cedc9dcaa828f657b6958d1c615f788ef6dc6730b7d885789c8e2db9b68\"" Dec 13 13:28:18.021269 containerd[1498]: time="2024-12-13T13:28:18.021196167Z" level=info msg="StartContainer for \"7e4c79f67f603b7824187d40eb57827996b927b525d1a0c458477e6cf7c5f638\" returns successfully" Dec 13 13:28:18.021609 containerd[1498]: time="2024-12-13T13:28:18.021583203Z" level=info msg="StartContainer for \"c87f7cedc9dcaa828f657b6958d1c615f788ef6dc6730b7d885789c8e2db9b68\"" Dec 13 13:28:18.032722 containerd[1498]: time="2024-12-13T13:28:18.032649571Z" level=info msg="StartContainer for \"de8217e1e5e74db1e9c9f86700c3944c19f6a392a689e43bde903a59369e8597\" returns successfully" Dec 13 13:28:18.053059 systemd[1]: Started cri-containerd-c87f7cedc9dcaa828f657b6958d1c615f788ef6dc6730b7d885789c8e2db9b68.scope - libcontainer container c87f7cedc9dcaa828f657b6958d1c615f788ef6dc6730b7d885789c8e2db9b68. 
Dec 13 13:28:18.099135 kubelet[2253]: E1213 13:28:18.099068 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:18.102691 containerd[1498]: time="2024-12-13T13:28:18.102642567Z" level=info msg="StartContainer for \"c87f7cedc9dcaa828f657b6958d1c615f788ef6dc6730b7d885789c8e2db9b68\" returns successfully" Dec 13 13:28:18.105945 kubelet[2253]: E1213 13:28:18.105917 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:18.802458 kubelet[2253]: I1213 13:28:18.802422 2253 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 13:28:18.887800 kubelet[2253]: E1213 13:28:18.887751 2253 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 13:28:19.054468 kubelet[2253]: I1213 13:28:19.054345 2253 apiserver.go:52] "Watching apiserver" Dec 13 13:28:19.059858 kubelet[2253]: I1213 13:28:19.059815 2253 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 13:28:19.082153 kubelet[2253]: I1213 13:28:19.082088 2253 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Dec 13 13:28:19.082153 kubelet[2253]: E1213 13:28:19.082126 2253 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Dec 13 13:28:19.109107 kubelet[2253]: E1213 13:28:19.109064 2253 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 13 13:28:19.109518 kubelet[2253]: E1213 13:28:19.109152 2253 kubelet.go:1915] "Failed creating a 
mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 13 13:28:19.109518 kubelet[2253]: E1213 13:28:19.109060 2253 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 13 13:28:19.109518 kubelet[2253]: E1213 13:28:19.109239 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:19.109518 kubelet[2253]: E1213 13:28:19.109250 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:19.109518 kubelet[2253]: E1213 13:28:19.109312 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:20.110042 kubelet[2253]: E1213 13:28:20.110012 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:20.112277 kubelet[2253]: E1213 13:28:20.112258 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:20.113257 kubelet[2253]: E1213 13:28:20.113240 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:20.626136 systemd[1]: Reloading requested from client PID 2533 ('systemctl') (unit session-9.scope)... 
Dec 13 13:28:20.626150 systemd[1]: Reloading... Dec 13 13:28:20.698928 zram_generator::config[2572]: No configuration found. Dec 13 13:28:20.806630 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:28:20.894745 systemd[1]: Reloading finished in 268 ms. Dec 13 13:28:20.942268 kubelet[2253]: I1213 13:28:20.942216 2253 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:28:20.942310 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:28:20.968991 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 13:28:20.969242 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:28:20.977164 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:28:21.125782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:28:21.130248 (kubelet)[2617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:28:21.171265 kubelet[2617]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:28:21.171265 kubelet[2617]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:28:21.171265 kubelet[2617]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 13:28:21.171670 kubelet[2617]: I1213 13:28:21.171253 2617 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:28:21.176636 kubelet[2617]: I1213 13:28:21.176609 2617 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 13:28:21.176636 kubelet[2617]: I1213 13:28:21.176628 2617 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:28:21.176844 kubelet[2617]: I1213 13:28:21.176811 2617 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 13:28:21.177876 kubelet[2617]: I1213 13:28:21.177823 2617 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 13:28:21.179519 kubelet[2617]: I1213 13:28:21.179462 2617 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:28:21.183597 kubelet[2617]: E1213 13:28:21.183567 2617 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 13:28:21.183597 kubelet[2617]: I1213 13:28:21.183588 2617 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 13:28:21.188344 kubelet[2617]: I1213 13:28:21.188310 2617 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 13:28:21.188466 kubelet[2617]: I1213 13:28:21.188444 2617 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 13:28:21.188615 kubelet[2617]: I1213 13:28:21.188575 2617 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:28:21.188787 kubelet[2617]: I1213 13:28:21.188604 2617 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Dec 13 13:28:21.188787 kubelet[2617]: I1213 13:28:21.188787 2617 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:28:21.188918 kubelet[2617]: I1213 13:28:21.188797 2617 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 13:28:21.188918 kubelet[2617]: I1213 13:28:21.188845 2617 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:28:21.188964 kubelet[2617]: I1213 13:28:21.188951 2617 kubelet.go:408] "Attempting to sync node with API server" Dec 13 13:28:21.188964 kubelet[2617]: I1213 13:28:21.188964 2617 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:28:21.189032 kubelet[2617]: I1213 13:28:21.188997 2617 kubelet.go:314] "Adding apiserver pod source" Dec 13 13:28:21.189032 kubelet[2617]: I1213 13:28:21.189012 2617 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:28:21.189655 kubelet[2617]: I1213 13:28:21.189439 2617 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:28:21.189870 kubelet[2617]: I1213 13:28:21.189847 2617 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:28:21.190282 kubelet[2617]: I1213 13:28:21.190239 2617 server.go:1269] "Started kubelet" Dec 13 13:28:21.192078 kubelet[2617]: I1213 13:28:21.192051 2617 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:28:21.195312 kubelet[2617]: I1213 13:28:21.192757 2617 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:28:21.195312 kubelet[2617]: I1213 13:28:21.193420 2617 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:28:21.195312 kubelet[2617]: I1213 13:28:21.193475 2617 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 
13:28:21.196352 kubelet[2617]: I1213 13:28:21.196336 2617 server.go:460] "Adding debug handlers to kubelet server" Dec 13 13:28:21.197670 kubelet[2617]: I1213 13:28:21.197640 2617 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 13:28:21.200665 kubelet[2617]: I1213 13:28:21.200632 2617 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 13:28:21.200909 kubelet[2617]: E1213 13:28:21.200878 2617 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:28:21.201598 kubelet[2617]: I1213 13:28:21.201578 2617 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:28:21.201678 kubelet[2617]: I1213 13:28:21.201656 2617 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:28:21.202191 kubelet[2617]: I1213 13:28:21.202165 2617 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 13:28:21.202606 kubelet[2617]: I1213 13:28:21.202585 2617 reconciler.go:26] "Reconciler: start to sync state" Dec 13 13:28:21.203733 kubelet[2617]: E1213 13:28:21.203695 2617 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:28:21.205098 kubelet[2617]: I1213 13:28:21.204947 2617 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:28:21.210650 kubelet[2617]: I1213 13:28:21.210579 2617 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:28:21.214489 kubelet[2617]: I1213 13:28:21.214203 2617 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 13:28:21.214489 kubelet[2617]: I1213 13:28:21.214268 2617 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:28:21.214489 kubelet[2617]: I1213 13:28:21.214298 2617 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 13:28:21.214489 kubelet[2617]: E1213 13:28:21.214340 2617 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:28:21.238816 kubelet[2617]: I1213 13:28:21.238781 2617 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:28:21.238816 kubelet[2617]: I1213 13:28:21.238798 2617 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:28:21.238816 kubelet[2617]: I1213 13:28:21.238816 2617 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:28:21.239008 kubelet[2617]: I1213 13:28:21.238987 2617 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 13:28:21.239008 kubelet[2617]: I1213 13:28:21.238998 2617 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 13:28:21.239056 kubelet[2617]: I1213 13:28:21.239017 2617 policy_none.go:49] "None policy: Start" Dec 13 13:28:21.239507 kubelet[2617]: I1213 13:28:21.239487 2617 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:28:21.239558 kubelet[2617]: I1213 13:28:21.239511 2617 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:28:21.239668 kubelet[2617]: I1213 13:28:21.239653 2617 state_mem.go:75] "Updated machine memory state" Dec 13 13:28:21.243813 kubelet[2617]: I1213 13:28:21.243782 2617 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:28:21.244037 kubelet[2617]: I1213 13:28:21.244017 2617 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 13:28:21.244063 kubelet[2617]: I1213 13:28:21.244031 2617 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 13:28:21.244356 kubelet[2617]: I1213 13:28:21.244337 2617 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 13:28:21.319953 kubelet[2617]: E1213 13:28:21.319915 2617 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:28:21.320316 kubelet[2617]: E1213 13:28:21.320292 2617 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Dec 13 13:28:21.320316 kubelet[2617]: E1213 13:28:21.320296 2617 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Dec 13 13:28:21.348252 kubelet[2617]: I1213 13:28:21.348224 2617 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Dec 13 13:28:21.356777 kubelet[2617]: I1213 13:28:21.356092 2617 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Dec 13 13:28:21.356777 kubelet[2617]: I1213 13:28:21.356172 2617 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Dec 13 13:28:21.404235 kubelet[2617]: I1213 13:28:21.404192 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:28:21.404235 kubelet[2617]: I1213 13:28:21.404237 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:28:21.404413 kubelet[2617]: I1213 13:28:21.404270 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/705af2701a90f757438d0bc6ffe927c8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"705af2701a90f757438d0bc6ffe927c8\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 13:28:21.404413 kubelet[2617]: I1213 13:28:21.404292 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:28:21.404413 kubelet[2617]: I1213 13:28:21.404310 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:28:21.404413 kubelet[2617]: I1213 13:28:21.404329 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:28:21.404413 kubelet[2617]: I1213 13:28:21.404347 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost"
Dec 13 13:28:21.404535 kubelet[2617]: I1213 13:28:21.404364 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/705af2701a90f757438d0bc6ffe927c8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"705af2701a90f757438d0bc6ffe927c8\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 13:28:21.404535 kubelet[2617]: I1213 13:28:21.404380 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/705af2701a90f757438d0bc6ffe927c8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"705af2701a90f757438d0bc6ffe927c8\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 13:28:21.621441 kubelet[2617]: E1213 13:28:21.621341 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:21.621441 kubelet[2617]: E1213 13:28:21.621414 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:21.621601 kubelet[2617]: E1213 13:28:21.621512 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:22.189725 kubelet[2617]: I1213 13:28:22.189656 2617 apiserver.go:52] "Watching apiserver"
Dec 13 13:28:22.203423 kubelet[2617]: I1213 13:28:22.203344 2617 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 13 13:28:22.224870 kubelet[2617]: E1213 13:28:22.224425 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:22.224997 kubelet[2617]: E1213 13:28:22.224972 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:22.232594 kubelet[2617]: E1213 13:28:22.232572 2617 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Dec 13 13:28:22.233059 kubelet[2617]: E1213 13:28:22.232885 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:22.251330 kubelet[2617]: I1213 13:28:22.251261 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.251244076 podStartE2EDuration="2.251244076s" podCreationTimestamp="2024-12-13 13:28:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:28:22.242774672 +0000 UTC m=+1.108887393" watchObservedRunningTime="2024-12-13 13:28:22.251244076 +0000 UTC m=+1.117356797"
Dec 13 13:28:22.261265 kubelet[2617]: I1213 13:28:22.261206 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.261188375 podStartE2EDuration="2.261188375s" podCreationTimestamp="2024-12-13 13:28:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:28:22.251530906 +0000 UTC m=+1.117643617" watchObservedRunningTime="2024-12-13 13:28:22.261188375 +0000 UTC m=+1.127301086"
Dec 13 13:28:23.226216 kubelet[2617]: E1213 13:28:23.226185 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:23.995009 kubelet[2617]: E1213 13:28:23.994967 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:24.227535 kubelet[2617]: E1213 13:28:24.227499 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:24.879385 update_engine[1487]: I20241213 13:28:24.879325 1487 update_attempter.cc:509] Updating boot flags...
Dec 13 13:28:24.907883 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2692)
Dec 13 13:28:24.957854 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2693)
Dec 13 13:28:24.986857 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2693)
Dec 13 13:28:25.627719 sudo[1691]: pam_unix(sudo:session): session closed for user root
Dec 13 13:28:25.631120 sshd[1690]: Connection closed by 10.0.0.1 port 48022
Dec 13 13:28:25.631657 sshd-session[1688]: pam_unix(sshd:session): session closed for user core
Dec 13 13:28:25.636453 systemd[1]: sshd@8-10.0.0.95:22-10.0.0.1:48022.service: Deactivated successfully.
Dec 13 13:28:25.639425 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 13:28:25.639668 systemd[1]: session-9.scope: Consumed 3.830s CPU time, 151.5M memory peak, 0B memory swap peak.
Dec 13 13:28:25.641637 systemd-logind[1484]: Session 9 logged out. Waiting for processes to exit.
Dec 13 13:28:25.642565 systemd-logind[1484]: Removed session 9.
Dec 13 13:28:25.670091 kubelet[2617]: I1213 13:28:25.670065 2617 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 13:28:25.670523 kubelet[2617]: I1213 13:28:25.670431 2617 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 13:28:25.670581 containerd[1498]: time="2024-12-13T13:28:25.670279057Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 13:28:26.274574 kubelet[2617]: I1213 13:28:26.274518 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.274501673 podStartE2EDuration="6.274501673s" podCreationTimestamp="2024-12-13 13:28:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:28:22.261422322 +0000 UTC m=+1.127535043" watchObservedRunningTime="2024-12-13 13:28:26.274501673 +0000 UTC m=+5.140614394"
Dec 13 13:28:26.282717 systemd[1]: Created slice kubepods-besteffort-pod06e3c012_027c_4568_be37_29dd5884d4bf.slice - libcontainer container kubepods-besteffort-pod06e3c012_027c_4568_be37_29dd5884d4bf.slice.
Dec 13 13:28:26.336942 kubelet[2617]: I1213 13:28:26.336901 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06e3c012-027c-4568-be37-29dd5884d4bf-xtables-lock\") pod \"kube-proxy-4ts98\" (UID: \"06e3c012-027c-4568-be37-29dd5884d4bf\") " pod="kube-system/kube-proxy-4ts98"
Dec 13 13:28:26.336942 kubelet[2617]: I1213 13:28:26.336932 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06e3c012-027c-4568-be37-29dd5884d4bf-lib-modules\") pod \"kube-proxy-4ts98\" (UID: \"06e3c012-027c-4568-be37-29dd5884d4bf\") " pod="kube-system/kube-proxy-4ts98"
Dec 13 13:28:26.336942 kubelet[2617]: I1213 13:28:26.336951 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw89h\" (UniqueName: \"kubernetes.io/projected/06e3c012-027c-4568-be37-29dd5884d4bf-kube-api-access-xw89h\") pod \"kube-proxy-4ts98\" (UID: \"06e3c012-027c-4568-be37-29dd5884d4bf\") " pod="kube-system/kube-proxy-4ts98"
Dec 13 13:28:26.336942 kubelet[2617]: I1213 13:28:26.336966 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/06e3c012-027c-4568-be37-29dd5884d4bf-kube-proxy\") pod \"kube-proxy-4ts98\" (UID: \"06e3c012-027c-4568-be37-29dd5884d4bf\") " pod="kube-system/kube-proxy-4ts98"
Dec 13 13:28:26.592541 kubelet[2617]: E1213 13:28:26.592430 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:26.593091 containerd[1498]: time="2024-12-13T13:28:26.592812367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4ts98,Uid:06e3c012-027c-4568-be37-29dd5884d4bf,Namespace:kube-system,Attempt:0,}"
Dec 13 13:28:26.623505 containerd[1498]: time="2024-12-13T13:28:26.623402376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:28:26.623505 containerd[1498]: time="2024-12-13T13:28:26.623455688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:28:26.623505 containerd[1498]: time="2024-12-13T13:28:26.623470616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:28:26.623680 containerd[1498]: time="2024-12-13T13:28:26.623617907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:28:26.672076 systemd[1]: Started cri-containerd-809475f8f93774c1c4469348a0284b388885b03f3047cd6e61b6a967ab8be765.scope - libcontainer container 809475f8f93774c1c4469348a0284b388885b03f3047cd6e61b6a967ab8be765.
Dec 13 13:28:26.676431 systemd[1]: Created slice kubepods-besteffort-pod97eb936f_3f56_47d3_85f5_9b5ffb2f54e7.slice - libcontainer container kubepods-besteffort-pod97eb936f_3f56_47d3_85f5_9b5ffb2f54e7.slice.
Dec 13 13:28:26.698707 containerd[1498]: time="2024-12-13T13:28:26.698337769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4ts98,Uid:06e3c012-027c-4568-be37-29dd5884d4bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"809475f8f93774c1c4469348a0284b388885b03f3047cd6e61b6a967ab8be765\""
Dec 13 13:28:26.699113 kubelet[2617]: E1213 13:28:26.698954 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:26.701263 containerd[1498]: time="2024-12-13T13:28:26.701222429Z" level=info msg="CreateContainer within sandbox \"809475f8f93774c1c4469348a0284b388885b03f3047cd6e61b6a967ab8be765\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 13:28:26.717852 containerd[1498]: time="2024-12-13T13:28:26.717797599Z" level=info msg="CreateContainer within sandbox \"809475f8f93774c1c4469348a0284b388885b03f3047cd6e61b6a967ab8be765\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5fe7d1760d27f53729daf7dbe4e9f609cf993abbc782a7d0d06db8e5dd14cbaa\""
Dec 13 13:28:26.718327 containerd[1498]: time="2024-12-13T13:28:26.718306039Z" level=info msg="StartContainer for \"5fe7d1760d27f53729daf7dbe4e9f609cf993abbc782a7d0d06db8e5dd14cbaa\""
Dec 13 13:28:26.740045 kubelet[2617]: I1213 13:28:26.739913 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/97eb936f-3f56-47d3-85f5-9b5ffb2f54e7-var-lib-calico\") pod \"tigera-operator-76c4976dd7-n6hkp\" (UID: \"97eb936f-3f56-47d3-85f5-9b5ffb2f54e7\") " pod="tigera-operator/tigera-operator-76c4976dd7-n6hkp"
Dec 13 13:28:26.740045 kubelet[2617]: I1213 13:28:26.739953 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjwwf\" (UniqueName: \"kubernetes.io/projected/97eb936f-3f56-47d3-85f5-9b5ffb2f54e7-kube-api-access-fjwwf\") pod \"tigera-operator-76c4976dd7-n6hkp\" (UID: \"97eb936f-3f56-47d3-85f5-9b5ffb2f54e7\") " pod="tigera-operator/tigera-operator-76c4976dd7-n6hkp"
Dec 13 13:28:26.750976 systemd[1]: Started cri-containerd-5fe7d1760d27f53729daf7dbe4e9f609cf993abbc782a7d0d06db8e5dd14cbaa.scope - libcontainer container 5fe7d1760d27f53729daf7dbe4e9f609cf993abbc782a7d0d06db8e5dd14cbaa.
Dec 13 13:28:26.783471 containerd[1498]: time="2024-12-13T13:28:26.783330046Z" level=info msg="StartContainer for \"5fe7d1760d27f53729daf7dbe4e9f609cf993abbc782a7d0d06db8e5dd14cbaa\" returns successfully"
Dec 13 13:28:26.980310 containerd[1498]: time="2024-12-13T13:28:26.980265328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-n6hkp,Uid:97eb936f-3f56-47d3-85f5-9b5ffb2f54e7,Namespace:tigera-operator,Attempt:0,}"
Dec 13 13:28:27.007313 containerd[1498]: time="2024-12-13T13:28:27.007015476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:28:27.007313 containerd[1498]: time="2024-12-13T13:28:27.007083696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:28:27.007313 containerd[1498]: time="2024-12-13T13:28:27.007100627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:28:27.007313 containerd[1498]: time="2024-12-13T13:28:27.007188946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:28:27.029077 systemd[1]: Started cri-containerd-900ce0dcd7796cf0be69a3e9111b5cd4e25a290c8e397ab69380de707136e4de.scope - libcontainer container 900ce0dcd7796cf0be69a3e9111b5cd4e25a290c8e397ab69380de707136e4de.
Dec 13 13:28:27.074284 containerd[1498]: time="2024-12-13T13:28:27.074251611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-n6hkp,Uid:97eb936f-3f56-47d3-85f5-9b5ffb2f54e7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"900ce0dcd7796cf0be69a3e9111b5cd4e25a290c8e397ab69380de707136e4de\""
Dec 13 13:28:27.075995 containerd[1498]: time="2024-12-13T13:28:27.075966015Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Dec 13 13:28:27.232682 kubelet[2617]: E1213 13:28:27.232567 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:27.240188 kubelet[2617]: I1213 13:28:27.240138 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4ts98" podStartSLOduration=1.240110268 podStartE2EDuration="1.240110268s" podCreationTimestamp="2024-12-13 13:28:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:28:27.240095701 +0000 UTC m=+6.106208432" watchObservedRunningTime="2024-12-13 13:28:27.240110268 +0000 UTC m=+6.106222989"
Dec 13 13:28:27.450966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2401393265.mount: Deactivated successfully.
Dec 13 13:28:28.216777 kubelet[2617]: E1213 13:28:28.216668 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:28.233839 kubelet[2617]: E1213 13:28:28.233785 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:28.701524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2362326187.mount: Deactivated successfully.
Dec 13 13:28:30.338926 containerd[1498]: time="2024-12-13T13:28:30.338877184Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:30.339780 containerd[1498]: time="2024-12-13T13:28:30.339744853Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764321"
Dec 13 13:28:30.340914 containerd[1498]: time="2024-12-13T13:28:30.340885990Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:30.343201 containerd[1498]: time="2024-12-13T13:28:30.343156523Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:30.343727 containerd[1498]: time="2024-12-13T13:28:30.343700917Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.267609723s"
Dec 13 13:28:30.343767 containerd[1498]: time="2024-12-13T13:28:30.343726145Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Dec 13 13:28:30.345333 containerd[1498]: time="2024-12-13T13:28:30.345309071Z" level=info msg="CreateContainer within sandbox \"900ce0dcd7796cf0be69a3e9111b5cd4e25a290c8e397ab69380de707136e4de\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Dec 13 13:28:30.357158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4011340887.mount: Deactivated successfully.
Dec 13 13:28:30.357479 containerd[1498]: time="2024-12-13T13:28:30.357373978Z" level=info msg="CreateContainer within sandbox \"900ce0dcd7796cf0be69a3e9111b5cd4e25a290c8e397ab69380de707136e4de\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9e0c64d51afbf02556fe593bbb1774e13f9f68404bd673bf9dedf5da2f198212\""
Dec 13 13:28:30.358022 containerd[1498]: time="2024-12-13T13:28:30.357999667Z" level=info msg="StartContainer for \"9e0c64d51afbf02556fe593bbb1774e13f9f68404bd673bf9dedf5da2f198212\""
Dec 13 13:28:30.389002 systemd[1]: Started cri-containerd-9e0c64d51afbf02556fe593bbb1774e13f9f68404bd673bf9dedf5da2f198212.scope - libcontainer container 9e0c64d51afbf02556fe593bbb1774e13f9f68404bd673bf9dedf5da2f198212.
Dec 13 13:28:30.414374 containerd[1498]: time="2024-12-13T13:28:30.414320928Z" level=info msg="StartContainer for \"9e0c64d51afbf02556fe593bbb1774e13f9f68404bd673bf9dedf5da2f198212\" returns successfully"
Dec 13 13:28:32.427256 kubelet[2617]: E1213 13:28:32.427223 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:32.433654 kubelet[2617]: I1213 13:28:32.433575 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-n6hkp" podStartSLOduration=3.164779995 podStartE2EDuration="6.433556747s" podCreationTimestamp="2024-12-13 13:28:26 +0000 UTC" firstStartedPulling="2024-12-13 13:28:27.075528803 +0000 UTC m=+5.941641524" lastFinishedPulling="2024-12-13 13:28:30.344305555 +0000 UTC m=+9.210418276" observedRunningTime="2024-12-13 13:28:31.245156191 +0000 UTC m=+10.111268912" watchObservedRunningTime="2024-12-13 13:28:32.433556747 +0000 UTC m=+11.299669478"
Dec 13 13:28:33.402985 systemd[1]: Created slice kubepods-besteffort-pod963595c3_0a1f_4e97_8436_f358ae5e71ad.slice - libcontainer container kubepods-besteffort-pod963595c3_0a1f_4e97_8436_f358ae5e71ad.slice.
Dec 13 13:28:33.413374 systemd[1]: Created slice kubepods-besteffort-pod1cbb2ae1_6ce0_43b2_b0e9_fc5f38b1b296.slice - libcontainer container kubepods-besteffort-pod1cbb2ae1_6ce0_43b2_b0e9_fc5f38b1b296.slice.
Dec 13 13:28:33.419252 kubelet[2617]: E1213 13:28:33.418110 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sqmg9" podUID="169b145c-9dd2-4ef7-8f30-2acc264f69a4"
Dec 13 13:28:33.586129 kubelet[2617]: I1213 13:28:33.586078 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/963595c3-0a1f-4e97-8436-f358ae5e71ad-tigera-ca-bundle\") pod \"calico-typha-64b458cd4b-5bdw8\" (UID: \"963595c3-0a1f-4e97-8436-f358ae5e71ad\") " pod="calico-system/calico-typha-64b458cd4b-5bdw8"
Dec 13 13:28:33.586129 kubelet[2617]: I1213 13:28:33.586127 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1cbb2ae1-6ce0-43b2-b0e9-fc5f38b1b296-var-run-calico\") pod \"calico-node-ddgwb\" (UID: \"1cbb2ae1-6ce0-43b2-b0e9-fc5f38b1b296\") " pod="calico-system/calico-node-ddgwb"
Dec 13 13:28:33.586129 kubelet[2617]: I1213 13:28:33.586146 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1cbb2ae1-6ce0-43b2-b0e9-fc5f38b1b296-cni-net-dir\") pod \"calico-node-ddgwb\" (UID: \"1cbb2ae1-6ce0-43b2-b0e9-fc5f38b1b296\") " pod="calico-system/calico-node-ddgwb"
Dec 13 13:28:33.586653 kubelet[2617]: I1213 13:28:33.586165 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1cbb2ae1-6ce0-43b2-b0e9-fc5f38b1b296-cni-bin-dir\") pod \"calico-node-ddgwb\" (UID: \"1cbb2ae1-6ce0-43b2-b0e9-fc5f38b1b296\") " pod="calico-system/calico-node-ddgwb"
Dec 13 13:28:33.586653 kubelet[2617]: I1213 13:28:33.586189 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26v84\" (UniqueName: \"kubernetes.io/projected/169b145c-9dd2-4ef7-8f30-2acc264f69a4-kube-api-access-26v84\") pod \"csi-node-driver-sqmg9\" (UID: \"169b145c-9dd2-4ef7-8f30-2acc264f69a4\") " pod="calico-system/csi-node-driver-sqmg9"
Dec 13 13:28:33.586653 kubelet[2617]: I1213 13:28:33.586211 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1cbb2ae1-6ce0-43b2-b0e9-fc5f38b1b296-node-certs\") pod \"calico-node-ddgwb\" (UID: \"1cbb2ae1-6ce0-43b2-b0e9-fc5f38b1b296\") " pod="calico-system/calico-node-ddgwb"
Dec 13 13:28:33.586653 kubelet[2617]: I1213 13:28:33.586233 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1cbb2ae1-6ce0-43b2-b0e9-fc5f38b1b296-var-lib-calico\") pod \"calico-node-ddgwb\" (UID: \"1cbb2ae1-6ce0-43b2-b0e9-fc5f38b1b296\") " pod="calico-system/calico-node-ddgwb"
Dec 13 13:28:33.586653 kubelet[2617]: I1213 13:28:33.586256 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1cbb2ae1-6ce0-43b2-b0e9-fc5f38b1b296-lib-modules\") pod \"calico-node-ddgwb\" (UID: \"1cbb2ae1-6ce0-43b2-b0e9-fc5f38b1b296\") " pod="calico-system/calico-node-ddgwb"
Dec 13 13:28:33.586771 kubelet[2617]: I1213 13:28:33.586274 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1cbb2ae1-6ce0-43b2-b0e9-fc5f38b1b296-policysync\") pod \"calico-node-ddgwb\" (UID: \"1cbb2ae1-6ce0-43b2-b0e9-fc5f38b1b296\") " pod="calico-system/calico-node-ddgwb"
Dec 13 13:28:33.586771 kubelet[2617]: I1213 13:28:33.586294 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/169b145c-9dd2-4ef7-8f30-2acc264f69a4-socket-dir\") pod \"csi-node-driver-sqmg9\" (UID: \"169b145c-9dd2-4ef7-8f30-2acc264f69a4\") " pod="calico-system/csi-node-driver-sqmg9"
Dec 13 13:28:33.586771 kubelet[2617]: I1213 13:28:33.586315 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1cbb2ae1-6ce0-43b2-b0e9-fc5f38b1b296-cni-log-dir\") pod \"calico-node-ddgwb\" (UID: \"1cbb2ae1-6ce0-43b2-b0e9-fc5f38b1b296\") " pod="calico-system/calico-node-ddgwb"
Dec 13 13:28:33.586771 kubelet[2617]: I1213 13:28:33.586362 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1cbb2ae1-6ce0-43b2-b0e9-fc5f38b1b296-flexvol-driver-host\") pod \"calico-node-ddgwb\" (UID: \"1cbb2ae1-6ce0-43b2-b0e9-fc5f38b1b296\") " pod="calico-system/calico-node-ddgwb"
Dec 13 13:28:33.586771 kubelet[2617]: I1213 13:28:33.586415 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/169b145c-9dd2-4ef7-8f30-2acc264f69a4-kubelet-dir\") pod \"csi-node-driver-sqmg9\" (UID: \"169b145c-9dd2-4ef7-8f30-2acc264f69a4\") " pod="calico-system/csi-node-driver-sqmg9"
Dec 13 13:28:33.586922 kubelet[2617]: I1213 13:28:33.586436 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/963595c3-0a1f-4e97-8436-f358ae5e71ad-typha-certs\") pod \"calico-typha-64b458cd4b-5bdw8\" (UID: \"963595c3-0a1f-4e97-8436-f358ae5e71ad\") " pod="calico-system/calico-typha-64b458cd4b-5bdw8"
Dec 13 13:28:33.586922 kubelet[2617]: I1213 13:28:33.586451 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72qdm\" (UniqueName: \"kubernetes.io/projected/963595c3-0a1f-4e97-8436-f358ae5e71ad-kube-api-access-72qdm\") pod \"calico-typha-64b458cd4b-5bdw8\" (UID: \"963595c3-0a1f-4e97-8436-f358ae5e71ad\") " pod="calico-system/calico-typha-64b458cd4b-5bdw8"
Dec 13 13:28:33.586922 kubelet[2617]: I1213 13:28:33.586465 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cbb2ae1-6ce0-43b2-b0e9-fc5f38b1b296-tigera-ca-bundle\") pod \"calico-node-ddgwb\" (UID: \"1cbb2ae1-6ce0-43b2-b0e9-fc5f38b1b296\") " pod="calico-system/calico-node-ddgwb"
Dec 13 13:28:33.586922 kubelet[2617]: I1213 13:28:33.586478 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/169b145c-9dd2-4ef7-8f30-2acc264f69a4-registration-dir\") pod \"csi-node-driver-sqmg9\" (UID: \"169b145c-9dd2-4ef7-8f30-2acc264f69a4\") " pod="calico-system/csi-node-driver-sqmg9"
Dec 13 13:28:33.586922 kubelet[2617]: I1213 13:28:33.586494 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1cbb2ae1-6ce0-43b2-b0e9-fc5f38b1b296-xtables-lock\") pod \"calico-node-ddgwb\" (UID: \"1cbb2ae1-6ce0-43b2-b0e9-fc5f38b1b296\") " pod="calico-system/calico-node-ddgwb"
Dec 13 13:28:33.587038 kubelet[2617]: I1213 13:28:33.586529 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zfkg\" (UniqueName: \"kubernetes.io/projected/1cbb2ae1-6ce0-43b2-b0e9-fc5f38b1b296-kube-api-access-5zfkg\") pod \"calico-node-ddgwb\" (UID: \"1cbb2ae1-6ce0-43b2-b0e9-fc5f38b1b296\") " pod="calico-system/calico-node-ddgwb"
Dec 13 13:28:33.587038 kubelet[2617]: I1213 13:28:33.586572 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/169b145c-9dd2-4ef7-8f30-2acc264f69a4-varrun\") pod \"csi-node-driver-sqmg9\" (UID: \"169b145c-9dd2-4ef7-8f30-2acc264f69a4\") " pod="calico-system/csi-node-driver-sqmg9"
Dec 13 13:28:33.689917 kubelet[2617]: E1213 13:28:33.688118 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:33.689917 kubelet[2617]: W1213 13:28:33.688138 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:33.689917 kubelet[2617]: E1213 13:28:33.688169 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:33.689917 kubelet[2617]: E1213 13:28:33.688384 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:33.689917 kubelet[2617]: W1213 13:28:33.688392 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:33.689917 kubelet[2617]: E1213 13:28:33.688410 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:33.689917 kubelet[2617]: E1213 13:28:33.688598 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:33.689917 kubelet[2617]: W1213 13:28:33.688605 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:33.689917 kubelet[2617]: E1213 13:28:33.688633 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:33.689917 kubelet[2617]: E1213 13:28:33.688860 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:33.690321 kubelet[2617]: W1213 13:28:33.688868 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:33.690321 kubelet[2617]: E1213 13:28:33.688878 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:33.690321 kubelet[2617]: E1213 13:28:33.689060 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:33.690321 kubelet[2617]: W1213 13:28:33.689070 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:33.690321 kubelet[2617]: E1213 13:28:33.689085 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:33.690321 kubelet[2617]: E1213 13:28:33.689280 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:33.690321 kubelet[2617]: W1213 13:28:33.689290 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:33.690321 kubelet[2617]: E1213 13:28:33.689371 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:33.690321 kubelet[2617]: E1213 13:28:33.689708 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:33.690321 kubelet[2617]: W1213 13:28:33.689716 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:33.690956 kubelet[2617]: E1213 13:28:33.689812 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:33.691315 kubelet[2617]: E1213 13:28:33.691301 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:33.691315 kubelet[2617]: W1213 13:28:33.691311 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:33.691631 kubelet[2617]: E1213 13:28:33.691606 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:33.695874 kubelet[2617]: E1213 13:28:33.691917 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:33.695874 kubelet[2617]: W1213 13:28:33.691933 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:33.695874 kubelet[2617]: E1213 13:28:33.691974 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:33.695874 kubelet[2617]: E1213 13:28:33.692296 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:33.695874 kubelet[2617]: W1213 13:28:33.692304 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:33.695874 kubelet[2617]: E1213 13:28:33.692424 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:33.695874 kubelet[2617]: E1213 13:28:33.692502 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:33.695874 kubelet[2617]: W1213 13:28:33.692509 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:33.695874 kubelet[2617]: E1213 13:28:33.692699 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:33.695874 kubelet[2617]: W1213 13:28:33.692706 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:33.695874 kubelet[2617]: E1213 13:28:33.692878 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:33.696298 kubelet[2617]: W1213 13:28:33.692885 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:33.696298 kubelet[2617]: E1213 13:28:33.693035 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:33.696298 kubelet[2617]: E1213 13:28:33.693045 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:33.696298 kubelet[2617]: E1213 13:28:33.693060 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:33.696298 kubelet[2617]: E1213 13:28:33.693124 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:33.696298 kubelet[2617]: W1213 13:28:33.693130 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:33.696298 kubelet[2617]: E1213 13:28:33.693182 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:33.696298 kubelet[2617]: E1213 13:28:33.693543 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:33.696298 kubelet[2617]: W1213 13:28:33.693550 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:33.696298 kubelet[2617]: E1213 13:28:33.693620 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:33.696618 kubelet[2617]: E1213 13:28:33.693730 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:33.696618 kubelet[2617]: W1213 13:28:33.693737 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:33.696618 kubelet[2617]: E1213 13:28:33.693769 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:33.696618 kubelet[2617]: E1213 13:28:33.693928 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:33.696618 kubelet[2617]: W1213 13:28:33.693935 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:33.696618 kubelet[2617]: E1213 13:28:33.693946 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:33.696618 kubelet[2617]: E1213 13:28:33.696488 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:33.696618 kubelet[2617]: W1213 13:28:33.696498 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:33.696618 kubelet[2617]: E1213 13:28:33.696593 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:33.696947 kubelet[2617]: E1213 13:28:33.696730 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:33.696947 kubelet[2617]: W1213 13:28:33.696738 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:33.696947 kubelet[2617]: E1213 13:28:33.696802 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:33.697269 kubelet[2617]: E1213 13:28:33.697244 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:33.697269 kubelet[2617]: W1213 13:28:33.697253 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:33.697349 kubelet[2617]: E1213 13:28:33.697320 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:33.697483 kubelet[2617]: E1213 13:28:33.697469 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:33.697483 kubelet[2617]: W1213 13:28:33.697480 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:33.697600 kubelet[2617]: E1213 13:28:33.697555 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:33.697687 kubelet[2617]: E1213 13:28:33.697677 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:33.697687 kubelet[2617]: W1213 13:28:33.697685 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:33.697821 kubelet[2617]: E1213 13:28:33.697796 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:33.697974 kubelet[2617]: E1213 13:28:33.697961 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:33.697974 kubelet[2617]: W1213 13:28:33.697971 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:33.698086 kubelet[2617]: E1213 13:28:33.698059 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:33.698222 kubelet[2617]: E1213 13:28:33.698210 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:33.698222 kubelet[2617]: W1213 13:28:33.698219 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:33.698310 kubelet[2617]: E1213 13:28:33.698291 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:33.698437 kubelet[2617]: E1213 13:28:33.698424 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:33.698437 kubelet[2617]: W1213 13:28:33.698434 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:33.698561 kubelet[2617]: E1213 13:28:33.698518 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:33.700054 kubelet[2617]: E1213 13:28:33.699669 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:33.700054 kubelet[2617]: W1213 13:28:33.699679 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:33.700054 kubelet[2617]: E1213 13:28:33.699910 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:33.700054 kubelet[2617]: W1213 13:28:33.699921 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:33.700194 kubelet[2617]: E1213 13:28:33.700178 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:33.700848 kubelet[2617]: E1213 13:28:33.700820 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:33.702736 kubelet[2617]: E1213 13:28:33.701981 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:33.702736 kubelet[2617]: W1213 13:28:33.701995 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:33.702736 kubelet[2617]: E1213 13:28:33.702007 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:33.707636 kubelet[2617]: E1213 13:28:33.707604 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:33.708122 containerd[1498]: time="2024-12-13T13:28:33.708071050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64b458cd4b-5bdw8,Uid:963595c3-0a1f-4e97-8436-f358ae5e71ad,Namespace:calico-system,Attempt:0,}" Dec 13 13:28:33.709861 kubelet[2617]: E1213 13:28:33.708615 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:33.709861 kubelet[2617]: W1213 13:28:33.708636 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:33.709861 kubelet[2617]: E1213 13:28:33.708648 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:33.716983 kubelet[2617]: E1213 13:28:33.716950 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:33.717435 containerd[1498]: time="2024-12-13T13:28:33.717393249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ddgwb,Uid:1cbb2ae1-6ce0-43b2-b0e9-fc5f38b1b296,Namespace:calico-system,Attempt:0,}" Dec 13 13:28:33.740562 containerd[1498]: time="2024-12-13T13:28:33.734358744Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:28:33.740562 containerd[1498]: time="2024-12-13T13:28:33.735032631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:28:33.740562 containerd[1498]: time="2024-12-13T13:28:33.735048661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:33.740562 containerd[1498]: time="2024-12-13T13:28:33.735176002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:33.746080 containerd[1498]: time="2024-12-13T13:28:33.745730486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:28:33.746080 containerd[1498]: time="2024-12-13T13:28:33.745854501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:28:33.746080 containerd[1498]: time="2024-12-13T13:28:33.745890499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:33.746080 containerd[1498]: time="2024-12-13T13:28:33.745971333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:33.758494 systemd[1]: Started cri-containerd-8d48b9c6d4bf15245028b121b180275d9d8420c37a1cc1deda6e6ce0da6072da.scope - libcontainer container 8d48b9c6d4bf15245028b121b180275d9d8420c37a1cc1deda6e6ce0da6072da. Dec 13 13:28:33.763233 systemd[1]: Started cri-containerd-11bafe74ba6b099545aac045c8d61bfb9b91027076143bcd494983fdb736a4d2.scope - libcontainer container 11bafe74ba6b099545aac045c8d61bfb9b91027076143bcd494983fdb736a4d2. Dec 13 13:28:33.788483 containerd[1498]: time="2024-12-13T13:28:33.788442979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ddgwb,Uid:1cbb2ae1-6ce0-43b2-b0e9-fc5f38b1b296,Namespace:calico-system,Attempt:0,} returns sandbox id \"11bafe74ba6b099545aac045c8d61bfb9b91027076143bcd494983fdb736a4d2\"" Dec 13 13:28:33.789089 kubelet[2617]: E1213 13:28:33.789069 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:33.791151 containerd[1498]: time="2024-12-13T13:28:33.791083594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 13:28:33.806491 containerd[1498]: time="2024-12-13T13:28:33.806454827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64b458cd4b-5bdw8,Uid:963595c3-0a1f-4e97-8436-f358ae5e71ad,Namespace:calico-system,Attempt:0,} returns sandbox id \"8d48b9c6d4bf15245028b121b180275d9d8420c37a1cc1deda6e6ce0da6072da\"" Dec 13 13:28:33.807031 kubelet[2617]: E1213 13:28:33.806995 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Dec 13 13:28:33.999241 kubelet[2617]: E1213 13:28:33.999214 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:34.089970 kubelet[2617]: E1213 13:28:34.089944 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.089970 kubelet[2617]: W1213 13:28:34.089966 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.090145 kubelet[2617]: E1213 13:28:34.089985 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:34.090244 kubelet[2617]: E1213 13:28:34.090191 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.090244 kubelet[2617]: W1213 13:28:34.090199 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.090244 kubelet[2617]: E1213 13:28:34.090209 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:34.090411 kubelet[2617]: E1213 13:28:34.090396 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.090411 kubelet[2617]: W1213 13:28:34.090407 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.090471 kubelet[2617]: E1213 13:28:34.090417 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:34.090634 kubelet[2617]: E1213 13:28:34.090618 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.090634 kubelet[2617]: W1213 13:28:34.090630 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.090741 kubelet[2617]: E1213 13:28:34.090639 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:34.090869 kubelet[2617]: E1213 13:28:34.090854 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.090869 kubelet[2617]: W1213 13:28:34.090866 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.090967 kubelet[2617]: E1213 13:28:34.090875 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:34.091075 kubelet[2617]: E1213 13:28:34.091060 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.091075 kubelet[2617]: W1213 13:28:34.091070 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.091145 kubelet[2617]: E1213 13:28:34.091080 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:34.091272 kubelet[2617]: E1213 13:28:34.091259 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.091272 kubelet[2617]: W1213 13:28:34.091270 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.091368 kubelet[2617]: E1213 13:28:34.091279 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:34.091476 kubelet[2617]: E1213 13:28:34.091461 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.091476 kubelet[2617]: W1213 13:28:34.091473 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.091545 kubelet[2617]: E1213 13:28:34.091482 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:34.091697 kubelet[2617]: E1213 13:28:34.091683 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.091697 kubelet[2617]: W1213 13:28:34.091694 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.091775 kubelet[2617]: E1213 13:28:34.091703 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:34.091910 kubelet[2617]: E1213 13:28:34.091895 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.091910 kubelet[2617]: W1213 13:28:34.091907 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.091989 kubelet[2617]: E1213 13:28:34.091916 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:34.092123 kubelet[2617]: E1213 13:28:34.092108 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.092123 kubelet[2617]: W1213 13:28:34.092119 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.092207 kubelet[2617]: E1213 13:28:34.092130 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:34.092325 kubelet[2617]: E1213 13:28:34.092312 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.092325 kubelet[2617]: W1213 13:28:34.092322 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.092392 kubelet[2617]: E1213 13:28:34.092332 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:34.092531 kubelet[2617]: E1213 13:28:34.092517 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.092531 kubelet[2617]: W1213 13:28:34.092527 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.092617 kubelet[2617]: E1213 13:28:34.092537 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:34.092742 kubelet[2617]: E1213 13:28:34.092728 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.092742 kubelet[2617]: W1213 13:28:34.092739 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.092809 kubelet[2617]: E1213 13:28:34.092749 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Dec 13 13:28:34.092958 kubelet[2617]: E1213 13:28:34.092944 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:34.092958 kubelet[2617]: W1213 13:28:34.092955 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:34.093044 kubelet[2617]: E1213 13:28:34.092964 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:34.093162 kubelet[2617]: E1213 13:28:34.093148 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:34.093162 kubelet[2617]: W1213 13:28:34.093159 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:34.093235 kubelet[2617]: E1213 13:28:34.093169 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:34.093364 kubelet[2617]: E1213 13:28:34.093350 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:34.093364 kubelet[2617]: W1213 13:28:34.093361 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:34.093444 kubelet[2617]: E1213 13:28:34.093370 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:34.093560 kubelet[2617]: E1213 13:28:34.093547 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:34.093560 kubelet[2617]: W1213 13:28:34.093557 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:34.093639 kubelet[2617]: E1213 13:28:34.093567 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:34.093762 kubelet[2617]: E1213 13:28:34.093748 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:34.093762 kubelet[2617]: W1213 13:28:34.093761 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:34.093871 kubelet[2617]: E1213 13:28:34.093771 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:34.093999 kubelet[2617]: E1213 13:28:34.093981 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:34.093999 kubelet[2617]: W1213 13:28:34.093992 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:34.094076 kubelet[2617]: E1213 13:28:34.094001 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:34.094201 kubelet[2617]: E1213 13:28:34.094184 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:34.094201 kubelet[2617]: W1213 13:28:34.094194 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:34.094345 kubelet[2617]: E1213 13:28:34.094205 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:34.094381 kubelet[2617]: E1213 13:28:34.094372 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:34.094413 kubelet[2617]: W1213 13:28:34.094380 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:34.094413 kubelet[2617]: E1213 13:28:34.094390 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:34.094581 kubelet[2617]: E1213 13:28:34.094561 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:34.094581 kubelet[2617]: W1213 13:28:34.094571 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:34.094678 kubelet[2617]: E1213 13:28:34.094581 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:34.094787 kubelet[2617]: E1213 13:28:34.094767 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:34.094787 kubelet[2617]: W1213 13:28:34.094778 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:34.094881 kubelet[2617]: E1213 13:28:34.094787 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:34.095005 kubelet[2617]: E1213 13:28:34.094985 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:34.095005 kubelet[2617]: W1213 13:28:34.094996 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:34.095078 kubelet[2617]: E1213 13:28:34.095006 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:35.056310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3120784102.mount: Deactivated successfully.
Dec 13 13:28:35.130126 containerd[1498]: time="2024-12-13T13:28:35.130072799Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:35.130800 containerd[1498]: time="2024-12-13T13:28:35.130746614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Dec 13 13:28:35.132678 containerd[1498]: time="2024-12-13T13:28:35.132631031Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:35.175184 containerd[1498]: time="2024-12-13T13:28:35.175097707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:35.176294 containerd[1498]: time="2024-12-13T13:28:35.176216555Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.38509526s"
Dec 13 13:28:35.176294 containerd[1498]: time="2024-12-13T13:28:35.176258424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Dec 13 13:28:35.177572 containerd[1498]: time="2024-12-13T13:28:35.177542054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Dec 13 13:28:35.178633 containerd[1498]: time="2024-12-13T13:28:35.178598493Z" level=info msg="CreateContainer within sandbox \"11bafe74ba6b099545aac045c8d61bfb9b91027076143bcd494983fdb736a4d2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Dec 13 13:28:35.215320 kubelet[2617]: E1213 13:28:35.215265 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sqmg9" podUID="169b145c-9dd2-4ef7-8f30-2acc264f69a4"
Dec 13 13:28:35.483795 containerd[1498]: time="2024-12-13T13:28:35.483757007Z" level=info msg="CreateContainer within sandbox \"11bafe74ba6b099545aac045c8d61bfb9b91027076143bcd494983fdb736a4d2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5889437f1d2889b047d8160df26be2a42af99a991a71bd93ddefec78858ac1aa\""
Dec 13 13:28:35.484197 containerd[1498]: time="2024-12-13T13:28:35.484161041Z" level=info msg="StartContainer for \"5889437f1d2889b047d8160df26be2a42af99a991a71bd93ddefec78858ac1aa\""
Dec 13 13:28:35.508971 systemd[1]: Started cri-containerd-5889437f1d2889b047d8160df26be2a42af99a991a71bd93ddefec78858ac1aa.scope - libcontainer container 5889437f1d2889b047d8160df26be2a42af99a991a71bd93ddefec78858ac1aa.
Dec 13 13:28:35.541353 containerd[1498]: time="2024-12-13T13:28:35.541310401Z" level=info msg="StartContainer for \"5889437f1d2889b047d8160df26be2a42af99a991a71bd93ddefec78858ac1aa\" returns successfully"
Dec 13 13:28:35.552903 systemd[1]: cri-containerd-5889437f1d2889b047d8160df26be2a42af99a991a71bd93ddefec78858ac1aa.scope: Deactivated successfully.
Dec 13 13:28:35.593303 containerd[1498]: time="2024-12-13T13:28:35.593247825Z" level=info msg="shim disconnected" id=5889437f1d2889b047d8160df26be2a42af99a991a71bd93ddefec78858ac1aa namespace=k8s.io
Dec 13 13:28:35.593303 containerd[1498]: time="2024-12-13T13:28:35.593299493Z" level=warning msg="cleaning up after shim disconnected" id=5889437f1d2889b047d8160df26be2a42af99a991a71bd93ddefec78858ac1aa namespace=k8s.io
Dec 13 13:28:35.593303 containerd[1498]: time="2024-12-13T13:28:35.593310854Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:28:36.248668 kubelet[2617]: E1213 13:28:36.248620 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:37.098790 containerd[1498]: time="2024-12-13T13:28:37.098744530Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:37.099500 containerd[1498]: time="2024-12-13T13:28:37.099464712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141"
Dec 13 13:28:37.100594 containerd[1498]: time="2024-12-13T13:28:37.100560213Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:37.102445 containerd[1498]: time="2024-12-13T13:28:37.102411804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:37.103003 containerd[1498]: time="2024-12-13T13:28:37.102973556Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.925282319s"
Dec 13 13:28:37.103059 containerd[1498]: time="2024-12-13T13:28:37.102997061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Dec 13 13:28:37.103842 containerd[1498]: time="2024-12-13T13:28:37.103808115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Dec 13 13:28:37.112767 containerd[1498]: time="2024-12-13T13:28:37.112712041Z" level=info msg="CreateContainer within sandbox \"8d48b9c6d4bf15245028b121b180275d9d8420c37a1cc1deda6e6ce0da6072da\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 13 13:28:37.129270 containerd[1498]: time="2024-12-13T13:28:37.129229302Z" level=info msg="CreateContainer within sandbox \"8d48b9c6d4bf15245028b121b180275d9d8420c37a1cc1deda6e6ce0da6072da\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9cc849b0ef96cb2f1112aad3ab37283f9d3ebd5afd30d392375dc70f932685e1\""
Dec 13 13:28:37.130312 containerd[1498]: time="2024-12-13T13:28:37.129673021Z" level=info msg="StartContainer for \"9cc849b0ef96cb2f1112aad3ab37283f9d3ebd5afd30d392375dc70f932685e1\""
Dec 13 13:28:37.159066 systemd[1]: Started cri-containerd-9cc849b0ef96cb2f1112aad3ab37283f9d3ebd5afd30d392375dc70f932685e1.scope - libcontainer container 9cc849b0ef96cb2f1112aad3ab37283f9d3ebd5afd30d392375dc70f932685e1.
Dec 13 13:28:37.198665 containerd[1498]: time="2024-12-13T13:28:37.198616913Z" level=info msg="StartContainer for \"9cc849b0ef96cb2f1112aad3ab37283f9d3ebd5afd30d392375dc70f932685e1\" returns successfully"
Dec 13 13:28:37.215524 kubelet[2617]: E1213 13:28:37.215487 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sqmg9" podUID="169b145c-9dd2-4ef7-8f30-2acc264f69a4"
Dec 13 13:28:37.252795 kubelet[2617]: E1213 13:28:37.252118 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:38.252979 kubelet[2617]: I1213 13:28:38.252948 2617 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 13:28:38.253453 kubelet[2617]: E1213 13:28:38.253237 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:39.215646 kubelet[2617]: E1213 13:28:39.215595 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sqmg9" podUID="169b145c-9dd2-4ef7-8f30-2acc264f69a4"
Dec 13 13:28:41.215077 kubelet[2617]: E1213 13:28:41.215029 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sqmg9" podUID="169b145c-9dd2-4ef7-8f30-2acc264f69a4"
Dec 13 13:28:41.496928 containerd[1498]: time="2024-12-13T13:28:41.496791238Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:41.497555 containerd[1498]: time="2024-12-13T13:28:41.497503943Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Dec 13 13:28:41.498575 containerd[1498]: time="2024-12-13T13:28:41.498544426Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:41.500531 containerd[1498]: time="2024-12-13T13:28:41.500501210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:41.501187 containerd[1498]: time="2024-12-13T13:28:41.501142440Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.397293158s"
Dec 13 13:28:41.501187 containerd[1498]: time="2024-12-13T13:28:41.501183678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Dec 13 13:28:41.503132 containerd[1498]: time="2024-12-13T13:28:41.503107148Z" level=info msg="CreateContainer within sandbox \"11bafe74ba6b099545aac045c8d61bfb9b91027076143bcd494983fdb736a4d2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Dec 13 13:28:41.517077 containerd[1498]: time="2024-12-13T13:28:41.517039855Z" level=info msg="CreateContainer within sandbox \"11bafe74ba6b099545aac045c8d61bfb9b91027076143bcd494983fdb736a4d2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f46f734610c97c86aad159ff1a9988e08f3dc65e64add900d2ed9ff6fd9a45db\""
Dec 13 13:28:41.517574 containerd[1498]: time="2024-12-13T13:28:41.517539007Z" level=info msg="StartContainer for \"f46f734610c97c86aad159ff1a9988e08f3dc65e64add900d2ed9ff6fd9a45db\""
Dec 13 13:28:41.551003 systemd[1]: run-containerd-runc-k8s.io-f46f734610c97c86aad159ff1a9988e08f3dc65e64add900d2ed9ff6fd9a45db-runc.4ftGMr.mount: Deactivated successfully.
Dec 13 13:28:41.560960 systemd[1]: Started cri-containerd-f46f734610c97c86aad159ff1a9988e08f3dc65e64add900d2ed9ff6fd9a45db.scope - libcontainer container f46f734610c97c86aad159ff1a9988e08f3dc65e64add900d2ed9ff6fd9a45db.
Dec 13 13:28:41.590045 containerd[1498]: time="2024-12-13T13:28:41.589993798Z" level=info msg="StartContainer for \"f46f734610c97c86aad159ff1a9988e08f3dc65e64add900d2ed9ff6fd9a45db\" returns successfully"
Dec 13 13:28:42.488243 kubelet[2617]: E1213 13:28:42.488175 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sqmg9" podUID="169b145c-9dd2-4ef7-8f30-2acc264f69a4"
Dec 13 13:28:42.494402 kubelet[2617]: E1213 13:28:42.494357 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:42.518692 kubelet[2617]: I1213 13:28:42.518630 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-64b458cd4b-5bdw8" podStartSLOduration=6.2228538 podStartE2EDuration="9.518612636s" podCreationTimestamp="2024-12-13 13:28:33 +0000 UTC" firstStartedPulling="2024-12-13 13:28:33.807937797 +0000 UTC m=+12.674050518" lastFinishedPulling="2024-12-13 13:28:37.103696643 +0000 UTC m=+15.969809354" observedRunningTime="2024-12-13 13:28:37.265754399 +0000 UTC m=+16.131867150" watchObservedRunningTime="2024-12-13 13:28:42.518612636 +0000 UTC m=+21.384725357"
Dec 13 13:28:42.710596 containerd[1498]: time="2024-12-13T13:28:42.710553736Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 13:28:42.714855 systemd[1]: cri-containerd-f46f734610c97c86aad159ff1a9988e08f3dc65e64add900d2ed9ff6fd9a45db.scope: Deactivated successfully.
Dec 13 13:28:42.736041 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f46f734610c97c86aad159ff1a9988e08f3dc65e64add900d2ed9ff6fd9a45db-rootfs.mount: Deactivated successfully.
Dec 13 13:28:42.792060 kubelet[2617]: I1213 13:28:42.791960 2617 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Dec 13 13:28:42.916385 systemd[1]: Created slice kubepods-burstable-poda034a49a_558f_4b5e_a56a_f248911e85ff.slice - libcontainer container kubepods-burstable-poda034a49a_558f_4b5e_a56a_f248911e85ff.slice.
Dec 13 13:28:42.924427 systemd[1]: Created slice kubepods-besteffort-pod4fbe5ee0_22f5_48d1_ae2e_a2288a7c8d4e.slice - libcontainer container kubepods-besteffort-pod4fbe5ee0_22f5_48d1_ae2e_a2288a7c8d4e.slice.
Dec 13 13:28:42.928564 systemd[1]: Created slice kubepods-burstable-podcf27bd25_cb26_447c_8c84_23dc77a1d6bc.slice - libcontainer container kubepods-burstable-podcf27bd25_cb26_447c_8c84_23dc77a1d6bc.slice.
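The repeated kubelet errors above all come from one failure mode: driver-call.go invokes the FlexVolume driver's init command, but the executable at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist, so the call produces no stdout at all, and unmarshalling the empty output with Go's encoding/json yields exactly the "unexpected end of JSON input" logged here. A minimal sketch of that parse step (the driverStatus shape and parseInit helper are simplified illustrations, not the kubelet's actual types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus is a simplified stand-in for the response shape the kubelet
// expects a FlexVolume driver to print for `init` (the real type lives in
// kubelet's driver-call.go).
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message"`
}

// parseInit mimics unmarshalling the driver's stdout. A missing executable
// produces no output at all, so out is "" and parsing must fail.
func parseInit(out string) (*driverStatus, error) {
	var st driverStatus
	if err := json.Unmarshal([]byte(out), &st); err != nil {
		return nil, err
	}
	return &st, nil
}

func main() {
	// Empty output, as when the driver binary is not found in $PATH.
	_, err := parseInit("")
	fmt.Println(err) // unexpected end of JSON input

	// A well-formed driver response parses cleanly.
	st, _ := parseInit(`{"status":"Success"}`)
	fmt.Println(st.Status) // Success
}
```

This is why the two log lines always appear as a pair: the "executable file not found in $PATH" warning reports the exec failure, and the unmarshal error reports the downstream parse of the resulting empty output.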
Dec 13 13:28:42.933124 systemd[1]: Created slice kubepods-besteffort-podecc6ca41_b371_43b9_9b23_fe21358ee632.slice - libcontainer container kubepods-besteffort-podecc6ca41_b371_43b9_9b23_fe21358ee632.slice.
Dec 13 13:28:42.937572 systemd[1]: Created slice kubepods-besteffort-pod27f04941_3371_4139_bdc8_8fa4b3ff5199.slice - libcontainer container kubepods-besteffort-pod27f04941_3371_4139_bdc8_8fa4b3ff5199.slice.
Dec 13 13:28:42.982141 kubelet[2617]: I1213 13:28:42.982106 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9rwh\" (UniqueName: \"kubernetes.io/projected/a034a49a-558f-4b5e-a56a-f248911e85ff-kube-api-access-p9rwh\") pod \"coredns-6f6b679f8f-69bzb\" (UID: \"a034a49a-558f-4b5e-a56a-f248911e85ff\") " pod="kube-system/coredns-6f6b679f8f-69bzb"
Dec 13 13:28:42.982141 kubelet[2617]: I1213 13:28:42.982139 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2vsd\" (UniqueName: \"kubernetes.io/projected/ecc6ca41-b371-43b9-9b23-fe21358ee632-kube-api-access-s2vsd\") pod \"calico-kube-controllers-6579dc67df-9c4m8\" (UID: \"ecc6ca41-b371-43b9-9b23-fe21358ee632\") " pod="calico-system/calico-kube-controllers-6579dc67df-9c4m8"
Dec 13 13:28:42.982326 kubelet[2617]: I1213 13:28:42.982158 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knxl6\" (UniqueName: \"kubernetes.io/projected/4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e-kube-api-access-knxl6\") pod \"calico-apiserver-7dc76d6748-ck574\" (UID: \"4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e\") " pod="calico-apiserver/calico-apiserver-7dc76d6748-ck574"
Dec 13 13:28:42.982326 kubelet[2617]: I1213 13:28:42.982176 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8t77\" (UniqueName: \"kubernetes.io/projected/27f04941-3371-4139-bdc8-8fa4b3ff5199-kube-api-access-n8t77\") pod \"calico-apiserver-7dc76d6748-5xdq6\" (UID: \"27f04941-3371-4139-bdc8-8fa4b3ff5199\") " pod="calico-apiserver/calico-apiserver-7dc76d6748-5xdq6"
Dec 13 13:28:42.982326 kubelet[2617]: I1213 13:28:42.982293 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf27bd25-cb26-447c-8c84-23dc77a1d6bc-config-volume\") pod \"coredns-6f6b679f8f-hx7ds\" (UID: \"cf27bd25-cb26-447c-8c84-23dc77a1d6bc\") " pod="kube-system/coredns-6f6b679f8f-hx7ds"
Dec 13 13:28:42.982398 kubelet[2617]: I1213 13:28:42.982326 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecc6ca41-b371-43b9-9b23-fe21358ee632-tigera-ca-bundle\") pod \"calico-kube-controllers-6579dc67df-9c4m8\" (UID: \"ecc6ca41-b371-43b9-9b23-fe21358ee632\") " pod="calico-system/calico-kube-controllers-6579dc67df-9c4m8"
Dec 13 13:28:42.982398 kubelet[2617]: I1213 13:28:42.982349 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/27f04941-3371-4139-bdc8-8fa4b3ff5199-calico-apiserver-certs\") pod \"calico-apiserver-7dc76d6748-5xdq6\" (UID: \"27f04941-3371-4139-bdc8-8fa4b3ff5199\") " pod="calico-apiserver/calico-apiserver-7dc76d6748-5xdq6"
Dec 13 13:28:42.982398 kubelet[2617]: I1213 13:28:42.982372 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e-calico-apiserver-certs\") pod \"calico-apiserver-7dc76d6748-ck574\" (UID: \"4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e\") " pod="calico-apiserver/calico-apiserver-7dc76d6748-ck574"
Dec 13 13:28:42.982398 kubelet[2617]: I1213 13:28:42.982393 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q6sx\" (UniqueName: \"kubernetes.io/projected/cf27bd25-cb26-447c-8c84-23dc77a1d6bc-kube-api-access-8q6sx\") pod \"coredns-6f6b679f8f-hx7ds\" (UID: \"cf27bd25-cb26-447c-8c84-23dc77a1d6bc\") " pod="kube-system/coredns-6f6b679f8f-hx7ds"
Dec 13 13:28:42.982528 kubelet[2617]: I1213 13:28:42.982416 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a034a49a-558f-4b5e-a56a-f248911e85ff-config-volume\") pod \"coredns-6f6b679f8f-69bzb\" (UID: \"a034a49a-558f-4b5e-a56a-f248911e85ff\") " pod="kube-system/coredns-6f6b679f8f-69bzb"
Dec 13 13:28:43.197797 containerd[1498]: time="2024-12-13T13:28:43.197720694Z" level=info msg="shim disconnected" id=f46f734610c97c86aad159ff1a9988e08f3dc65e64add900d2ed9ff6fd9a45db namespace=k8s.io
Dec 13 13:28:43.197797 containerd[1498]: time="2024-12-13T13:28:43.197769787Z" level=warning msg="cleaning up after shim disconnected" id=f46f734610c97c86aad159ff1a9988e08f3dc65e64add900d2ed9ff6fd9a45db namespace=k8s.io
Dec 13 13:28:43.197797 containerd[1498]: time="2024-12-13T13:28:43.197779265Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:28:43.220741 kubelet[2617]: E1213 13:28:43.220688 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:43.230959 kubelet[2617]: E1213 13:28:43.230937 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:43.242175 containerd[1498]: time="2024-12-13T13:28:43.242134417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc76d6748-5xdq6,Uid:27f04941-3371-4139-bdc8-8fa4b3ff5199,Namespace:calico-apiserver,Attempt:0,}"
Dec 13 13:28:43.242498 containerd[1498]: time="2024-12-13T13:28:43.242424494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hx7ds,Uid:cf27bd25-cb26-447c-8c84-23dc77a1d6bc,Namespace:kube-system,Attempt:0,}"
Dec 13 13:28:43.242786 containerd[1498]: time="2024-12-13T13:28:43.242681620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-69bzb,Uid:a034a49a-558f-4b5e-a56a-f248911e85ff,Namespace:kube-system,Attempt:0,}"
Dec 13 13:28:43.242848 containerd[1498]: time="2024-12-13T13:28:43.242799151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6579dc67df-9c4m8,Uid:ecc6ca41-b371-43b9-9b23-fe21358ee632,Namespace:calico-system,Attempt:0,}"
Dec 13 13:28:43.242972 containerd[1498]: time="2024-12-13T13:28:43.242939586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc76d6748-ck574,Uid:4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e,Namespace:calico-apiserver,Attempt:0,}"
Dec 13 13:28:43.399378 containerd[1498]: time="2024-12-13T13:28:43.399309477Z" level=error msg="Failed to destroy network for sandbox \"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:43.402307 containerd[1498]: time="2024-12-13T13:28:43.402155305Z" level=error msg="encountered an error cleaning up failed sandbox \"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:43.402307 containerd[1498]: time="2024-12-13T13:28:43.402246026Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-69bzb,Uid:a034a49a-558f-4b5e-a56a-f248911e85ff,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:43.402509 kubelet[2617]: E1213 13:28:43.402465 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:43.402573 kubelet[2617]: E1213 13:28:43.402542 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-69bzb"
Dec 13 13:28:43.402573 kubelet[2617]: E1213 13:28:43.402561 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-69bzb"
Dec 13 13:28:43.402673 kubelet[2617]: E1213 13:28:43.402642 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-69bzb_kube-system(a034a49a-558f-4b5e-a56a-f248911e85ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-69bzb_kube-system(a034a49a-558f-4b5e-a56a-f248911e85ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-69bzb" podUID="a034a49a-558f-4b5e-a56a-f248911e85ff"
Dec 13 13:28:43.407939 containerd[1498]: time="2024-12-13T13:28:43.407813756Z" level=error msg="Failed to destroy network for sandbox \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:43.408265 containerd[1498]: time="2024-12-13T13:28:43.408242124Z" level=error msg="encountered an error cleaning up failed sandbox \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:43.408322 containerd[1498]: time="2024-12-13T13:28:43.408300665Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc76d6748-ck574,Uid:4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:43.408601 kubelet[2617]: E1213 13:28:43.408551 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:43.408737 containerd[1498]: time="2024-12-13T13:28:43.408709355Z" level=error msg="Failed to destroy network for sandbox \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:43.408878 kubelet[2617]: E1213 13:28:43.408825 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc76d6748-ck574"
Dec 13 13:28:43.408935 kubelet[2617]: E1213 13:28:43.408884 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc76d6748-ck574"
Dec 13 13:28:43.408966 containerd[1498]: time="2024-12-13T13:28:43.408821757Z" level=error msg="Failed to destroy network for sandbox \"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:43.408992 kubelet[2617]: E1213 13:28:43.408933 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7dc76d6748-ck574_calico-apiserver(4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7dc76d6748-ck574_calico-apiserver(4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dc76d6748-ck574" podUID="4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e"
Dec 13 13:28:43.409589 containerd[1498]: time="2024-12-13T13:28:43.409440404Z" level=error msg="encountered an error cleaning up failed sandbox \"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:43.409589 containerd[1498]: time="2024-12-13T13:28:43.409481501Z" level=error msg="encountered an error cleaning up failed sandbox \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:43.409589 containerd[1498]: time="2024-12-13T13:28:43.409494195Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc76d6748-5xdq6,Uid:27f04941-3371-4139-bdc8-8fa4b3ff5199,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:43.409589 containerd[1498]: time="2024-12-13T13:28:43.409519002Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hx7ds,Uid:cf27bd25-cb26-447c-8c84-23dc77a1d6bc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:43.409760 kubelet[2617]: E1213 13:28:43.409676 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:43.409760 kubelet[2617]: E1213 13:28:43.409706 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running
and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc76d6748-5xdq6" Dec 13 13:28:43.409760 kubelet[2617]: E1213 13:28:43.409721 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc76d6748-5xdq6" Dec 13 13:28:43.409875 kubelet[2617]: E1213 13:28:43.409742 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7dc76d6748-5xdq6_calico-apiserver(27f04941-3371-4139-bdc8-8fa4b3ff5199)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7dc76d6748-5xdq6_calico-apiserver(27f04941-3371-4139-bdc8-8fa4b3ff5199)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dc76d6748-5xdq6" podUID="27f04941-3371-4139-bdc8-8fa4b3ff5199" Dec 13 13:28:43.409875 kubelet[2617]: E1213 13:28:43.409675 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:43.409875 kubelet[2617]: E1213 13:28:43.409772 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for 
pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hx7ds" Dec 13 13:28:43.409970 kubelet[2617]: E1213 13:28:43.409784 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hx7ds" Dec 13 13:28:43.409970 kubelet[2617]: E1213 13:28:43.409803 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hx7ds_kube-system(cf27bd25-cb26-447c-8c84-23dc77a1d6bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hx7ds_kube-system(cf27bd25-cb26-447c-8c84-23dc77a1d6bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hx7ds" podUID="cf27bd25-cb26-447c-8c84-23dc77a1d6bc" Dec 13 13:28:43.410054 containerd[1498]: time="2024-12-13T13:28:43.410026810Z" level=error msg="Failed to destroy network for sandbox \"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Dec 13 13:28:43.410361 containerd[1498]: time="2024-12-13T13:28:43.410336254Z" level=error msg="encountered an error cleaning up failed sandbox \"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:43.410424 containerd[1498]: time="2024-12-13T13:28:43.410375427Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6579dc67df-9c4m8,Uid:ecc6ca41-b371-43b9-9b23-fe21358ee632,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:43.410527 kubelet[2617]: E1213 13:28:43.410503 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:43.410564 kubelet[2617]: E1213 13:28:43.410534 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-6579dc67df-9c4m8" Dec 13 13:28:43.410564 kubelet[2617]: E1213 13:28:43.410548 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6579dc67df-9c4m8" Dec 13 13:28:43.410702 kubelet[2617]: E1213 13:28:43.410578 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6579dc67df-9c4m8_calico-system(ecc6ca41-b371-43b9-9b23-fe21358ee632)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6579dc67df-9c4m8_calico-system(ecc6ca41-b371-43b9-9b23-fe21358ee632)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6579dc67df-9c4m8" podUID="ecc6ca41-b371-43b9-9b23-fe21358ee632" Dec 13 13:28:43.496159 kubelet[2617]: I1213 13:28:43.496056 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a" Dec 13 13:28:43.496695 containerd[1498]: time="2024-12-13T13:28:43.496654709Z" level=info msg="StopPodSandbox for \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\"" Dec 13 13:28:43.497320 kubelet[2617]: I1213 13:28:43.496915 2617 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f" Dec 13 13:28:43.497453 containerd[1498]: time="2024-12-13T13:28:43.497001102Z" level=info msg="Ensure that sandbox ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a in task-service has been cleanup successfully" Dec 13 13:28:43.497453 containerd[1498]: time="2024-12-13T13:28:43.497339330Z" level=info msg="TearDown network for sandbox \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\" successfully" Dec 13 13:28:43.497453 containerd[1498]: time="2024-12-13T13:28:43.497352845Z" level=info msg="StopPodSandbox for \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\" returns successfully" Dec 13 13:28:43.497739 kubelet[2617]: E1213 13:28:43.497675 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:43.497806 containerd[1498]: time="2024-12-13T13:28:43.497721330Z" level=info msg="StopPodSandbox for \"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\"" Dec 13 13:28:43.497945 containerd[1498]: time="2024-12-13T13:28:43.497911960Z" level=info msg="Ensure that sandbox 375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f in task-service has been cleanup successfully" Dec 13 13:28:43.498116 containerd[1498]: time="2024-12-13T13:28:43.498054529Z" level=info msg="TearDown network for sandbox \"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\" successfully" Dec 13 13:28:43.498116 containerd[1498]: time="2024-12-13T13:28:43.498070630Z" level=info msg="StopPodSandbox for \"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\" returns successfully" Dec 13 13:28:43.498222 containerd[1498]: time="2024-12-13T13:28:43.498187249Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-hx7ds,Uid:cf27bd25-cb26-447c-8c84-23dc77a1d6bc,Namespace:kube-system,Attempt:1,}" Dec 13 13:28:43.498654 containerd[1498]: time="2024-12-13T13:28:43.498427383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc76d6748-5xdq6,Uid:27f04941-3371-4139-bdc8-8fa4b3ff5199,Namespace:calico-apiserver,Attempt:1,}" Dec 13 13:28:43.499239 kubelet[2617]: I1213 13:28:43.498981 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd" Dec 13 13:28:43.499458 containerd[1498]: time="2024-12-13T13:28:43.499431957Z" level=info msg="StopPodSandbox for \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\"" Dec 13 13:28:43.499616 containerd[1498]: time="2024-12-13T13:28:43.499597920Z" level=info msg="Ensure that sandbox dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd in task-service has been cleanup successfully" Dec 13 13:28:43.499781 containerd[1498]: time="2024-12-13T13:28:43.499764293Z" level=info msg="TearDown network for sandbox \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\" successfully" Dec 13 13:28:43.499842 containerd[1498]: time="2024-12-13T13:28:43.499787597Z" level=info msg="StopPodSandbox for \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\" returns successfully" Dec 13 13:28:43.500145 containerd[1498]: time="2024-12-13T13:28:43.500124563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc76d6748-ck574,Uid:4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e,Namespace:calico-apiserver,Attempt:1,}" Dec 13 13:28:43.500313 kubelet[2617]: I1213 13:28:43.500291 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf" Dec 13 13:28:43.500921 containerd[1498]: time="2024-12-13T13:28:43.500657448Z" level=info msg="StopPodSandbox for 
\"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\"" Dec 13 13:28:43.500921 containerd[1498]: time="2024-12-13T13:28:43.500798043Z" level=info msg="Ensure that sandbox 8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf in task-service has been cleanup successfully" Dec 13 13:28:43.501041 containerd[1498]: time="2024-12-13T13:28:43.501024079Z" level=info msg="TearDown network for sandbox \"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\" successfully" Dec 13 13:28:43.501041 containerd[1498]: time="2024-12-13T13:28:43.501039679Z" level=info msg="StopPodSandbox for \"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\" returns successfully" Dec 13 13:28:43.501614 containerd[1498]: time="2024-12-13T13:28:43.501581721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6579dc67df-9c4m8,Uid:ecc6ca41-b371-43b9-9b23-fe21358ee632,Namespace:calico-system,Attempt:1,}" Dec 13 13:28:43.501940 kubelet[2617]: E1213 13:28:43.501915 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:43.502499 kubelet[2617]: I1213 13:28:43.502470 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e" Dec 13 13:28:43.502551 containerd[1498]: time="2024-12-13T13:28:43.502477330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 13:28:43.502797 containerd[1498]: time="2024-12-13T13:28:43.502767407Z" level=info msg="StopPodSandbox for \"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\"" Dec 13 13:28:43.503305 containerd[1498]: time="2024-12-13T13:28:43.503203741Z" level=info msg="Ensure that sandbox 4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e in task-service has been cleanup successfully" Dec 13 13:28:43.504374 
containerd[1498]: time="2024-12-13T13:28:43.503579800Z" level=info msg="TearDown network for sandbox \"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\" successfully" Dec 13 13:28:43.504374 containerd[1498]: time="2024-12-13T13:28:43.503609997Z" level=info msg="StopPodSandbox for \"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\" returns successfully" Dec 13 13:28:43.505939 kubelet[2617]: E1213 13:28:43.505917 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:43.506185 containerd[1498]: time="2024-12-13T13:28:43.506164304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-69bzb,Uid:a034a49a-558f-4b5e-a56a-f248911e85ff,Namespace:kube-system,Attempt:1,}" Dec 13 13:28:43.743235 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f-shm.mount: Deactivated successfully. Dec 13 13:28:43.789401 containerd[1498]: time="2024-12-13T13:28:43.789185307Z" level=error msg="Failed to destroy network for sandbox \"985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:43.792048 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0-shm.mount: Deactivated successfully. 
Dec 13 13:28:43.794242 containerd[1498]: time="2024-12-13T13:28:43.794085138Z" level=error msg="encountered an error cleaning up failed sandbox \"985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:43.794242 containerd[1498]: time="2024-12-13T13:28:43.794153657Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hx7ds,Uid:cf27bd25-cb26-447c-8c84-23dc77a1d6bc,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:43.794454 kubelet[2617]: E1213 13:28:43.794372 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:43.794770 kubelet[2617]: E1213 13:28:43.794648 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hx7ds" Dec 13 13:28:43.794770 kubelet[2617]: E1213 13:28:43.794673 2617 
kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hx7ds" Dec 13 13:28:43.794770 kubelet[2617]: E1213 13:28:43.794714 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hx7ds_kube-system(cf27bd25-cb26-447c-8c84-23dc77a1d6bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hx7ds_kube-system(cf27bd25-cb26-447c-8c84-23dc77a1d6bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hx7ds" podUID="cf27bd25-cb26-447c-8c84-23dc77a1d6bc" Dec 13 13:28:43.812367 containerd[1498]: time="2024-12-13T13:28:43.812273812Z" level=error msg="Failed to destroy network for sandbox \"7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:43.814623 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf-shm.mount: Deactivated successfully. 
Dec 13 13:28:43.817049 containerd[1498]: time="2024-12-13T13:28:43.817009863Z" level=error msg="encountered an error cleaning up failed sandbox \"7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:43.817245 containerd[1498]: time="2024-12-13T13:28:43.817225541Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc76d6748-5xdq6,Uid:27f04941-3371-4139-bdc8-8fa4b3ff5199,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:43.817580 kubelet[2617]: E1213 13:28:43.817537 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:43.817633 kubelet[2617]: E1213 13:28:43.817608 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc76d6748-5xdq6" Dec 13 13:28:43.817657 kubelet[2617]: E1213 
13:28:43.817632 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc76d6748-5xdq6" Dec 13 13:28:43.817891 kubelet[2617]: E1213 13:28:43.817722 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7dc76d6748-5xdq6_calico-apiserver(27f04941-3371-4139-bdc8-8fa4b3ff5199)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7dc76d6748-5xdq6_calico-apiserver(27f04941-3371-4139-bdc8-8fa4b3ff5199)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dc76d6748-5xdq6" podUID="27f04941-3371-4139-bdc8-8fa4b3ff5199" Dec 13 13:28:43.830424 containerd[1498]: time="2024-12-13T13:28:43.830360363Z" level=error msg="Failed to destroy network for sandbox \"775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:43.832888 containerd[1498]: time="2024-12-13T13:28:43.830819529Z" level=error msg="encountered an error cleaning up failed sandbox \"775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:43.832888 containerd[1498]: time="2024-12-13T13:28:43.830907724Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-69bzb,Uid:a034a49a-558f-4b5e-a56a-f248911e85ff,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:43.832657 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9-shm.mount: Deactivated successfully. Dec 13 13:28:43.833018 kubelet[2617]: E1213 13:28:43.831197 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:43.833018 kubelet[2617]: E1213 13:28:43.831278 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-69bzb" Dec 13 13:28:43.833018 kubelet[2617]: E1213 13:28:43.831303 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-69bzb" Dec 13 13:28:43.833115 kubelet[2617]: E1213 13:28:43.831351 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-69bzb_kube-system(a034a49a-558f-4b5e-a56a-f248911e85ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-69bzb_kube-system(a034a49a-558f-4b5e-a56a-f248911e85ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-69bzb" podUID="a034a49a-558f-4b5e-a56a-f248911e85ff" Dec 13 13:28:43.834760 containerd[1498]: time="2024-12-13T13:28:43.834717330Z" level=error msg="Failed to destroy network for sandbox \"9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:43.835501 containerd[1498]: time="2024-12-13T13:28:43.835463517Z" level=error msg="Failed to destroy network for sandbox \"e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:43.835709 containerd[1498]: time="2024-12-13T13:28:43.835512630Z" level=error msg="encountered an error 
cleaning up failed sandbox \"9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:43.835777 containerd[1498]: time="2024-12-13T13:28:43.835743074Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc76d6748-ck574,Uid:4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:43.836338 kubelet[2617]: E1213 13:28:43.835924 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:43.836338 kubelet[2617]: E1213 13:28:43.835974 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc76d6748-ck574" Dec 13 13:28:43.836338 kubelet[2617]: E1213 13:28:43.835994 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc76d6748-ck574" Dec 13 13:28:43.836472 containerd[1498]: time="2024-12-13T13:28:43.836136195Z" level=error msg="encountered an error cleaning up failed sandbox \"e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:43.836472 containerd[1498]: time="2024-12-13T13:28:43.836239049Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6579dc67df-9c4m8,Uid:ecc6ca41-b371-43b9-9b23-fe21358ee632,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:43.836579 kubelet[2617]: E1213 13:28:43.836028 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7dc76d6748-ck574_calico-apiserver(4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7dc76d6748-ck574_calico-apiserver(4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dc76d6748-ck574" podUID="4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e" Dec 13 13:28:43.836579 kubelet[2617]: E1213 13:28:43.836410 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:43.836579 kubelet[2617]: E1213 13:28:43.836479 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6579dc67df-9c4m8" Dec 13 13:28:43.836713 kubelet[2617]: E1213 13:28:43.836495 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6579dc67df-9c4m8" Dec 13 13:28:43.836713 kubelet[2617]: E1213 13:28:43.836563 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6579dc67df-9c4m8_calico-system(ecc6ca41-b371-43b9-9b23-fe21358ee632)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-6579dc67df-9c4m8_calico-system(ecc6ca41-b371-43b9-9b23-fe21358ee632)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6579dc67df-9c4m8" podUID="ecc6ca41-b371-43b9-9b23-fe21358ee632" Dec 13 13:28:43.837667 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015-shm.mount: Deactivated successfully. Dec 13 13:28:43.837789 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e-shm.mount: Deactivated successfully. Dec 13 13:28:44.221020 systemd[1]: Created slice kubepods-besteffort-pod169b145c_9dd2_4ef7_8f30_2acc264f69a4.slice - libcontainer container kubepods-besteffort-pod169b145c_9dd2_4ef7_8f30_2acc264f69a4.slice. 
Dec 13 13:28:44.223284 containerd[1498]: time="2024-12-13T13:28:44.223253010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sqmg9,Uid:169b145c-9dd2-4ef7-8f30-2acc264f69a4,Namespace:calico-system,Attempt:0,}" Dec 13 13:28:44.272279 containerd[1498]: time="2024-12-13T13:28:44.272219138Z" level=error msg="Failed to destroy network for sandbox \"f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:44.272591 containerd[1498]: time="2024-12-13T13:28:44.272566904Z" level=error msg="encountered an error cleaning up failed sandbox \"f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:44.272636 containerd[1498]: time="2024-12-13T13:28:44.272620014Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sqmg9,Uid:169b145c-9dd2-4ef7-8f30-2acc264f69a4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:44.272878 kubelet[2617]: E1213 13:28:44.272818 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:44.272948 kubelet[2617]: E1213 13:28:44.272905 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sqmg9" Dec 13 13:28:44.272948 kubelet[2617]: E1213 13:28:44.272932 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sqmg9" Dec 13 13:28:44.273016 kubelet[2617]: E1213 13:28:44.272976 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sqmg9_calico-system(169b145c-9dd2-4ef7-8f30-2acc264f69a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sqmg9_calico-system(169b145c-9dd2-4ef7-8f30-2acc264f69a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sqmg9" podUID="169b145c-9dd2-4ef7-8f30-2acc264f69a4" Dec 13 13:28:44.505159 kubelet[2617]: I1213 13:28:44.504745 2617 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9" Dec 13 13:28:44.505536 containerd[1498]: time="2024-12-13T13:28:44.505218401Z" level=info msg="StopPodSandbox for \"775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9\"" Dec 13 13:28:44.505536 containerd[1498]: time="2024-12-13T13:28:44.505400324Z" level=info msg="Ensure that sandbox 775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9 in task-service has been cleanup successfully" Dec 13 13:28:44.506448 containerd[1498]: time="2024-12-13T13:28:44.505664672Z" level=info msg="TearDown network for sandbox \"775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9\" successfully" Dec 13 13:28:44.506448 containerd[1498]: time="2024-12-13T13:28:44.505685843Z" level=info msg="StopPodSandbox for \"775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9\" returns successfully" Dec 13 13:28:44.506448 containerd[1498]: time="2024-12-13T13:28:44.505893825Z" level=info msg="StopPodSandbox for \"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\"" Dec 13 13:28:44.506448 containerd[1498]: time="2024-12-13T13:28:44.505972583Z" level=info msg="TearDown network for sandbox \"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\" successfully" Dec 13 13:28:44.506448 containerd[1498]: time="2024-12-13T13:28:44.505983825Z" level=info msg="StopPodSandbox for \"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\" returns successfully" Dec 13 13:28:44.506624 kubelet[2617]: I1213 13:28:44.506145 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0" Dec 13 13:28:44.506624 kubelet[2617]: E1213 13:28:44.506145 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:44.506680 containerd[1498]: 
time="2024-12-13T13:28:44.506444874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-69bzb,Uid:a034a49a-558f-4b5e-a56a-f248911e85ff,Namespace:kube-system,Attempt:2,}" Dec 13 13:28:44.506680 containerd[1498]: time="2024-12-13T13:28:44.506556013Z" level=info msg="StopPodSandbox for \"985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0\"" Dec 13 13:28:44.506768 containerd[1498]: time="2024-12-13T13:28:44.506710064Z" level=info msg="Ensure that sandbox 985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0 in task-service has been cleanup successfully" Dec 13 13:28:44.507415 containerd[1498]: time="2024-12-13T13:28:44.507347476Z" level=info msg="TearDown network for sandbox \"985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0\" successfully" Dec 13 13:28:44.507415 containerd[1498]: time="2024-12-13T13:28:44.507375308Z" level=info msg="StopPodSandbox for \"985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0\" returns successfully" Dec 13 13:28:44.507603 kubelet[2617]: I1213 13:28:44.507585 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf" Dec 13 13:28:44.507712 containerd[1498]: time="2024-12-13T13:28:44.507691364Z" level=info msg="StopPodSandbox for \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\"" Dec 13 13:28:44.507808 containerd[1498]: time="2024-12-13T13:28:44.507789669Z" level=info msg="TearDown network for sandbox \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\" successfully" Dec 13 13:28:44.507856 containerd[1498]: time="2024-12-13T13:28:44.507808564Z" level=info msg="StopPodSandbox for \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\" returns successfully" Dec 13 13:28:44.508016 containerd[1498]: time="2024-12-13T13:28:44.507997962Z" level=info msg="StopPodSandbox for 
\"7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf\"" Dec 13 13:28:44.508167 containerd[1498]: time="2024-12-13T13:28:44.508150459Z" level=info msg="Ensure that sandbox 7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf in task-service has been cleanup successfully" Dec 13 13:28:44.508323 containerd[1498]: time="2024-12-13T13:28:44.508289231Z" level=info msg="TearDown network for sandbox \"7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf\" successfully" Dec 13 13:28:44.508383 containerd[1498]: time="2024-12-13T13:28:44.508322303Z" level=info msg="StopPodSandbox for \"7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf\" returns successfully" Dec 13 13:28:44.508461 kubelet[2617]: E1213 13:28:44.508443 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:44.508623 containerd[1498]: time="2024-12-13T13:28:44.508600358Z" level=info msg="StopPodSandbox for \"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\"" Dec 13 13:28:44.508623 containerd[1498]: time="2024-12-13T13:28:44.508616438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hx7ds,Uid:cf27bd25-cb26-447c-8c84-23dc77a1d6bc,Namespace:kube-system,Attempt:2,}" Dec 13 13:28:44.508704 containerd[1498]: time="2024-12-13T13:28:44.508686881Z" level=info msg="TearDown network for sandbox \"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\" successfully" Dec 13 13:28:44.508729 containerd[1498]: time="2024-12-13T13:28:44.508700106Z" level=info msg="StopPodSandbox for \"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\" returns successfully" Dec 13 13:28:44.509116 containerd[1498]: time="2024-12-13T13:28:44.509001394Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7dc76d6748-5xdq6,Uid:27f04941-3371-4139-bdc8-8fa4b3ff5199,Namespace:calico-apiserver,Attempt:2,}" Dec 13 13:28:44.509401 kubelet[2617]: I1213 13:28:44.509385 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015" Dec 13 13:28:44.510202 containerd[1498]: time="2024-12-13T13:28:44.509913483Z" level=info msg="StopPodSandbox for \"e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015\"" Dec 13 13:28:44.510202 containerd[1498]: time="2024-12-13T13:28:44.510088212Z" level=info msg="Ensure that sandbox e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015 in task-service has been cleanup successfully" Dec 13 13:28:44.510345 containerd[1498]: time="2024-12-13T13:28:44.510325720Z" level=info msg="TearDown network for sandbox \"e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015\" successfully" Dec 13 13:28:44.510429 containerd[1498]: time="2024-12-13T13:28:44.510409578Z" level=info msg="StopPodSandbox for \"e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015\" returns successfully" Dec 13 13:28:44.510708 containerd[1498]: time="2024-12-13T13:28:44.510682102Z" level=info msg="StopPodSandbox for \"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\"" Dec 13 13:28:44.511412 containerd[1498]: time="2024-12-13T13:28:44.510764047Z" level=info msg="TearDown network for sandbox \"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\" successfully" Dec 13 13:28:44.511412 containerd[1498]: time="2024-12-13T13:28:44.510773936Z" level=info msg="StopPodSandbox for \"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\" returns successfully" Dec 13 13:28:44.511412 containerd[1498]: time="2024-12-13T13:28:44.511221609Z" level=info msg="StopPodSandbox for \"f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3\"" Dec 13 13:28:44.511412 containerd[1498]: 
time="2024-12-13T13:28:44.511390298Z" level=info msg="Ensure that sandbox f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3 in task-service has been cleanup successfully" Dec 13 13:28:44.511712 kubelet[2617]: I1213 13:28:44.510881 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3" Dec 13 13:28:44.511751 containerd[1498]: time="2024-12-13T13:28:44.511542514Z" level=info msg="TearDown network for sandbox \"f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3\" successfully" Dec 13 13:28:44.511751 containerd[1498]: time="2024-12-13T13:28:44.511554196Z" level=info msg="StopPodSandbox for \"f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3\" returns successfully" Dec 13 13:28:44.512338 containerd[1498]: time="2024-12-13T13:28:44.512042948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6579dc67df-9c4m8,Uid:ecc6ca41-b371-43b9-9b23-fe21358ee632,Namespace:calico-system,Attempt:2,}" Dec 13 13:28:44.512338 containerd[1498]: time="2024-12-13T13:28:44.512169747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sqmg9,Uid:169b145c-9dd2-4ef7-8f30-2acc264f69a4,Namespace:calico-system,Attempt:1,}" Dec 13 13:28:44.513009 kubelet[2617]: I1213 13:28:44.512988 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e" Dec 13 13:28:44.513378 containerd[1498]: time="2024-12-13T13:28:44.513342529Z" level=info msg="StopPodSandbox for \"9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e\"" Dec 13 13:28:44.513808 containerd[1498]: time="2024-12-13T13:28:44.513503051Z" level=info msg="Ensure that sandbox 9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e in task-service has been cleanup successfully" Dec 13 13:28:44.513808 containerd[1498]: 
time="2024-12-13T13:28:44.513682118Z" level=info msg="TearDown network for sandbox \"9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e\" successfully" Dec 13 13:28:44.513808 containerd[1498]: time="2024-12-13T13:28:44.513699481Z" level=info msg="StopPodSandbox for \"9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e\" returns successfully" Dec 13 13:28:44.514526 containerd[1498]: time="2024-12-13T13:28:44.514491364Z" level=info msg="StopPodSandbox for \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\"" Dec 13 13:28:44.514599 containerd[1498]: time="2024-12-13T13:28:44.514579390Z" level=info msg="TearDown network for sandbox \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\" successfully" Dec 13 13:28:44.514599 containerd[1498]: time="2024-12-13T13:28:44.514593066Z" level=info msg="StopPodSandbox for \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\" returns successfully" Dec 13 13:28:44.515086 containerd[1498]: time="2024-12-13T13:28:44.515044036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc76d6748-ck574,Uid:4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e,Namespace:calico-apiserver,Attempt:2,}" Dec 13 13:28:44.628492 containerd[1498]: time="2024-12-13T13:28:44.628434135Z" level=error msg="Failed to destroy network for sandbox \"12d18c9c8caf0816c4f0e2be01b939ab0d19bd7c6043694b69e328287df2ae7e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:44.629732 containerd[1498]: time="2024-12-13T13:28:44.629660888Z" level=error msg="encountered an error cleaning up failed sandbox \"12d18c9c8caf0816c4f0e2be01b939ab0d19bd7c6043694b69e328287df2ae7e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Dec 13 13:28:44.630004 containerd[1498]: time="2024-12-13T13:28:44.629898887Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc76d6748-5xdq6,Uid:27f04941-3371-4139-bdc8-8fa4b3ff5199,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"12d18c9c8caf0816c4f0e2be01b939ab0d19bd7c6043694b69e328287df2ae7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:44.630697 kubelet[2617]: E1213 13:28:44.630278 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12d18c9c8caf0816c4f0e2be01b939ab0d19bd7c6043694b69e328287df2ae7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:44.630697 kubelet[2617]: E1213 13:28:44.630342 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12d18c9c8caf0816c4f0e2be01b939ab0d19bd7c6043694b69e328287df2ae7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc76d6748-5xdq6" Dec 13 13:28:44.630697 kubelet[2617]: E1213 13:28:44.630381 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12d18c9c8caf0816c4f0e2be01b939ab0d19bd7c6043694b69e328287df2ae7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc76d6748-5xdq6" Dec 13 13:28:44.630887 kubelet[2617]: E1213 13:28:44.630429 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7dc76d6748-5xdq6_calico-apiserver(27f04941-3371-4139-bdc8-8fa4b3ff5199)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7dc76d6748-5xdq6_calico-apiserver(27f04941-3371-4139-bdc8-8fa4b3ff5199)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12d18c9c8caf0816c4f0e2be01b939ab0d19bd7c6043694b69e328287df2ae7e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dc76d6748-5xdq6" podUID="27f04941-3371-4139-bdc8-8fa4b3ff5199" Dec 13 13:28:44.657941 containerd[1498]: time="2024-12-13T13:28:44.657871856Z" level=error msg="Failed to destroy network for sandbox \"f62ed592425827b96a5818a057bd7da73a44a047d6025699007b16edc8dacd5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:44.659157 containerd[1498]: time="2024-12-13T13:28:44.659120971Z" level=error msg="encountered an error cleaning up failed sandbox \"f62ed592425827b96a5818a057bd7da73a44a047d6025699007b16edc8dacd5e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:44.659227 containerd[1498]: time="2024-12-13T13:28:44.659189200Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sqmg9,Uid:169b145c-9dd2-4ef7-8f30-2acc264f69a4,Namespace:calico-system,Attempt:1,} failed, 
error" error="failed to setup network for sandbox \"f62ed592425827b96a5818a057bd7da73a44a047d6025699007b16edc8dacd5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:44.659606 kubelet[2617]: E1213 13:28:44.659481 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f62ed592425827b96a5818a057bd7da73a44a047d6025699007b16edc8dacd5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:44.659606 kubelet[2617]: E1213 13:28:44.659550 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f62ed592425827b96a5818a057bd7da73a44a047d6025699007b16edc8dacd5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sqmg9" Dec 13 13:28:44.659606 kubelet[2617]: E1213 13:28:44.659574 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f62ed592425827b96a5818a057bd7da73a44a047d6025699007b16edc8dacd5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sqmg9" Dec 13 13:28:44.659957 kubelet[2617]: E1213 13:28:44.659774 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sqmg9_calico-system(169b145c-9dd2-4ef7-8f30-2acc264f69a4)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"csi-node-driver-sqmg9_calico-system(169b145c-9dd2-4ef7-8f30-2acc264f69a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f62ed592425827b96a5818a057bd7da73a44a047d6025699007b16edc8dacd5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sqmg9" podUID="169b145c-9dd2-4ef7-8f30-2acc264f69a4" Dec 13 13:28:44.660484 containerd[1498]: time="2024-12-13T13:28:44.660455717Z" level=error msg="Failed to destroy network for sandbox \"93e64fa12ec2325b5ac143026f4b24fd2ac61166c18a7f41736ba8deb59dee39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:44.661183 containerd[1498]: time="2024-12-13T13:28:44.660901477Z" level=error msg="encountered an error cleaning up failed sandbox \"93e64fa12ec2325b5ac143026f4b24fd2ac61166c18a7f41736ba8deb59dee39\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:44.661183 containerd[1498]: time="2024-12-13T13:28:44.660946232Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-69bzb,Uid:a034a49a-558f-4b5e-a56a-f248911e85ff,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"93e64fa12ec2325b5ac143026f4b24fd2ac61166c18a7f41736ba8deb59dee39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:44.661271 kubelet[2617]: E1213 13:28:44.661067 2617 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93e64fa12ec2325b5ac143026f4b24fd2ac61166c18a7f41736ba8deb59dee39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:44.661271 kubelet[2617]: E1213 13:28:44.661097 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93e64fa12ec2325b5ac143026f4b24fd2ac61166c18a7f41736ba8deb59dee39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-69bzb" Dec 13 13:28:44.661271 kubelet[2617]: E1213 13:28:44.661116 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93e64fa12ec2325b5ac143026f4b24fd2ac61166c18a7f41736ba8deb59dee39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-69bzb" Dec 13 13:28:44.661386 kubelet[2617]: E1213 13:28:44.661151 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-69bzb_kube-system(a034a49a-558f-4b5e-a56a-f248911e85ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-69bzb_kube-system(a034a49a-558f-4b5e-a56a-f248911e85ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93e64fa12ec2325b5ac143026f4b24fd2ac61166c18a7f41736ba8deb59dee39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-69bzb" podUID="a034a49a-558f-4b5e-a56a-f248911e85ff" Dec 13 13:28:44.665256 containerd[1498]: time="2024-12-13T13:28:44.665078433Z" level=error msg="Failed to destroy network for sandbox \"03c600e6d0728e62b25f6ed67f75a8b294306c981d94ddbebfc34037a6b14765\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:44.665498 containerd[1498]: time="2024-12-13T13:28:44.665468778Z" level=error msg="encountered an error cleaning up failed sandbox \"03c600e6d0728e62b25f6ed67f75a8b294306c981d94ddbebfc34037a6b14765\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:44.665557 containerd[1498]: time="2024-12-13T13:28:44.665520727Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc76d6748-ck574,Uid:4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"03c600e6d0728e62b25f6ed67f75a8b294306c981d94ddbebfc34037a6b14765\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:44.665718 kubelet[2617]: E1213 13:28:44.665689 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03c600e6d0728e62b25f6ed67f75a8b294306c981d94ddbebfc34037a6b14765\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:44.665859 kubelet[2617]: E1213 
13:28:44.665794 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03c600e6d0728e62b25f6ed67f75a8b294306c981d94ddbebfc34037a6b14765\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc76d6748-ck574" Dec 13 13:28:44.665859 kubelet[2617]: E1213 13:28:44.665818 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03c600e6d0728e62b25f6ed67f75a8b294306c981d94ddbebfc34037a6b14765\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc76d6748-ck574" Dec 13 13:28:44.666043 kubelet[2617]: E1213 13:28:44.665992 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7dc76d6748-ck574_calico-apiserver(4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7dc76d6748-ck574_calico-apiserver(4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03c600e6d0728e62b25f6ed67f75a8b294306c981d94ddbebfc34037a6b14765\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dc76d6748-ck574" podUID="4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e" Dec 13 13:28:44.666761 containerd[1498]: time="2024-12-13T13:28:44.666731099Z" level=error msg="Failed to destroy network for sandbox 
\"a21907390b05bc41353374ecb2a1e62c9f6ccf90819a76ff634efce9923d4ffa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:44.667113 containerd[1498]: time="2024-12-13T13:28:44.667089634Z" level=error msg="encountered an error cleaning up failed sandbox \"a21907390b05bc41353374ecb2a1e62c9f6ccf90819a76ff634efce9923d4ffa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:44.667174 containerd[1498]: time="2024-12-13T13:28:44.667126824Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6579dc67df-9c4m8,Uid:ecc6ca41-b371-43b9-9b23-fe21358ee632,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"a21907390b05bc41353374ecb2a1e62c9f6ccf90819a76ff634efce9923d4ffa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:44.667256 kubelet[2617]: E1213 13:28:44.667221 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a21907390b05bc41353374ecb2a1e62c9f6ccf90819a76ff634efce9923d4ffa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:44.667311 kubelet[2617]: E1213 13:28:44.667255 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a21907390b05bc41353374ecb2a1e62c9f6ccf90819a76ff634efce9923d4ffa\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6579dc67df-9c4m8" Dec 13 13:28:44.667311 kubelet[2617]: E1213 13:28:44.667269 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a21907390b05bc41353374ecb2a1e62c9f6ccf90819a76ff634efce9923d4ffa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6579dc67df-9c4m8" Dec 13 13:28:44.667311 kubelet[2617]: E1213 13:28:44.667294 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6579dc67df-9c4m8_calico-system(ecc6ca41-b371-43b9-9b23-fe21358ee632)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6579dc67df-9c4m8_calico-system(ecc6ca41-b371-43b9-9b23-fe21358ee632)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a21907390b05bc41353374ecb2a1e62c9f6ccf90819a76ff634efce9923d4ffa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6579dc67df-9c4m8" podUID="ecc6ca41-b371-43b9-9b23-fe21358ee632" Dec 13 13:28:44.671792 containerd[1498]: time="2024-12-13T13:28:44.671762434Z" level=error msg="Failed to destroy network for sandbox \"3c60343585f7563f8d92f9f028242cfed77327709ae5bc696f2b7011ba0df5c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:44.672091 
containerd[1498]: time="2024-12-13T13:28:44.672062320Z" level=error msg="encountered an error cleaning up failed sandbox \"3c60343585f7563f8d92f9f028242cfed77327709ae5bc696f2b7011ba0df5c7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:44.672130 containerd[1498]: time="2024-12-13T13:28:44.672100080Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hx7ds,Uid:cf27bd25-cb26-447c-8c84-23dc77a1d6bc,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"3c60343585f7563f8d92f9f028242cfed77327709ae5bc696f2b7011ba0df5c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:44.672236 kubelet[2617]: E1213 13:28:44.672208 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c60343585f7563f8d92f9f028242cfed77327709ae5bc696f2b7011ba0df5c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:44.672273 kubelet[2617]: E1213 13:28:44.672240 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c60343585f7563f8d92f9f028242cfed77327709ae5bc696f2b7011ba0df5c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hx7ds" Dec 13 13:28:44.672273 kubelet[2617]: E1213 13:28:44.672255 2617 kuberuntime_manager.go:1168] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c60343585f7563f8d92f9f028242cfed77327709ae5bc696f2b7011ba0df5c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hx7ds" Dec 13 13:28:44.672342 kubelet[2617]: E1213 13:28:44.672282 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hx7ds_kube-system(cf27bd25-cb26-447c-8c84-23dc77a1d6bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hx7ds_kube-system(cf27bd25-cb26-447c-8c84-23dc77a1d6bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c60343585f7563f8d92f9f028242cfed77327709ae5bc696f2b7011ba0df5c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hx7ds" podUID="cf27bd25-cb26-447c-8c84-23dc77a1d6bc" Dec 13 13:28:44.739050 systemd[1]: run-netns-cni\x2db74adf41\x2d66f3\x2db022\x2d67dc\x2ddb4af8c8f97a.mount: Deactivated successfully. Dec 13 13:28:44.739158 systemd[1]: run-netns-cni\x2dbdc47e7e\x2d56ea\x2d5983\x2da9d9\x2d602faa3d81d7.mount: Deactivated successfully. Dec 13 13:28:44.739232 systemd[1]: run-netns-cni\x2d1ea24c18\x2dbb03\x2d3646\x2de43f\x2da623ecf3e41f.mount: Deactivated successfully. Dec 13 13:28:44.739307 systemd[1]: run-netns-cni\x2dc8006c56\x2ddcf3\x2d8d02\x2da28d\x2de4e2516f7375.mount: Deactivated successfully. Dec 13 13:28:44.739395 systemd[1]: run-netns-cni\x2ddd03d0c6\x2d2aff\x2d0946\x2da3e8\x2dd45b64a5732d.mount: Deactivated successfully. 
Dec 13 13:28:45.515351 kubelet[2617]: I1213 13:28:45.515300 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f62ed592425827b96a5818a057bd7da73a44a047d6025699007b16edc8dacd5e" Dec 13 13:28:45.516071 containerd[1498]: time="2024-12-13T13:28:45.516039016Z" level=info msg="StopPodSandbox for \"f62ed592425827b96a5818a057bd7da73a44a047d6025699007b16edc8dacd5e\"" Dec 13 13:28:45.516359 containerd[1498]: time="2024-12-13T13:28:45.516237529Z" level=info msg="Ensure that sandbox f62ed592425827b96a5818a057bd7da73a44a047d6025699007b16edc8dacd5e in task-service has been cleanup successfully" Dec 13 13:28:45.518635 systemd[1]: run-netns-cni\x2d613df6e9\x2d2da6\x2d242f\x2d05b7\x2da1e63d9fe0dd.mount: Deactivated successfully. Dec 13 13:28:45.521021 containerd[1498]: time="2024-12-13T13:28:45.520975259Z" level=info msg="TearDown network for sandbox \"f62ed592425827b96a5818a057bd7da73a44a047d6025699007b16edc8dacd5e\" successfully" Dec 13 13:28:45.521021 containerd[1498]: time="2024-12-13T13:28:45.521011107Z" level=info msg="StopPodSandbox for \"f62ed592425827b96a5818a057bd7da73a44a047d6025699007b16edc8dacd5e\" returns successfully" Dec 13 13:28:45.521856 containerd[1498]: time="2024-12-13T13:28:45.521816264Z" level=info msg="StopPodSandbox for \"f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3\"" Dec 13 13:28:45.521950 containerd[1498]: time="2024-12-13T13:28:45.521931802Z" level=info msg="TearDown network for sandbox \"f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3\" successfully" Dec 13 13:28:45.521950 containerd[1498]: time="2024-12-13T13:28:45.521946399Z" level=info msg="StopPodSandbox for \"f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3\" returns successfully" Dec 13 13:28:45.522423 containerd[1498]: time="2024-12-13T13:28:45.522388162Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-sqmg9,Uid:169b145c-9dd2-4ef7-8f30-2acc264f69a4,Namespace:calico-system,Attempt:2,}" Dec 13 13:28:45.523512 containerd[1498]: time="2024-12-13T13:28:45.523238375Z" level=info msg="StopPodSandbox for \"93e64fa12ec2325b5ac143026f4b24fd2ac61166c18a7f41736ba8deb59dee39\"" Dec 13 13:28:45.523512 containerd[1498]: time="2024-12-13T13:28:45.523428262Z" level=info msg="Ensure that sandbox 93e64fa12ec2325b5ac143026f4b24fd2ac61166c18a7f41736ba8deb59dee39 in task-service has been cleanup successfully" Dec 13 13:28:45.524578 kubelet[2617]: I1213 13:28:45.522821 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93e64fa12ec2325b5ac143026f4b24fd2ac61166c18a7f41736ba8deb59dee39" Dec 13 13:28:45.525380 kubelet[2617]: I1213 13:28:45.525264 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c60343585f7563f8d92f9f028242cfed77327709ae5bc696f2b7011ba0df5c7" Dec 13 13:28:45.525513 systemd[1]: run-netns-cni\x2d93c8497c\x2d7c43\x2d9952\x2d45bf\x2d6fa1626b6671.mount: Deactivated successfully. 
Dec 13 13:28:45.526751 containerd[1498]: time="2024-12-13T13:28:45.526708244Z" level=info msg="StopPodSandbox for \"3c60343585f7563f8d92f9f028242cfed77327709ae5bc696f2b7011ba0df5c7\"" Dec 13 13:28:45.526924 containerd[1498]: time="2024-12-13T13:28:45.526907280Z" level=info msg="Ensure that sandbox 3c60343585f7563f8d92f9f028242cfed77327709ae5bc696f2b7011ba0df5c7 in task-service has been cleanup successfully" Dec 13 13:28:45.527633 containerd[1498]: time="2024-12-13T13:28:45.527595446Z" level=info msg="TearDown network for sandbox \"93e64fa12ec2325b5ac143026f4b24fd2ac61166c18a7f41736ba8deb59dee39\" successfully" Dec 13 13:28:45.527633 containerd[1498]: time="2024-12-13T13:28:45.527622477Z" level=info msg="StopPodSandbox for \"93e64fa12ec2325b5ac143026f4b24fd2ac61166c18a7f41736ba8deb59dee39\" returns successfully" Dec 13 13:28:45.528083 containerd[1498]: time="2024-12-13T13:28:45.528048680Z" level=info msg="StopPodSandbox for \"775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9\"" Dec 13 13:28:45.528150 containerd[1498]: time="2024-12-13T13:28:45.528138069Z" level=info msg="TearDown network for sandbox \"775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9\" successfully" Dec 13 13:28:45.528179 containerd[1498]: time="2024-12-13T13:28:45.528151374Z" level=info msg="StopPodSandbox for \"775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9\" returns successfully" Dec 13 13:28:45.528691 containerd[1498]: time="2024-12-13T13:28:45.528584541Z" level=info msg="TearDown network for sandbox \"3c60343585f7563f8d92f9f028242cfed77327709ae5bc696f2b7011ba0df5c7\" successfully" Dec 13 13:28:45.528691 containerd[1498]: time="2024-12-13T13:28:45.528607373Z" level=info msg="StopPodSandbox for \"3c60343585f7563f8d92f9f028242cfed77327709ae5bc696f2b7011ba0df5c7\" returns successfully" Dec 13 13:28:45.528780 containerd[1498]: time="2024-12-13T13:28:45.528721619Z" level=info msg="StopPodSandbox for 
\"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\"" Dec 13 13:28:45.528947 containerd[1498]: time="2024-12-13T13:28:45.528812610Z" level=info msg="TearDown network for sandbox \"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\" successfully" Dec 13 13:28:45.528947 containerd[1498]: time="2024-12-13T13:28:45.528846063Z" level=info msg="StopPodSandbox for \"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\" returns successfully" Dec 13 13:28:45.529151 kubelet[2617]: E1213 13:28:45.529021 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:45.532845 containerd[1498]: time="2024-12-13T13:28:45.529401089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-69bzb,Uid:a034a49a-558f-4b5e-a56a-f248911e85ff,Namespace:kube-system,Attempt:3,}" Dec 13 13:28:45.532845 containerd[1498]: time="2024-12-13T13:28:45.529627496Z" level=info msg="StopPodSandbox for \"985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0\"" Dec 13 13:28:45.532845 containerd[1498]: time="2024-12-13T13:28:45.529704381Z" level=info msg="TearDown network for sandbox \"985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0\" successfully" Dec 13 13:28:45.532845 containerd[1498]: time="2024-12-13T13:28:45.529713969Z" level=info msg="StopPodSandbox for \"985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0\" returns successfully" Dec 13 13:28:45.531111 systemd[1]: run-netns-cni\x2d82060c5d\x2deb8e\x2dc36a\x2d76e3\x2df27ba028962c.mount: Deactivated successfully. 
Dec 13 13:28:45.533245 containerd[1498]: time="2024-12-13T13:28:45.533222752Z" level=info msg="StopPodSandbox for \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\"" Dec 13 13:28:45.533316 containerd[1498]: time="2024-12-13T13:28:45.533298626Z" level=info msg="TearDown network for sandbox \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\" successfully" Dec 13 13:28:45.533316 containerd[1498]: time="2024-12-13T13:28:45.533312752Z" level=info msg="StopPodSandbox for \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\" returns successfully" Dec 13 13:28:45.533660 kubelet[2617]: I1213 13:28:45.533640 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12d18c9c8caf0816c4f0e2be01b939ab0d19bd7c6043694b69e328287df2ae7e" Dec 13 13:28:45.533879 kubelet[2617]: E1213 13:28:45.533719 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:45.534022 containerd[1498]: time="2024-12-13T13:28:45.533989187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hx7ds,Uid:cf27bd25-cb26-447c-8c84-23dc77a1d6bc,Namespace:kube-system,Attempt:3,}" Dec 13 13:28:45.534714 containerd[1498]: time="2024-12-13T13:28:45.534692392Z" level=info msg="StopPodSandbox for \"12d18c9c8caf0816c4f0e2be01b939ab0d19bd7c6043694b69e328287df2ae7e\"" Dec 13 13:28:45.535720 containerd[1498]: time="2024-12-13T13:28:45.535043444Z" level=info msg="Ensure that sandbox 12d18c9c8caf0816c4f0e2be01b939ab0d19bd7c6043694b69e328287df2ae7e in task-service has been cleanup successfully" Dec 13 13:28:45.535932 containerd[1498]: time="2024-12-13T13:28:45.535905438Z" level=info msg="TearDown network for sandbox \"12d18c9c8caf0816c4f0e2be01b939ab0d19bd7c6043694b69e328287df2ae7e\" successfully" Dec 13 13:28:45.536013 containerd[1498]: time="2024-12-13T13:28:45.535980490Z" level=info 
msg="StopPodSandbox for \"12d18c9c8caf0816c4f0e2be01b939ab0d19bd7c6043694b69e328287df2ae7e\" returns successfully" Dec 13 13:28:45.537652 containerd[1498]: time="2024-12-13T13:28:45.537611144Z" level=info msg="StopPodSandbox for \"7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf\"" Dec 13 13:28:45.537735 containerd[1498]: time="2024-12-13T13:28:45.537711553Z" level=info msg="TearDown network for sandbox \"7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf\" successfully" Dec 13 13:28:45.537735 containerd[1498]: time="2024-12-13T13:28:45.537726962Z" level=info msg="StopPodSandbox for \"7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf\" returns successfully" Dec 13 13:28:45.537894 systemd[1]: run-netns-cni\x2d8e6177d5\x2d66d7\x2dae11\x2dd5a1\x2dbb8374bd39ad.mount: Deactivated successfully. Dec 13 13:28:45.539991 kubelet[2617]: I1213 13:28:45.539972 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03c600e6d0728e62b25f6ed67f75a8b294306c981d94ddbebfc34037a6b14765" Dec 13 13:28:45.540325 containerd[1498]: time="2024-12-13T13:28:45.540165708Z" level=info msg="StopPodSandbox for \"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\"" Dec 13 13:28:45.540325 containerd[1498]: time="2024-12-13T13:28:45.540264344Z" level=info msg="TearDown network for sandbox \"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\" successfully" Dec 13 13:28:45.540325 containerd[1498]: time="2024-12-13T13:28:45.540276497Z" level=info msg="StopPodSandbox for \"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\" returns successfully" Dec 13 13:28:45.540713 containerd[1498]: time="2024-12-13T13:28:45.540657034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc76d6748-5xdq6,Uid:27f04941-3371-4139-bdc8-8fa4b3ff5199,Namespace:calico-apiserver,Attempt:3,}" Dec 13 13:28:45.540871 containerd[1498]: time="2024-12-13T13:28:45.540845409Z" level=info 
msg="StopPodSandbox for \"03c600e6d0728e62b25f6ed67f75a8b294306c981d94ddbebfc34037a6b14765\"" Dec 13 13:28:45.541062 containerd[1498]: time="2024-12-13T13:28:45.541039966Z" level=info msg="Ensure that sandbox 03c600e6d0728e62b25f6ed67f75a8b294306c981d94ddbebfc34037a6b14765 in task-service has been cleanup successfully" Dec 13 13:28:45.541532 containerd[1498]: time="2024-12-13T13:28:45.541501305Z" level=info msg="TearDown network for sandbox \"03c600e6d0728e62b25f6ed67f75a8b294306c981d94ddbebfc34037a6b14765\" successfully" Dec 13 13:28:45.541532 containerd[1498]: time="2024-12-13T13:28:45.541519690Z" level=info msg="StopPodSandbox for \"03c600e6d0728e62b25f6ed67f75a8b294306c981d94ddbebfc34037a6b14765\" returns successfully" Dec 13 13:28:45.541725 containerd[1498]: time="2024-12-13T13:28:45.541705410Z" level=info msg="StopPodSandbox for \"9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e\"" Dec 13 13:28:45.541813 containerd[1498]: time="2024-12-13T13:28:45.541783667Z" level=info msg="TearDown network for sandbox \"9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e\" successfully" Dec 13 13:28:45.541813 containerd[1498]: time="2024-12-13T13:28:45.541798506Z" level=info msg="StopPodSandbox for \"9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e\" returns successfully" Dec 13 13:28:45.542001 containerd[1498]: time="2024-12-13T13:28:45.541979246Z" level=info msg="StopPodSandbox for \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\"" Dec 13 13:28:45.542218 containerd[1498]: time="2024-12-13T13:28:45.542122356Z" level=info msg="TearDown network for sandbox \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\" successfully" Dec 13 13:28:45.542218 containerd[1498]: time="2024-12-13T13:28:45.542139238Z" level=info msg="StopPodSandbox for \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\" returns successfully" Dec 13 13:28:45.542521 containerd[1498]: 
time="2024-12-13T13:28:45.542494978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc76d6748-ck574,Uid:4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e,Namespace:calico-apiserver,Attempt:3,}" Dec 13 13:28:45.676251 kubelet[2617]: I1213 13:28:45.676209 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a21907390b05bc41353374ecb2a1e62c9f6ccf90819a76ff634efce9923d4ffa" Dec 13 13:28:45.677321 containerd[1498]: time="2024-12-13T13:28:45.676967003Z" level=info msg="StopPodSandbox for \"a21907390b05bc41353374ecb2a1e62c9f6ccf90819a76ff634efce9923d4ffa\"" Dec 13 13:28:45.677321 containerd[1498]: time="2024-12-13T13:28:45.677166349Z" level=info msg="Ensure that sandbox a21907390b05bc41353374ecb2a1e62c9f6ccf90819a76ff634efce9923d4ffa in task-service has been cleanup successfully" Dec 13 13:28:45.677592 containerd[1498]: time="2024-12-13T13:28:45.677569338Z" level=info msg="TearDown network for sandbox \"a21907390b05bc41353374ecb2a1e62c9f6ccf90819a76ff634efce9923d4ffa\" successfully" Dec 13 13:28:45.677709 containerd[1498]: time="2024-12-13T13:28:45.677668425Z" level=info msg="StopPodSandbox for \"a21907390b05bc41353374ecb2a1e62c9f6ccf90819a76ff634efce9923d4ffa\" returns successfully" Dec 13 13:28:45.678027 containerd[1498]: time="2024-12-13T13:28:45.677994059Z" level=info msg="StopPodSandbox for \"e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015\"" Dec 13 13:28:45.678246 containerd[1498]: time="2024-12-13T13:28:45.678223982Z" level=info msg="TearDown network for sandbox \"e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015\" successfully" Dec 13 13:28:45.678347 containerd[1498]: time="2024-12-13T13:28:45.678246365Z" level=info msg="StopPodSandbox for \"e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015\" returns successfully" Dec 13 13:28:45.679567 containerd[1498]: time="2024-12-13T13:28:45.679546294Z" level=info msg="StopPodSandbox for 
\"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\"" Dec 13 13:28:45.679912 containerd[1498]: time="2024-12-13T13:28:45.679861979Z" level=info msg="TearDown network for sandbox \"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\" successfully" Dec 13 13:28:45.679912 containerd[1498]: time="2024-12-13T13:28:45.679879362Z" level=info msg="StopPodSandbox for \"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\" returns successfully" Dec 13 13:28:45.680568 containerd[1498]: time="2024-12-13T13:28:45.680527343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6579dc67df-9c4m8,Uid:ecc6ca41-b371-43b9-9b23-fe21358ee632,Namespace:calico-system,Attempt:3,}" Dec 13 13:28:45.746527 systemd[1]: run-netns-cni\x2d7fdd8775\x2d19ae\x2d1f94\x2d4f3b\x2d62b009f59dff.mount: Deactivated successfully. Dec 13 13:28:45.746958 systemd[1]: run-netns-cni\x2db511963d\x2dcec3\x2dc61e\x2d5e85\x2d9c6417205a2d.mount: Deactivated successfully. 
Dec 13 13:28:45.763056 containerd[1498]: time="2024-12-13T13:28:45.762961200Z" level=error msg="Failed to destroy network for sandbox \"c419042170b5243872620eb2ea0ae166aa4d099a0b3e7c7b8f5674b6bd8cdc4f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:45.763651 containerd[1498]: time="2024-12-13T13:28:45.763560840Z" level=error msg="encountered an error cleaning up failed sandbox \"c419042170b5243872620eb2ea0ae166aa4d099a0b3e7c7b8f5674b6bd8cdc4f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:45.763651 containerd[1498]: time="2024-12-13T13:28:45.763613830Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc76d6748-5xdq6,Uid:27f04941-3371-4139-bdc8-8fa4b3ff5199,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"c419042170b5243872620eb2ea0ae166aa4d099a0b3e7c7b8f5674b6bd8cdc4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:45.763986 kubelet[2617]: E1213 13:28:45.763951 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c419042170b5243872620eb2ea0ae166aa4d099a0b3e7c7b8f5674b6bd8cdc4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:45.765208 kubelet[2617]: E1213 13:28:45.764094 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"c419042170b5243872620eb2ea0ae166aa4d099a0b3e7c7b8f5674b6bd8cdc4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc76d6748-5xdq6" Dec 13 13:28:45.765208 kubelet[2617]: E1213 13:28:45.764117 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c419042170b5243872620eb2ea0ae166aa4d099a0b3e7c7b8f5674b6bd8cdc4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc76d6748-5xdq6" Dec 13 13:28:45.765208 kubelet[2617]: E1213 13:28:45.764156 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7dc76d6748-5xdq6_calico-apiserver(27f04941-3371-4139-bdc8-8fa4b3ff5199)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7dc76d6748-5xdq6_calico-apiserver(27f04941-3371-4139-bdc8-8fa4b3ff5199)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c419042170b5243872620eb2ea0ae166aa4d099a0b3e7c7b8f5674b6bd8cdc4f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dc76d6748-5xdq6" podUID="27f04941-3371-4139-bdc8-8fa4b3ff5199" Dec 13 13:28:45.766621 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c419042170b5243872620eb2ea0ae166aa4d099a0b3e7c7b8f5674b6bd8cdc4f-shm.mount: Deactivated successfully. 
Dec 13 13:28:45.792451 containerd[1498]: time="2024-12-13T13:28:45.792306668Z" level=error msg="Failed to destroy network for sandbox \"791ebbe1287fbb94c9015cf842176cf962d91a4c8cf3ed6c662a07ca2f0a4e08\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:45.792884 containerd[1498]: time="2024-12-13T13:28:45.792787965Z" level=error msg="encountered an error cleaning up failed sandbox \"791ebbe1287fbb94c9015cf842176cf962d91a4c8cf3ed6c662a07ca2f0a4e08\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:45.793545 containerd[1498]: time="2024-12-13T13:28:45.793517400Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sqmg9,Uid:169b145c-9dd2-4ef7-8f30-2acc264f69a4,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"791ebbe1287fbb94c9015cf842176cf962d91a4c8cf3ed6c662a07ca2f0a4e08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:45.794713 kubelet[2617]: E1213 13:28:45.793854 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"791ebbe1287fbb94c9015cf842176cf962d91a4c8cf3ed6c662a07ca2f0a4e08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:45.794713 kubelet[2617]: E1213 13:28:45.793951 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"791ebbe1287fbb94c9015cf842176cf962d91a4c8cf3ed6c662a07ca2f0a4e08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sqmg9" Dec 13 13:28:45.794713 kubelet[2617]: E1213 13:28:45.793980 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"791ebbe1287fbb94c9015cf842176cf962d91a4c8cf3ed6c662a07ca2f0a4e08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sqmg9" Dec 13 13:28:45.794842 kubelet[2617]: E1213 13:28:45.794017 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sqmg9_calico-system(169b145c-9dd2-4ef7-8f30-2acc264f69a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sqmg9_calico-system(169b145c-9dd2-4ef7-8f30-2acc264f69a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"791ebbe1287fbb94c9015cf842176cf962d91a4c8cf3ed6c662a07ca2f0a4e08\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sqmg9" podUID="169b145c-9dd2-4ef7-8f30-2acc264f69a4" Dec 13 13:28:45.795562 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-791ebbe1287fbb94c9015cf842176cf962d91a4c8cf3ed6c662a07ca2f0a4e08-shm.mount: Deactivated successfully. 
Dec 13 13:28:45.796679 containerd[1498]: time="2024-12-13T13:28:45.796634355Z" level=error msg="Failed to destroy network for sandbox \"3fbd59380b22699393b74ce895db9c2c919b88e686f822ee5c62aa6ae2c20db0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:45.799717 containerd[1498]: time="2024-12-13T13:28:45.798102232Z" level=error msg="encountered an error cleaning up failed sandbox \"3fbd59380b22699393b74ce895db9c2c919b88e686f822ee5c62aa6ae2c20db0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:45.799717 containerd[1498]: time="2024-12-13T13:28:45.798171452Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-69bzb,Uid:a034a49a-558f-4b5e-a56a-f248911e85ff,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"3fbd59380b22699393b74ce895db9c2c919b88e686f822ee5c62aa6ae2c20db0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:45.799395 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3fbd59380b22699393b74ce895db9c2c919b88e686f822ee5c62aa6ae2c20db0-shm.mount: Deactivated successfully.
Dec 13 13:28:45.800145 kubelet[2617]: E1213 13:28:45.798336 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fbd59380b22699393b74ce895db9c2c919b88e686f822ee5c62aa6ae2c20db0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:45.800145 kubelet[2617]: E1213 13:28:45.798379 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fbd59380b22699393b74ce895db9c2c919b88e686f822ee5c62aa6ae2c20db0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-69bzb"
Dec 13 13:28:45.800145 kubelet[2617]: E1213 13:28:45.798396 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fbd59380b22699393b74ce895db9c2c919b88e686f822ee5c62aa6ae2c20db0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-69bzb"
Dec 13 13:28:45.800283 kubelet[2617]: E1213 13:28:45.798437 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-69bzb_kube-system(a034a49a-558f-4b5e-a56a-f248911e85ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-69bzb_kube-system(a034a49a-558f-4b5e-a56a-f248911e85ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3fbd59380b22699393b74ce895db9c2c919b88e686f822ee5c62aa6ae2c20db0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-69bzb" podUID="a034a49a-558f-4b5e-a56a-f248911e85ff"
Dec 13 13:28:45.804961 containerd[1498]: time="2024-12-13T13:28:45.802939719Z" level=error msg="Failed to destroy network for sandbox \"05d9e43a43ac4813fa5ddf60dfed3b7cbfecba045f1fec94667f679ae3450482\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:45.805742 containerd[1498]: time="2024-12-13T13:28:45.805618979Z" level=error msg="encountered an error cleaning up failed sandbox \"05d9e43a43ac4813fa5ddf60dfed3b7cbfecba045f1fec94667f679ae3450482\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:45.805742 containerd[1498]: time="2024-12-13T13:28:45.805669664Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc76d6748-ck574,Uid:4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"05d9e43a43ac4813fa5ddf60dfed3b7cbfecba045f1fec94667f679ae3450482\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:45.805885 kubelet[2617]: E1213 13:28:45.805843 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05d9e43a43ac4813fa5ddf60dfed3b7cbfecba045f1fec94667f679ae3450482\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:45.805925 kubelet[2617]: E1213 13:28:45.805901 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05d9e43a43ac4813fa5ddf60dfed3b7cbfecba045f1fec94667f679ae3450482\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc76d6748-ck574"
Dec 13 13:28:45.805925 kubelet[2617]: E1213 13:28:45.805918 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05d9e43a43ac4813fa5ddf60dfed3b7cbfecba045f1fec94667f679ae3450482\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc76d6748-ck574"
Dec 13 13:28:45.805982 kubelet[2617]: E1213 13:28:45.805956 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7dc76d6748-ck574_calico-apiserver(4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7dc76d6748-ck574_calico-apiserver(4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05d9e43a43ac4813fa5ddf60dfed3b7cbfecba045f1fec94667f679ae3450482\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dc76d6748-ck574" podUID="4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e"
Dec 13 13:28:45.806081 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-05d9e43a43ac4813fa5ddf60dfed3b7cbfecba045f1fec94667f679ae3450482-shm.mount: Deactivated successfully.
Dec 13 13:28:45.824866 containerd[1498]: time="2024-12-13T13:28:45.824801212Z" level=error msg="Failed to destroy network for sandbox \"9a39e2b6726809bb08eaf739fd4c36fb35546b42e10eef2b487018652bd1a6ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:45.825294 containerd[1498]: time="2024-12-13T13:28:45.825265247Z" level=error msg="encountered an error cleaning up failed sandbox \"9a39e2b6726809bb08eaf739fd4c36fb35546b42e10eef2b487018652bd1a6ad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:45.825352 containerd[1498]: time="2024-12-13T13:28:45.825323828Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6579dc67df-9c4m8,Uid:ecc6ca41-b371-43b9-9b23-fe21358ee632,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"9a39e2b6726809bb08eaf739fd4c36fb35546b42e10eef2b487018652bd1a6ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:45.825574 kubelet[2617]: E1213 13:28:45.825539 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a39e2b6726809bb08eaf739fd4c36fb35546b42e10eef2b487018652bd1a6ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:45.825645 kubelet[2617]: E1213 13:28:45.825611 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a39e2b6726809bb08eaf739fd4c36fb35546b42e10eef2b487018652bd1a6ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6579dc67df-9c4m8"
Dec 13 13:28:45.825645 kubelet[2617]: E1213 13:28:45.825630 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a39e2b6726809bb08eaf739fd4c36fb35546b42e10eef2b487018652bd1a6ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6579dc67df-9c4m8"
Dec 13 13:28:45.825715 kubelet[2617]: E1213 13:28:45.825671 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6579dc67df-9c4m8_calico-system(ecc6ca41-b371-43b9-9b23-fe21358ee632)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6579dc67df-9c4m8_calico-system(ecc6ca41-b371-43b9-9b23-fe21358ee632)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a39e2b6726809bb08eaf739fd4c36fb35546b42e10eef2b487018652bd1a6ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6579dc67df-9c4m8" podUID="ecc6ca41-b371-43b9-9b23-fe21358ee632"
Dec 13 13:28:45.826402 containerd[1498]: time="2024-12-13T13:28:45.826350522Z" level=error msg="Failed to destroy network for sandbox \"9cdc64baff2682d288b75e881389f1866d65efdaf8bfe5bce2ba8357ea190b3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:45.829435 containerd[1498]: time="2024-12-13T13:28:45.829389060Z" level=error msg="encountered an error cleaning up failed sandbox \"9cdc64baff2682d288b75e881389f1866d65efdaf8bfe5bce2ba8357ea190b3d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:45.829555 containerd[1498]: time="2024-12-13T13:28:45.829447239Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hx7ds,Uid:cf27bd25-cb26-447c-8c84-23dc77a1d6bc,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"9cdc64baff2682d288b75e881389f1866d65efdaf8bfe5bce2ba8357ea190b3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:45.829720 kubelet[2617]: E1213 13:28:45.829670 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cdc64baff2682d288b75e881389f1866d65efdaf8bfe5bce2ba8357ea190b3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:45.829776 kubelet[2617]: E1213 13:28:45.829746 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cdc64baff2682d288b75e881389f1866d65efdaf8bfe5bce2ba8357ea190b3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hx7ds"
Dec 13 13:28:45.829776 kubelet[2617]: E1213 13:28:45.829768 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cdc64baff2682d288b75e881389f1866d65efdaf8bfe5bce2ba8357ea190b3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hx7ds"
Dec 13 13:28:45.829911 kubelet[2617]: E1213 13:28:45.829820 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hx7ds_kube-system(cf27bd25-cb26-447c-8c84-23dc77a1d6bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hx7ds_kube-system(cf27bd25-cb26-447c-8c84-23dc77a1d6bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9cdc64baff2682d288b75e881389f1866d65efdaf8bfe5bce2ba8357ea190b3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hx7ds" podUID="cf27bd25-cb26-447c-8c84-23dc77a1d6bc"
Dec 13 13:28:46.680152 kubelet[2617]: I1213 13:28:46.680118 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05d9e43a43ac4813fa5ddf60dfed3b7cbfecba045f1fec94667f679ae3450482"
Dec 13 13:28:46.681082 containerd[1498]: time="2024-12-13T13:28:46.680886920Z" level=info msg="StopPodSandbox for \"05d9e43a43ac4813fa5ddf60dfed3b7cbfecba045f1fec94667f679ae3450482\""
Dec 13 13:28:46.681082 containerd[1498]: time="2024-12-13T13:28:46.681060136Z" level=info msg="Ensure that sandbox 05d9e43a43ac4813fa5ddf60dfed3b7cbfecba045f1fec94667f679ae3450482 in task-service has been cleanup successfully"
Dec 13 13:28:46.681410 containerd[1498]: time="2024-12-13T13:28:46.681214677Z" level=info msg="TearDown network for sandbox \"05d9e43a43ac4813fa5ddf60dfed3b7cbfecba045f1fec94667f679ae3450482\" successfully"
Dec 13 13:28:46.681410 containerd[1498]: time="2024-12-13T13:28:46.681226149Z" level=info msg="StopPodSandbox for \"05d9e43a43ac4813fa5ddf60dfed3b7cbfecba045f1fec94667f679ae3450482\" returns successfully"
Dec 13 13:28:46.681656 containerd[1498]: time="2024-12-13T13:28:46.681637554Z" level=info msg="StopPodSandbox for \"03c600e6d0728e62b25f6ed67f75a8b294306c981d94ddbebfc34037a6b14765\""
Dec 13 13:28:46.681980 containerd[1498]: time="2024-12-13T13:28:46.681956144Z" level=info msg="TearDown network for sandbox \"03c600e6d0728e62b25f6ed67f75a8b294306c981d94ddbebfc34037a6b14765\" successfully"
Dec 13 13:28:46.681980 containerd[1498]: time="2024-12-13T13:28:46.681975460Z" level=info msg="StopPodSandbox for \"03c600e6d0728e62b25f6ed67f75a8b294306c981d94ddbebfc34037a6b14765\" returns successfully"
Dec 13 13:28:46.682496 containerd[1498]: time="2024-12-13T13:28:46.682474410Z" level=info msg="StopPodSandbox for \"9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e\""
Dec 13 13:28:46.682588 containerd[1498]: time="2024-12-13T13:28:46.682571012Z" level=info msg="TearDown network for sandbox \"9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e\" successfully"
Dec 13 13:28:46.682621 containerd[1498]: time="2024-12-13T13:28:46.682587043Z" level=info msg="StopPodSandbox for \"9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e\" returns successfully"
Dec 13 13:28:46.683037 containerd[1498]: time="2024-12-13T13:28:46.682952160Z" level=info msg="StopPodSandbox for \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\""
Dec 13 13:28:46.683152 kubelet[2617]: I1213 13:28:46.683124 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a39e2b6726809bb08eaf739fd4c36fb35546b42e10eef2b487018652bd1a6ad"
Dec 13 13:28:46.683711 containerd[1498]: time="2024-12-13T13:28:46.683688659Z" level=info msg="StopPodSandbox for \"9a39e2b6726809bb08eaf739fd4c36fb35546b42e10eef2b487018652bd1a6ad\""
Dec 13 13:28:46.684466 containerd[1498]: time="2024-12-13T13:28:46.684422932Z" level=info msg="Ensure that sandbox 9a39e2b6726809bb08eaf739fd4c36fb35546b42e10eef2b487018652bd1a6ad in task-service has been cleanup successfully"
Dec 13 13:28:46.685050 containerd[1498]: time="2024-12-13T13:28:46.685016300Z" level=info msg="TearDown network for sandbox \"9a39e2b6726809bb08eaf739fd4c36fb35546b42e10eef2b487018652bd1a6ad\" successfully"
Dec 13 13:28:46.685050 containerd[1498]: time="2024-12-13T13:28:46.685038662Z" level=info msg="StopPodSandbox for \"9a39e2b6726809bb08eaf739fd4c36fb35546b42e10eef2b487018652bd1a6ad\" returns successfully"
Dec 13 13:28:46.685576 containerd[1498]: time="2024-12-13T13:28:46.685541990Z" level=info msg="StopPodSandbox for \"a21907390b05bc41353374ecb2a1e62c9f6ccf90819a76ff634efce9923d4ffa\""
Dec 13 13:28:46.685989 containerd[1498]: time="2024-12-13T13:28:46.685969596Z" level=info msg="TearDown network for sandbox \"a21907390b05bc41353374ecb2a1e62c9f6ccf90819a76ff634efce9923d4ffa\" successfully"
Dec 13 13:28:46.685989 containerd[1498]: time="2024-12-13T13:28:46.685984634Z" level=info msg="StopPodSandbox for \"a21907390b05bc41353374ecb2a1e62c9f6ccf90819a76ff634efce9923d4ffa\" returns successfully"
Dec 13 13:28:46.712284 containerd[1498]: time="2024-12-13T13:28:46.711286117Z" level=info msg="StopPodSandbox for \"e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015\""
Dec 13 13:28:46.712284 containerd[1498]: time="2024-12-13T13:28:46.711791860Z" level=info msg="TearDown network for sandbox \"e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015\" successfully"
Dec 13 13:28:46.712284 containerd[1498]: time="2024-12-13T13:28:46.711812429Z" level=info msg="StopPodSandbox for \"e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015\" returns successfully"
Dec 13 13:28:46.712761 containerd[1498]: time="2024-12-13T13:28:46.712707335Z" level=info msg="StopPodSandbox for \"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\""
Dec 13 13:28:46.712999 containerd[1498]: time="2024-12-13T13:28:46.712952878Z" level=info msg="TearDown network for sandbox \"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\" successfully"
Dec 13 13:28:46.712999 containerd[1498]: time="2024-12-13T13:28:46.712973777Z" level=info msg="StopPodSandbox for \"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\" returns successfully"
Dec 13 13:28:46.713569 containerd[1498]: time="2024-12-13T13:28:46.713536758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6579dc67df-9c4m8,Uid:ecc6ca41-b371-43b9-9b23-fe21358ee632,Namespace:calico-system,Attempt:4,}"
Dec 13 13:28:46.719231 containerd[1498]: time="2024-12-13T13:28:46.719197544Z" level=info msg="TearDown network for sandbox \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\" successfully"
Dec 13 13:28:46.719719 containerd[1498]: time="2024-12-13T13:28:46.719226408Z" level=info msg="StopPodSandbox for \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\" returns successfully"
Dec 13 13:28:46.720499 containerd[1498]: time="2024-12-13T13:28:46.720471654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc76d6748-ck574,Uid:4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e,Namespace:calico-apiserver,Attempt:4,}"
Dec 13 13:28:46.723738 kubelet[2617]: I1213 13:28:46.723692 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="791ebbe1287fbb94c9015cf842176cf962d91a4c8cf3ed6c662a07ca2f0a4e08"
Dec 13 13:28:46.724660 containerd[1498]: time="2024-12-13T13:28:46.724636121Z" level=info msg="StopPodSandbox for \"791ebbe1287fbb94c9015cf842176cf962d91a4c8cf3ed6c662a07ca2f0a4e08\""
Dec 13 13:28:46.724877 containerd[1498]: time="2024-12-13T13:28:46.724852980Z" level=info msg="Ensure that sandbox 791ebbe1287fbb94c9015cf842176cf962d91a4c8cf3ed6c662a07ca2f0a4e08 in task-service has been cleanup successfully"
Dec 13 13:28:46.725859 containerd[1498]: time="2024-12-13T13:28:46.725707740Z" level=info msg="TearDown network for sandbox \"791ebbe1287fbb94c9015cf842176cf962d91a4c8cf3ed6c662a07ca2f0a4e08\" successfully"
Dec 13 13:28:46.725859 containerd[1498]: time="2024-12-13T13:28:46.725727106Z" level=info msg="StopPodSandbox for \"791ebbe1287fbb94c9015cf842176cf962d91a4c8cf3ed6c662a07ca2f0a4e08\" returns successfully"
Dec 13 13:28:46.726233 containerd[1498]: time="2024-12-13T13:28:46.726175972Z" level=info msg="StopPodSandbox for \"f62ed592425827b96a5818a057bd7da73a44a047d6025699007b16edc8dacd5e\""
Dec 13 13:28:46.726502 kubelet[2617]: I1213 13:28:46.726393 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fbd59380b22699393b74ce895db9c2c919b88e686f822ee5c62aa6ae2c20db0"
Dec 13 13:28:46.726803 containerd[1498]: time="2024-12-13T13:28:46.726773308Z" level=info msg="StopPodSandbox for \"3fbd59380b22699393b74ce895db9c2c919b88e686f822ee5c62aa6ae2c20db0\""
Dec 13 13:28:46.734800 kubelet[2617]: I1213 13:28:46.734769 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cdc64baff2682d288b75e881389f1866d65efdaf8bfe5bce2ba8357ea190b3d"
Dec 13 13:28:46.738630 systemd[1]: run-netns-cni\x2d38bd544e\x2d257f\x2d21c4\x2da507\x2d592263a3558d.mount: Deactivated successfully.
Dec 13 13:28:46.738743 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9a39e2b6726809bb08eaf739fd4c36fb35546b42e10eef2b487018652bd1a6ad-shm.mount: Deactivated successfully.
Dec 13 13:28:46.738824 systemd[1]: run-netns-cni\x2dfff4b8f7\x2df7ff\x2dedc4\x2dcf51\x2d92959a7861df.mount: Deactivated successfully.
Dec 13 13:28:46.739626 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9cdc64baff2682d288b75e881389f1866d65efdaf8bfe5bce2ba8357ea190b3d-shm.mount: Deactivated successfully.
Dec 13 13:28:46.739747 systemd[1]: run-netns-cni\x2d07d274a7\x2dccf0\x2d908b\x2db0c7\x2d71e5381bebff.mount: Deactivated successfully.
Dec 13 13:28:46.750948 containerd[1498]: time="2024-12-13T13:28:46.726794688Z" level=info msg="TearDown network for sandbox \"f62ed592425827b96a5818a057bd7da73a44a047d6025699007b16edc8dacd5e\" successfully"
Dec 13 13:28:46.750948 containerd[1498]: time="2024-12-13T13:28:46.750933279Z" level=info msg="StopPodSandbox for \"f62ed592425827b96a5818a057bd7da73a44a047d6025699007b16edc8dacd5e\" returns successfully"
Dec 13 13:28:46.751104 containerd[1498]: time="2024-12-13T13:28:46.726932497Z" level=info msg="Ensure that sandbox 3fbd59380b22699393b74ce895db9c2c919b88e686f822ee5c62aa6ae2c20db0 in task-service has been cleanup successfully"
Dec 13 13:28:46.753285 kubelet[2617]: I1213 13:28:46.753259 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c419042170b5243872620eb2ea0ae166aa4d099a0b3e7c7b8f5674b6bd8cdc4f"
Dec 13 13:28:46.754035 containerd[1498]: time="2024-12-13T13:28:46.754007993Z" level=info msg="TearDown network for sandbox \"3fbd59380b22699393b74ce895db9c2c919b88e686f822ee5c62aa6ae2c20db0\" successfully"
Dec 13 13:28:46.754358 systemd[1]: run-netns-cni\x2d5661b365\x2d242b\x2dba99\x2d7c97\x2d1bd11ae9f7eb.mount: Deactivated successfully.
Dec 13 13:28:46.755098 containerd[1498]: time="2024-12-13T13:28:46.754030926Z" level=info msg="StopPodSandbox for \"3fbd59380b22699393b74ce895db9c2c919b88e686f822ee5c62aa6ae2c20db0\" returns successfully"
Dec 13 13:28:46.755146 containerd[1498]: time="2024-12-13T13:28:46.735818631Z" level=info msg="StopPodSandbox for \"9cdc64baff2682d288b75e881389f1866d65efdaf8bfe5bce2ba8357ea190b3d\""
Dec 13 13:28:46.755480 containerd[1498]: time="2024-12-13T13:28:46.755452965Z" level=info msg="Ensure that sandbox 9cdc64baff2682d288b75e881389f1866d65efdaf8bfe5bce2ba8357ea190b3d in task-service has been cleanup successfully"
Dec 13 13:28:46.755724 containerd[1498]: time="2024-12-13T13:28:46.755698528Z" level=info msg="TearDown network for sandbox \"9cdc64baff2682d288b75e881389f1866d65efdaf8bfe5bce2ba8357ea190b3d\" successfully"
Dec 13 13:28:46.758149 containerd[1498]: time="2024-12-13T13:28:46.755720519Z" level=info msg="StopPodSandbox for \"9cdc64baff2682d288b75e881389f1866d65efdaf8bfe5bce2ba8357ea190b3d\" returns successfully"
Dec 13 13:28:46.758512 containerd[1498]: time="2024-12-13T13:28:46.758469148Z" level=info msg="StopPodSandbox for \"c419042170b5243872620eb2ea0ae166aa4d099a0b3e7c7b8f5674b6bd8cdc4f\""
Dec 13 13:28:46.758688 containerd[1498]: time="2024-12-13T13:28:46.758651692Z" level=info msg="Ensure that sandbox c419042170b5243872620eb2ea0ae166aa4d099a0b3e7c7b8f5674b6bd8cdc4f in task-service has been cleanup successfully"
Dec 13 13:28:46.759476 systemd[1]: run-netns-cni\x2d0ce22467\x2d1bee\x2dcc05\x2d51c7\x2dc7082b8bd1da.mount: Deactivated successfully.
Dec 13 13:28:46.763219 systemd[1]: run-netns-cni\x2db9d7ba5a\x2dfcf2\x2d7e6e\x2dccf6\x2deb67c72505b7.mount: Deactivated successfully.
Dec 13 13:28:46.764462 containerd[1498]: time="2024-12-13T13:28:46.764435429Z" level=info msg="TearDown network for sandbox \"c419042170b5243872620eb2ea0ae166aa4d099a0b3e7c7b8f5674b6bd8cdc4f\" successfully"
Dec 13 13:28:46.764668 containerd[1498]: time="2024-12-13T13:28:46.764583568Z" level=info msg="StopPodSandbox for \"c419042170b5243872620eb2ea0ae166aa4d099a0b3e7c7b8f5674b6bd8cdc4f\" returns successfully"
Dec 13 13:28:46.766582 containerd[1498]: time="2024-12-13T13:28:46.766534505Z" level=info msg="StopPodSandbox for \"f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3\""
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.766778915Z" level=info msg="TearDown network for sandbox \"f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3\" successfully"
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.766790707Z" level=info msg="StopPodSandbox for \"f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3\" returns successfully"
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.766988089Z" level=info msg="StopPodSandbox for \"93e64fa12ec2325b5ac143026f4b24fd2ac61166c18a7f41736ba8deb59dee39\""
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.767058642Z" level=info msg="TearDown network for sandbox \"93e64fa12ec2325b5ac143026f4b24fd2ac61166c18a7f41736ba8deb59dee39\" successfully"
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.767067188Z" level=info msg="StopPodSandbox for \"93e64fa12ec2325b5ac143026f4b24fd2ac61166c18a7f41736ba8deb59dee39\" returns successfully"
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.767249001Z" level=info msg="StopPodSandbox for \"775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9\""
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.767324353Z" level=info msg="TearDown network for sandbox \"775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9\" successfully"
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.767333570Z" level=info msg="StopPodSandbox for \"775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9\" returns successfully"
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.767366162Z" level=info msg="StopPodSandbox for \"3c60343585f7563f8d92f9f028242cfed77327709ae5bc696f2b7011ba0df5c7\""
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.767452765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sqmg9,Uid:169b145c-9dd2-4ef7-8f30-2acc264f69a4,Namespace:calico-system,Attempt:3,}"
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.767464056Z" level=info msg="StopPodSandbox for \"12d18c9c8caf0816c4f0e2be01b939ab0d19bd7c6043694b69e328287df2ae7e\""
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.767565808Z" level=info msg="TearDown network for sandbox \"12d18c9c8caf0816c4f0e2be01b939ab0d19bd7c6043694b69e328287df2ae7e\" successfully"
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.767613768Z" level=info msg="StopPodSandbox for \"12d18c9c8caf0816c4f0e2be01b939ab0d19bd7c6043694b69e328287df2ae7e\" returns successfully"
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.767640218Z" level=info msg="StopPodSandbox for \"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\""
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.767812954Z" level=info msg="TearDown network for sandbox \"3c60343585f7563f8d92f9f028242cfed77327709ae5bc696f2b7011ba0df5c7\" successfully"
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.769561228Z" level=info msg="StopPodSandbox for \"3c60343585f7563f8d92f9f028242cfed77327709ae5bc696f2b7011ba0df5c7\" returns successfully"
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.767845495Z" level=info msg="StopPodSandbox for \"7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf\""
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.769698556Z" level=info msg="TearDown network for sandbox \"7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf\" successfully"
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.769707834Z" level=info msg="StopPodSandbox for \"7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf\" returns successfully"
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.769540258Z" level=info msg="TearDown network for sandbox \"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\" successfully"
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.769735826Z" level=info msg="StopPodSandbox for \"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\" returns successfully"
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.770072842Z" level=info msg="StopPodSandbox for \"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\""
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.770130940Z" level=info msg="StopPodSandbox for \"985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0\""
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.770158042Z" level=info msg="TearDown network for sandbox \"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\" successfully"
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.770167861Z" level=info msg="StopPodSandbox for \"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\" returns successfully"
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.770221902Z" level=info msg="TearDown network for sandbox \"985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0\" successfully"
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.770946507Z" level=info msg="StopPodSandbox for \"985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0\" returns successfully"
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.770230118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-69bzb,Uid:a034a49a-558f-4b5e-a56a-f248911e85ff,Namespace:kube-system,Attempt:4,}"
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.771242606Z" level=info msg="StopPodSandbox for \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\""
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.771336052Z" level=info msg="TearDown network for sandbox \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\" successfully"
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.771344858Z" level=info msg="StopPodSandbox for \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\" returns successfully"
Dec 13 13:28:46.772039 containerd[1498]: time="2024-12-13T13:28:46.771400242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc76d6748-5xdq6,Uid:27f04941-3371-4139-bdc8-8fa4b3ff5199,Namespace:calico-apiserver,Attempt:4,}"
Dec 13 13:28:46.773156 kubelet[2617]: E1213 13:28:46.769880 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:46.773156 kubelet[2617]: E1213 13:28:46.771675 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:28:46.775078 containerd[1498]: time="2024-12-13T13:28:46.774799948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hx7ds,Uid:cf27bd25-cb26-447c-8c84-23dc77a1d6bc,Namespace:kube-system,Attempt:4,}"
Dec 13 13:28:46.817183 containerd[1498]: time="2024-12-13T13:28:46.817121650Z" level=error msg="Failed to destroy network for sandbox \"9968b7ebe76d6d0fe39c5765a6af533ade26da0e24d2a38d4ff05eb1a792bcec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:46.817768 containerd[1498]: time="2024-12-13T13:28:46.817738412Z" level=error msg="encountered an error cleaning up failed sandbox \"9968b7ebe76d6d0fe39c5765a6af533ade26da0e24d2a38d4ff05eb1a792bcec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:46.817813 containerd[1498]: time="2024-12-13T13:28:46.817795860Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6579dc67df-9c4m8,Uid:ecc6ca41-b371-43b9-9b23-fe21358ee632,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"9968b7ebe76d6d0fe39c5765a6af533ade26da0e24d2a38d4ff05eb1a792bcec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:46.818054 kubelet[2617]: E1213 13:28:46.818013 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9968b7ebe76d6d0fe39c5765a6af533ade26da0e24d2a38d4ff05eb1a792bcec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:46.818113 kubelet[2617]: E1213 13:28:46.818073 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9968b7ebe76d6d0fe39c5765a6af533ade26da0e24d2a38d4ff05eb1a792bcec\": plugin type=\"calico\"
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6579dc67df-9c4m8" Dec 13 13:28:46.818113 kubelet[2617]: E1213 13:28:46.818091 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9968b7ebe76d6d0fe39c5765a6af533ade26da0e24d2a38d4ff05eb1a792bcec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6579dc67df-9c4m8" Dec 13 13:28:46.818199 kubelet[2617]: E1213 13:28:46.818128 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6579dc67df-9c4m8_calico-system(ecc6ca41-b371-43b9-9b23-fe21358ee632)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6579dc67df-9c4m8_calico-system(ecc6ca41-b371-43b9-9b23-fe21358ee632)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9968b7ebe76d6d0fe39c5765a6af533ade26da0e24d2a38d4ff05eb1a792bcec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6579dc67df-9c4m8" podUID="ecc6ca41-b371-43b9-9b23-fe21358ee632" Dec 13 13:28:46.836639 containerd[1498]: time="2024-12-13T13:28:46.836583850Z" level=error msg="Failed to destroy network for sandbox \"71ea7f0038939e4b171e1a0e2012eaedcc767de193ce9d3dce493ec28eaae007\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:46.837174 
containerd[1498]: time="2024-12-13T13:28:46.837146490Z" level=error msg="encountered an error cleaning up failed sandbox \"71ea7f0038939e4b171e1a0e2012eaedcc767de193ce9d3dce493ec28eaae007\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:46.837235 containerd[1498]: time="2024-12-13T13:28:46.837212474Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc76d6748-ck574,Uid:4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"71ea7f0038939e4b171e1a0e2012eaedcc767de193ce9d3dce493ec28eaae007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:46.837476 kubelet[2617]: E1213 13:28:46.837440 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71ea7f0038939e4b171e1a0e2012eaedcc767de193ce9d3dce493ec28eaae007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:46.837570 kubelet[2617]: E1213 13:28:46.837497 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71ea7f0038939e4b171e1a0e2012eaedcc767de193ce9d3dce493ec28eaae007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc76d6748-ck574" Dec 13 13:28:46.837570 kubelet[2617]: E1213 13:28:46.837516 2617 
kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71ea7f0038939e4b171e1a0e2012eaedcc767de193ce9d3dce493ec28eaae007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc76d6748-ck574" Dec 13 13:28:46.837631 kubelet[2617]: E1213 13:28:46.837557 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7dc76d6748-ck574_calico-apiserver(4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7dc76d6748-ck574_calico-apiserver(4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71ea7f0038939e4b171e1a0e2012eaedcc767de193ce9d3dce493ec28eaae007\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dc76d6748-ck574" podUID="4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e" Dec 13 13:28:47.423540 containerd[1498]: time="2024-12-13T13:28:47.423401816Z" level=error msg="Failed to destroy network for sandbox \"467335aa5308b9bcee740ee8cc05bc1acca05f34e9eb3221e3973639b5124d80\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:47.423790 containerd[1498]: time="2024-12-13T13:28:47.423774237Z" level=error msg="encountered an error cleaning up failed sandbox \"467335aa5308b9bcee740ee8cc05bc1acca05f34e9eb3221e3973639b5124d80\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:47.423880 containerd[1498]: time="2024-12-13T13:28:47.423823209Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sqmg9,Uid:169b145c-9dd2-4ef7-8f30-2acc264f69a4,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"467335aa5308b9bcee740ee8cc05bc1acca05f34e9eb3221e3973639b5124d80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:47.424089 kubelet[2617]: E1213 13:28:47.424048 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"467335aa5308b9bcee740ee8cc05bc1acca05f34e9eb3221e3973639b5124d80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:47.424183 kubelet[2617]: E1213 13:28:47.424109 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"467335aa5308b9bcee740ee8cc05bc1acca05f34e9eb3221e3973639b5124d80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sqmg9" Dec 13 13:28:47.424183 kubelet[2617]: E1213 13:28:47.424131 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"467335aa5308b9bcee740ee8cc05bc1acca05f34e9eb3221e3973639b5124d80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sqmg9" Dec 13 13:28:47.424242 kubelet[2617]: E1213 13:28:47.424172 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sqmg9_calico-system(169b145c-9dd2-4ef7-8f30-2acc264f69a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sqmg9_calico-system(169b145c-9dd2-4ef7-8f30-2acc264f69a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"467335aa5308b9bcee740ee8cc05bc1acca05f34e9eb3221e3973639b5124d80\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sqmg9" podUID="169b145c-9dd2-4ef7-8f30-2acc264f69a4" Dec 13 13:28:47.522882 containerd[1498]: time="2024-12-13T13:28:47.522725256Z" level=error msg="Failed to destroy network for sandbox \"42b15368a3972974a13da9738e60266115d2d472ae80bab1d155ea036d1797f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:47.523134 containerd[1498]: time="2024-12-13T13:28:47.523107606Z" level=error msg="encountered an error cleaning up failed sandbox \"42b15368a3972974a13da9738e60266115d2d472ae80bab1d155ea036d1797f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:47.523187 containerd[1498]: time="2024-12-13T13:28:47.523163742Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-69bzb,Uid:a034a49a-558f-4b5e-a56a-f248911e85ff,Namespace:kube-system,Attempt:4,} failed, error" 
error="failed to setup network for sandbox \"42b15368a3972974a13da9738e60266115d2d472ae80bab1d155ea036d1797f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:47.523637 kubelet[2617]: E1213 13:28:47.523396 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42b15368a3972974a13da9738e60266115d2d472ae80bab1d155ea036d1797f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:47.523637 kubelet[2617]: E1213 13:28:47.523457 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42b15368a3972974a13da9738e60266115d2d472ae80bab1d155ea036d1797f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-69bzb" Dec 13 13:28:47.523637 kubelet[2617]: E1213 13:28:47.523475 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42b15368a3972974a13da9738e60266115d2d472ae80bab1d155ea036d1797f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-69bzb" Dec 13 13:28:47.523777 kubelet[2617]: E1213 13:28:47.523510 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-69bzb_kube-system(a034a49a-558f-4b5e-a56a-f248911e85ff)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"coredns-6f6b679f8f-69bzb_kube-system(a034a49a-558f-4b5e-a56a-f248911e85ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42b15368a3972974a13da9738e60266115d2d472ae80bab1d155ea036d1797f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-69bzb" podUID="a034a49a-558f-4b5e-a56a-f248911e85ff" Dec 13 13:28:47.529306 containerd[1498]: time="2024-12-13T13:28:47.529124931Z" level=error msg="Failed to destroy network for sandbox \"e6c5d9e7b015f5a650b35d655ec1b0edfbb0fd031d19f5721292e08a6ccdde2b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:47.529713 containerd[1498]: time="2024-12-13T13:28:47.529678173Z" level=error msg="encountered an error cleaning up failed sandbox \"e6c5d9e7b015f5a650b35d655ec1b0edfbb0fd031d19f5721292e08a6ccdde2b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:47.529769 containerd[1498]: time="2024-12-13T13:28:47.529748325Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc76d6748-5xdq6,Uid:27f04941-3371-4139-bdc8-8fa4b3ff5199,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"e6c5d9e7b015f5a650b35d655ec1b0edfbb0fd031d19f5721292e08a6ccdde2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:47.530345 kubelet[2617]: E1213 13:28:47.529965 2617 log.go:32] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6c5d9e7b015f5a650b35d655ec1b0edfbb0fd031d19f5721292e08a6ccdde2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:47.530345 kubelet[2617]: E1213 13:28:47.530035 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6c5d9e7b015f5a650b35d655ec1b0edfbb0fd031d19f5721292e08a6ccdde2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc76d6748-5xdq6" Dec 13 13:28:47.530345 kubelet[2617]: E1213 13:28:47.530054 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6c5d9e7b015f5a650b35d655ec1b0edfbb0fd031d19f5721292e08a6ccdde2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc76d6748-5xdq6" Dec 13 13:28:47.530498 kubelet[2617]: E1213 13:28:47.530094 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7dc76d6748-5xdq6_calico-apiserver(27f04941-3371-4139-bdc8-8fa4b3ff5199)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7dc76d6748-5xdq6_calico-apiserver(27f04941-3371-4139-bdc8-8fa4b3ff5199)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6c5d9e7b015f5a650b35d655ec1b0edfbb0fd031d19f5721292e08a6ccdde2b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dc76d6748-5xdq6" podUID="27f04941-3371-4139-bdc8-8fa4b3ff5199" Dec 13 13:28:47.549192 containerd[1498]: time="2024-12-13T13:28:47.549144691Z" level=error msg="Failed to destroy network for sandbox \"b746836ca808f3188e0898d7546986cd7f27c26abaf84ca644ce6ca05b8f0f76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:47.549721 containerd[1498]: time="2024-12-13T13:28:47.549680280Z" level=error msg="encountered an error cleaning up failed sandbox \"b746836ca808f3188e0898d7546986cd7f27c26abaf84ca644ce6ca05b8f0f76\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:47.549865 containerd[1498]: time="2024-12-13T13:28:47.549739301Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hx7ds,Uid:cf27bd25-cb26-447c-8c84-23dc77a1d6bc,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"b746836ca808f3188e0898d7546986cd7f27c26abaf84ca644ce6ca05b8f0f76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:47.549997 kubelet[2617]: E1213 13:28:47.549957 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b746836ca808f3188e0898d7546986cd7f27c26abaf84ca644ce6ca05b8f0f76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Dec 13 13:28:47.550088 kubelet[2617]: E1213 13:28:47.550018 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b746836ca808f3188e0898d7546986cd7f27c26abaf84ca644ce6ca05b8f0f76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hx7ds" Dec 13 13:28:47.550088 kubelet[2617]: E1213 13:28:47.550036 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b746836ca808f3188e0898d7546986cd7f27c26abaf84ca644ce6ca05b8f0f76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hx7ds" Dec 13 13:28:47.550151 kubelet[2617]: E1213 13:28:47.550076 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hx7ds_kube-system(cf27bd25-cb26-447c-8c84-23dc77a1d6bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hx7ds_kube-system(cf27bd25-cb26-447c-8c84-23dc77a1d6bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b746836ca808f3188e0898d7546986cd7f27c26abaf84ca644ce6ca05b8f0f76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hx7ds" podUID="cf27bd25-cb26-447c-8c84-23dc77a1d6bc" Dec 13 13:28:47.739538 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9968b7ebe76d6d0fe39c5765a6af533ade26da0e24d2a38d4ff05eb1a792bcec-shm.mount: Deactivated successfully. 
Dec 13 13:28:47.768872 kubelet[2617]: I1213 13:28:47.768820 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6c5d9e7b015f5a650b35d655ec1b0edfbb0fd031d19f5721292e08a6ccdde2b" Dec 13 13:28:47.769508 containerd[1498]: time="2024-12-13T13:28:47.769462192Z" level=info msg="StopPodSandbox for \"e6c5d9e7b015f5a650b35d655ec1b0edfbb0fd031d19f5721292e08a6ccdde2b\"" Dec 13 13:28:47.770004 containerd[1498]: time="2024-12-13T13:28:47.769710360Z" level=info msg="Ensure that sandbox e6c5d9e7b015f5a650b35d655ec1b0edfbb0fd031d19f5721292e08a6ccdde2b in task-service has been cleanup successfully" Dec 13 13:28:47.770516 containerd[1498]: time="2024-12-13T13:28:47.770449813Z" level=info msg="TearDown network for sandbox \"e6c5d9e7b015f5a650b35d655ec1b0edfbb0fd031d19f5721292e08a6ccdde2b\" successfully" Dec 13 13:28:47.770516 containerd[1498]: time="2024-12-13T13:28:47.770470111Z" level=info msg="StopPodSandbox for \"e6c5d9e7b015f5a650b35d655ec1b0edfbb0fd031d19f5721292e08a6ccdde2b\" returns successfully" Dec 13 13:28:47.771221 containerd[1498]: time="2024-12-13T13:28:47.770899880Z" level=info msg="StopPodSandbox for \"c419042170b5243872620eb2ea0ae166aa4d099a0b3e7c7b8f5674b6bd8cdc4f\"" Dec 13 13:28:47.771221 containerd[1498]: time="2024-12-13T13:28:47.771004427Z" level=info msg="TearDown network for sandbox \"c419042170b5243872620eb2ea0ae166aa4d099a0b3e7c7b8f5674b6bd8cdc4f\" successfully" Dec 13 13:28:47.771221 containerd[1498]: time="2024-12-13T13:28:47.771048179Z" level=info msg="StopPodSandbox for \"c419042170b5243872620eb2ea0ae166aa4d099a0b3e7c7b8f5674b6bd8cdc4f\" returns successfully" Dec 13 13:28:47.771364 containerd[1498]: time="2024-12-13T13:28:47.771341081Z" level=info msg="StopPodSandbox for \"12d18c9c8caf0816c4f0e2be01b939ab0d19bd7c6043694b69e328287df2ae7e\"" Dec 13 13:28:47.771457 containerd[1498]: time="2024-12-13T13:28:47.771416072Z" level=info msg="TearDown network for sandbox 
\"12d18c9c8caf0816c4f0e2be01b939ab0d19bd7c6043694b69e328287df2ae7e\" successfully" Dec 13 13:28:47.771457 containerd[1498]: time="2024-12-13T13:28:47.771454304Z" level=info msg="StopPodSandbox for \"12d18c9c8caf0816c4f0e2be01b939ab0d19bd7c6043694b69e328287df2ae7e\" returns successfully" Dec 13 13:28:47.771825 containerd[1498]: time="2024-12-13T13:28:47.771801438Z" level=info msg="StopPodSandbox for \"7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf\"" Dec 13 13:28:47.771927 containerd[1498]: time="2024-12-13T13:28:47.771889925Z" level=info msg="TearDown network for sandbox \"7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf\" successfully" Dec 13 13:28:47.771955 containerd[1498]: time="2024-12-13T13:28:47.771925271Z" level=info msg="StopPodSandbox for \"7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf\" returns successfully" Dec 13 13:28:47.772261 containerd[1498]: time="2024-12-13T13:28:47.772227721Z" level=info msg="StopPodSandbox for \"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\"" Dec 13 13:28:47.772345 containerd[1498]: time="2024-12-13T13:28:47.772323531Z" level=info msg="TearDown network for sandbox \"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\" successfully" Dec 13 13:28:47.772345 containerd[1498]: time="2024-12-13T13:28:47.772341606Z" level=info msg="StopPodSandbox for \"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\" returns successfully" Dec 13 13:28:47.772879 containerd[1498]: time="2024-12-13T13:28:47.772846978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc76d6748-5xdq6,Uid:27f04941-3371-4139-bdc8-8fa4b3ff5199,Namespace:calico-apiserver,Attempt:5,}" Dec 13 13:28:47.774149 kubelet[2617]: I1213 13:28:47.773618 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9968b7ebe76d6d0fe39c5765a6af533ade26da0e24d2a38d4ff05eb1a792bcec" Dec 13 13:28:47.774470 containerd[1498]: 
time="2024-12-13T13:28:47.774234041Z" level=info msg="StopPodSandbox for \"9968b7ebe76d6d0fe39c5765a6af533ade26da0e24d2a38d4ff05eb1a792bcec\"" Dec 13 13:28:47.774723 containerd[1498]: time="2024-12-13T13:28:47.774691492Z" level=info msg="Ensure that sandbox 9968b7ebe76d6d0fe39c5765a6af533ade26da0e24d2a38d4ff05eb1a792bcec in task-service has been cleanup successfully" Dec 13 13:28:47.775756 systemd[1]: run-netns-cni\x2dce0acf28\x2df957\x2df01b\x2db268\x2defd940811360.mount: Deactivated successfully. Dec 13 13:28:47.777521 containerd[1498]: time="2024-12-13T13:28:47.777413429Z" level=info msg="TearDown network for sandbox \"9968b7ebe76d6d0fe39c5765a6af533ade26da0e24d2a38d4ff05eb1a792bcec\" successfully" Dec 13 13:28:47.777521 containerd[1498]: time="2024-12-13T13:28:47.777450058Z" level=info msg="StopPodSandbox for \"9968b7ebe76d6d0fe39c5765a6af533ade26da0e24d2a38d4ff05eb1a792bcec\" returns successfully" Dec 13 13:28:47.778398 kubelet[2617]: I1213 13:28:47.778310 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="467335aa5308b9bcee740ee8cc05bc1acca05f34e9eb3221e3973639b5124d80" Dec 13 13:28:47.779688 systemd[1]: run-netns-cni\x2d1409755a\x2d28a4\x2dcd7c\x2df0fa\x2df01190870164.mount: Deactivated successfully. 
Dec 13 13:28:47.780950 containerd[1498]: time="2024-12-13T13:28:47.780925575Z" level=info msg="StopPodSandbox for \"9a39e2b6726809bb08eaf739fd4c36fb35546b42e10eef2b487018652bd1a6ad\"" Dec 13 13:28:47.781157 containerd[1498]: time="2024-12-13T13:28:47.781113709Z" level=info msg="TearDown network for sandbox \"9a39e2b6726809bb08eaf739fd4c36fb35546b42e10eef2b487018652bd1a6ad\" successfully" Dec 13 13:28:47.781157 containerd[1498]: time="2024-12-13T13:28:47.781127214Z" level=info msg="StopPodSandbox for \"9a39e2b6726809bb08eaf739fd4c36fb35546b42e10eef2b487018652bd1a6ad\" returns successfully" Dec 13 13:28:47.781215 containerd[1498]: time="2024-12-13T13:28:47.781159937Z" level=info msg="StopPodSandbox for \"467335aa5308b9bcee740ee8cc05bc1acca05f34e9eb3221e3973639b5124d80\"" Dec 13 13:28:47.781364 containerd[1498]: time="2024-12-13T13:28:47.781338613Z" level=info msg="Ensure that sandbox 467335aa5308b9bcee740ee8cc05bc1acca05f34e9eb3221e3973639b5124d80 in task-service has been cleanup successfully" Dec 13 13:28:47.782355 containerd[1498]: time="2024-12-13T13:28:47.782290977Z" level=info msg="TearDown network for sandbox \"467335aa5308b9bcee740ee8cc05bc1acca05f34e9eb3221e3973639b5124d80\" successfully" Dec 13 13:28:47.782355 containerd[1498]: time="2024-12-13T13:28:47.782349156Z" level=info msg="StopPodSandbox for \"467335aa5308b9bcee740ee8cc05bc1acca05f34e9eb3221e3973639b5124d80\" returns successfully" Dec 13 13:28:47.782427 containerd[1498]: time="2024-12-13T13:28:47.782393931Z" level=info msg="StopPodSandbox for \"a21907390b05bc41353374ecb2a1e62c9f6ccf90819a76ff634efce9923d4ffa\"" Dec 13 13:28:47.784875 containerd[1498]: time="2024-12-13T13:28:47.782969184Z" level=info msg="StopPodSandbox for \"791ebbe1287fbb94c9015cf842176cf962d91a4c8cf3ed6c662a07ca2f0a4e08\"" Dec 13 13:28:47.784875 containerd[1498]: time="2024-12-13T13:28:47.783080424Z" level=info msg="TearDown network for sandbox \"791ebbe1287fbb94c9015cf842176cf962d91a4c8cf3ed6c662a07ca2f0a4e08\" successfully" Dec 
13 13:28:47.784875 containerd[1498]: time="2024-12-13T13:28:47.783092326Z" level=info msg="StopPodSandbox for \"791ebbe1287fbb94c9015cf842176cf962d91a4c8cf3ed6c662a07ca2f0a4e08\" returns successfully" Dec 13 13:28:47.784875 containerd[1498]: time="2024-12-13T13:28:47.783233302Z" level=info msg="TearDown network for sandbox \"a21907390b05bc41353374ecb2a1e62c9f6ccf90819a76ff634efce9923d4ffa\" successfully" Dec 13 13:28:47.784875 containerd[1498]: time="2024-12-13T13:28:47.783248300Z" level=info msg="StopPodSandbox for \"a21907390b05bc41353374ecb2a1e62c9f6ccf90819a76ff634efce9923d4ffa\" returns successfully" Dec 13 13:28:47.784875 containerd[1498]: time="2024-12-13T13:28:47.784732365Z" level=info msg="StopPodSandbox for \"e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015\"" Dec 13 13:28:47.784875 containerd[1498]: time="2024-12-13T13:28:47.784801125Z" level=info msg="StopPodSandbox for \"f62ed592425827b96a5818a057bd7da73a44a047d6025699007b16edc8dacd5e\"" Dec 13 13:28:47.784875 containerd[1498]: time="2024-12-13T13:28:47.784825371Z" level=info msg="TearDown network for sandbox \"e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015\" successfully" Dec 13 13:28:47.784875 containerd[1498]: time="2024-12-13T13:28:47.784856038Z" level=info msg="StopPodSandbox for \"e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015\" returns successfully" Dec 13 13:28:47.784231 systemd[1]: run-netns-cni\x2d982a7655\x2d422d\x2d4bdc\x2d74cf\x2d30b1399e7b31.mount: Deactivated successfully. 
Dec 13 13:28:47.785186 containerd[1498]: time="2024-12-13T13:28:47.784942011Z" level=info msg="TearDown network for sandbox \"f62ed592425827b96a5818a057bd7da73a44a047d6025699007b16edc8dacd5e\" successfully" Dec 13 13:28:47.785186 containerd[1498]: time="2024-12-13T13:28:47.784953753Z" level=info msg="StopPodSandbox for \"f62ed592425827b96a5818a057bd7da73a44a047d6025699007b16edc8dacd5e\" returns successfully" Dec 13 13:28:47.786081 containerd[1498]: time="2024-12-13T13:28:47.786031392Z" level=info msg="StopPodSandbox for \"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\"" Dec 13 13:28:47.786404 containerd[1498]: time="2024-12-13T13:28:47.786379909Z" level=info msg="StopPodSandbox for \"f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3\"" Dec 13 13:28:47.786471 containerd[1498]: time="2024-12-13T13:28:47.786457145Z" level=info msg="TearDown network for sandbox \"f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3\" successfully" Dec 13 13:28:47.786471 containerd[1498]: time="2024-12-13T13:28:47.786466672Z" level=info msg="StopPodSandbox for \"f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3\" returns successfully" Dec 13 13:28:47.786519 kubelet[2617]: I1213 13:28:47.786398 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71ea7f0038939e4b171e1a0e2012eaedcc767de193ce9d3dce493ec28eaae007" Dec 13 13:28:47.786822 containerd[1498]: time="2024-12-13T13:28:47.786802815Z" level=info msg="StopPodSandbox for \"71ea7f0038939e4b171e1a0e2012eaedcc767de193ce9d3dce493ec28eaae007\"" Dec 13 13:28:47.787130 containerd[1498]: time="2024-12-13T13:28:47.787005537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sqmg9,Uid:169b145c-9dd2-4ef7-8f30-2acc264f69a4,Namespace:calico-system,Attempt:4,}" Dec 13 13:28:47.787130 containerd[1498]: time="2024-12-13T13:28:47.787018672Z" level=info msg="Ensure that sandbox 
71ea7f0038939e4b171e1a0e2012eaedcc767de193ce9d3dce493ec28eaae007 in task-service has been cleanup successfully" Dec 13 13:28:47.789598 containerd[1498]: time="2024-12-13T13:28:47.787342282Z" level=info msg="TearDown network for sandbox \"71ea7f0038939e4b171e1a0e2012eaedcc767de193ce9d3dce493ec28eaae007\" successfully" Dec 13 13:28:47.789598 containerd[1498]: time="2024-12-13T13:28:47.787359033Z" level=info msg="StopPodSandbox for \"71ea7f0038939e4b171e1a0e2012eaedcc767de193ce9d3dce493ec28eaae007\" returns successfully" Dec 13 13:28:47.789598 containerd[1498]: time="2024-12-13T13:28:47.787614745Z" level=info msg="StopPodSandbox for \"05d9e43a43ac4813fa5ddf60dfed3b7cbfecba045f1fec94667f679ae3450482\"" Dec 13 13:28:47.789598 containerd[1498]: time="2024-12-13T13:28:47.787702531Z" level=info msg="TearDown network for sandbox \"05d9e43a43ac4813fa5ddf60dfed3b7cbfecba045f1fec94667f679ae3450482\" successfully" Dec 13 13:28:47.789598 containerd[1498]: time="2024-12-13T13:28:47.787711187Z" level=info msg="StopPodSandbox for \"05d9e43a43ac4813fa5ddf60dfed3b7cbfecba045f1fec94667f679ae3450482\" returns successfully" Dec 13 13:28:47.789598 containerd[1498]: time="2024-12-13T13:28:47.788003838Z" level=info msg="StopPodSandbox for \"03c600e6d0728e62b25f6ed67f75a8b294306c981d94ddbebfc34037a6b14765\"" Dec 13 13:28:47.789598 containerd[1498]: time="2024-12-13T13:28:47.788086694Z" level=info msg="TearDown network for sandbox \"03c600e6d0728e62b25f6ed67f75a8b294306c981d94ddbebfc34037a6b14765\" successfully" Dec 13 13:28:47.789598 containerd[1498]: time="2024-12-13T13:28:47.788095230Z" level=info msg="StopPodSandbox for \"03c600e6d0728e62b25f6ed67f75a8b294306c981d94ddbebfc34037a6b14765\" returns successfully" Dec 13 13:28:47.789598 containerd[1498]: time="2024-12-13T13:28:47.788346373Z" level=info msg="StopPodSandbox for \"9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e\"" Dec 13 13:28:47.789598 containerd[1498]: time="2024-12-13T13:28:47.788426744Z" level=info 
msg="TearDown network for sandbox \"9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e\" successfully" Dec 13 13:28:47.789598 containerd[1498]: time="2024-12-13T13:28:47.788435652Z" level=info msg="StopPodSandbox for \"9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e\" returns successfully" Dec 13 13:28:47.789598 containerd[1498]: time="2024-12-13T13:28:47.788702454Z" level=info msg="StopPodSandbox for \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\"" Dec 13 13:28:47.789598 containerd[1498]: time="2024-12-13T13:28:47.788772626Z" level=info msg="TearDown network for sandbox \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\" successfully" Dec 13 13:28:47.789598 containerd[1498]: time="2024-12-13T13:28:47.788781412Z" level=info msg="StopPodSandbox for \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\" returns successfully" Dec 13 13:28:47.789598 containerd[1498]: time="2024-12-13T13:28:47.789383827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc76d6748-ck574,Uid:4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e,Namespace:calico-apiserver,Attempt:5,}" Dec 13 13:28:47.790136 systemd[1]: run-netns-cni\x2d1b2d9c31\x2d858c\x2db6ed\x2d0650\x2d6dbe3f00f4b8.mount: Deactivated successfully. 
Dec 13 13:28:47.790246 kubelet[2617]: I1213 13:28:47.790155 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42b15368a3972974a13da9738e60266115d2d472ae80bab1d155ea036d1797f0" Dec 13 13:28:47.790934 containerd[1498]: time="2024-12-13T13:28:47.790570993Z" level=info msg="StopPodSandbox for \"42b15368a3972974a13da9738e60266115d2d472ae80bab1d155ea036d1797f0\"" Dec 13 13:28:47.790934 containerd[1498]: time="2024-12-13T13:28:47.790710847Z" level=info msg="Ensure that sandbox 42b15368a3972974a13da9738e60266115d2d472ae80bab1d155ea036d1797f0 in task-service has been cleanup successfully" Dec 13 13:28:47.791075 containerd[1498]: time="2024-12-13T13:28:47.791059112Z" level=info msg="TearDown network for sandbox \"42b15368a3972974a13da9738e60266115d2d472ae80bab1d155ea036d1797f0\" successfully" Dec 13 13:28:47.791139 containerd[1498]: time="2024-12-13T13:28:47.791118996Z" level=info msg="StopPodSandbox for \"42b15368a3972974a13da9738e60266115d2d472ae80bab1d155ea036d1797f0\" returns successfully" Dec 13 13:28:47.791495 containerd[1498]: time="2024-12-13T13:28:47.791460870Z" level=info msg="StopPodSandbox for \"3fbd59380b22699393b74ce895db9c2c919b88e686f822ee5c62aa6ae2c20db0\"" Dec 13 13:28:47.791652 containerd[1498]: time="2024-12-13T13:28:47.791627063Z" level=info msg="TearDown network for sandbox \"3fbd59380b22699393b74ce895db9c2c919b88e686f822ee5c62aa6ae2c20db0\" successfully" Dec 13 13:28:47.791652 containerd[1498]: time="2024-12-13T13:28:47.791645988Z" level=info msg="StopPodSandbox for \"3fbd59380b22699393b74ce895db9c2c919b88e686f822ee5c62aa6ae2c20db0\" returns successfully" Dec 13 13:28:47.791979 containerd[1498]: time="2024-12-13T13:28:47.791957666Z" level=info msg="StopPodSandbox for \"93e64fa12ec2325b5ac143026f4b24fd2ac61166c18a7f41736ba8deb59dee39\"" Dec 13 13:28:47.792046 containerd[1498]: time="2024-12-13T13:28:47.792032516Z" level=info msg="TearDown network for sandbox 
\"93e64fa12ec2325b5ac143026f4b24fd2ac61166c18a7f41736ba8deb59dee39\" successfully" Dec 13 13:28:47.792082 containerd[1498]: time="2024-12-13T13:28:47.792043807Z" level=info msg="StopPodSandbox for \"93e64fa12ec2325b5ac143026f4b24fd2ac61166c18a7f41736ba8deb59dee39\" returns successfully" Dec 13 13:28:47.792306 containerd[1498]: time="2024-12-13T13:28:47.792289631Z" level=info msg="StopPodSandbox for \"775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9\"" Dec 13 13:28:47.792390 containerd[1498]: time="2024-12-13T13:28:47.792361376Z" level=info msg="TearDown network for sandbox \"775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9\" successfully" Dec 13 13:28:47.792390 containerd[1498]: time="2024-12-13T13:28:47.792374190Z" level=info msg="StopPodSandbox for \"775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9\" returns successfully" Dec 13 13:28:47.792580 containerd[1498]: time="2024-12-13T13:28:47.792560561Z" level=info msg="StopPodSandbox for \"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\"" Dec 13 13:28:47.792651 containerd[1498]: time="2024-12-13T13:28:47.792635352Z" level=info msg="TearDown network for sandbox \"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\" successfully" Dec 13 13:28:47.792651 containerd[1498]: time="2024-12-13T13:28:47.792649559Z" level=info msg="StopPodSandbox for \"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\" returns successfully" Dec 13 13:28:47.792879 kubelet[2617]: I1213 13:28:47.792849 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b746836ca808f3188e0898d7546986cd7f27c26abaf84ca644ce6ca05b8f0f76" Dec 13 13:28:47.793194 kubelet[2617]: E1213 13:28:47.793167 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:47.793381 containerd[1498]: 
time="2024-12-13T13:28:47.793359797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-69bzb,Uid:a034a49a-558f-4b5e-a56a-f248911e85ff,Namespace:kube-system,Attempt:5,}" Dec 13 13:28:47.793520 containerd[1498]: time="2024-12-13T13:28:47.793484862Z" level=info msg="StopPodSandbox for \"b746836ca808f3188e0898d7546986cd7f27c26abaf84ca644ce6ca05b8f0f76\"" Dec 13 13:28:47.793707 containerd[1498]: time="2024-12-13T13:28:47.793653119Z" level=info msg="Ensure that sandbox b746836ca808f3188e0898d7546986cd7f27c26abaf84ca644ce6ca05b8f0f76 in task-service has been cleanup successfully" Dec 13 13:28:47.793910 containerd[1498]: time="2024-12-13T13:28:47.793890877Z" level=info msg="TearDown network for sandbox \"b746836ca808f3188e0898d7546986cd7f27c26abaf84ca644ce6ca05b8f0f76\" successfully" Dec 13 13:28:47.793910 containerd[1498]: time="2024-12-13T13:28:47.793907859Z" level=info msg="StopPodSandbox for \"b746836ca808f3188e0898d7546986cd7f27c26abaf84ca644ce6ca05b8f0f76\" returns successfully" Dec 13 13:28:47.794267 containerd[1498]: time="2024-12-13T13:28:47.794246507Z" level=info msg="StopPodSandbox for \"9cdc64baff2682d288b75e881389f1866d65efdaf8bfe5bce2ba8357ea190b3d\"" Dec 13 13:28:47.794354 containerd[1498]: time="2024-12-13T13:28:47.794337347Z" level=info msg="TearDown network for sandbox \"9cdc64baff2682d288b75e881389f1866d65efdaf8bfe5bce2ba8357ea190b3d\" successfully" Dec 13 13:28:47.794354 containerd[1498]: time="2024-12-13T13:28:47.794351786Z" level=info msg="StopPodSandbox for \"9cdc64baff2682d288b75e881389f1866d65efdaf8bfe5bce2ba8357ea190b3d\" returns successfully" Dec 13 13:28:47.794679 containerd[1498]: time="2024-12-13T13:28:47.794641271Z" level=info msg="StopPodSandbox for \"3c60343585f7563f8d92f9f028242cfed77327709ae5bc696f2b7011ba0df5c7\"" Dec 13 13:28:47.794755 containerd[1498]: time="2024-12-13T13:28:47.794735889Z" level=info msg="TearDown network for sandbox \"3c60343585f7563f8d92f9f028242cfed77327709ae5bc696f2b7011ba0df5c7\" 
successfully" Dec 13 13:28:47.794755 containerd[1498]: time="2024-12-13T13:28:47.794752089Z" level=info msg="StopPodSandbox for \"3c60343585f7563f8d92f9f028242cfed77327709ae5bc696f2b7011ba0df5c7\" returns successfully" Dec 13 13:28:47.795032 containerd[1498]: time="2024-12-13T13:28:47.794997602Z" level=info msg="StopPodSandbox for \"985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0\"" Dec 13 13:28:47.795114 containerd[1498]: time="2024-12-13T13:28:47.795073304Z" level=info msg="TearDown network for sandbox \"985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0\" successfully" Dec 13 13:28:47.795114 containerd[1498]: time="2024-12-13T13:28:47.795082932Z" level=info msg="StopPodSandbox for \"985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0\" returns successfully" Dec 13 13:28:47.795477 containerd[1498]: time="2024-12-13T13:28:47.795437229Z" level=info msg="StopPodSandbox for \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\"" Dec 13 13:28:47.795547 containerd[1498]: time="2024-12-13T13:28:47.795525195Z" level=info msg="TearDown network for sandbox \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\" successfully" Dec 13 13:28:47.795577 containerd[1498]: time="2024-12-13T13:28:47.795547157Z" level=info msg="StopPodSandbox for \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\" returns successfully" Dec 13 13:28:47.795715 kubelet[2617]: E1213 13:28:47.795695 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:47.796079 containerd[1498]: time="2024-12-13T13:28:47.796051477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hx7ds,Uid:cf27bd25-cb26-447c-8c84-23dc77a1d6bc,Namespace:kube-system,Attempt:5,}" Dec 13 13:28:48.088961 containerd[1498]: time="2024-12-13T13:28:48.088807283Z" level=info msg="TearDown network 
for sandbox \"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\" successfully" Dec 13 13:28:48.088961 containerd[1498]: time="2024-12-13T13:28:48.088868858Z" level=info msg="StopPodSandbox for \"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\" returns successfully" Dec 13 13:28:48.089606 containerd[1498]: time="2024-12-13T13:28:48.089584376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6579dc67df-9c4m8,Uid:ecc6ca41-b371-43b9-9b23-fe21358ee632,Namespace:calico-system,Attempt:5,}" Dec 13 13:28:48.739646 systemd[1]: run-netns-cni\x2d951cb435\x2d7ad1\x2d8bcb\x2d4e4c\x2d98fe35ce7a73.mount: Deactivated successfully. Dec 13 13:28:48.739747 systemd[1]: run-netns-cni\x2d4ff2a4e1\x2de8d9\x2db210\x2dc8ec\x2d16269b71c1be.mount: Deactivated successfully. Dec 13 13:28:48.870703 containerd[1498]: time="2024-12-13T13:28:48.870645673Z" level=error msg="Failed to destroy network for sandbox \"979f14feb29824a3dc5687342ee1dacdafda85e55174a83decc7774548a3e297\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.880333 containerd[1498]: time="2024-12-13T13:28:48.880174327Z" level=error msg="Failed to destroy network for sandbox \"a73e8ff97e814cbad424afeac37dd53e85ebd33779b3ef6d77187328cbc5a1fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.880765 containerd[1498]: time="2024-12-13T13:28:48.880735214Z" level=error msg="encountered an error cleaning up failed sandbox \"a73e8ff97e814cbad424afeac37dd53e85ebd33779b3ef6d77187328cbc5a1fa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.880891 containerd[1498]: time="2024-12-13T13:28:48.880869226Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hx7ds,Uid:cf27bd25-cb26-447c-8c84-23dc77a1d6bc,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"a73e8ff97e814cbad424afeac37dd53e85ebd33779b3ef6d77187328cbc5a1fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.882254 kubelet[2617]: E1213 13:28:48.881132 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a73e8ff97e814cbad424afeac37dd53e85ebd33779b3ef6d77187328cbc5a1fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.882254 kubelet[2617]: E1213 13:28:48.881196 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a73e8ff97e814cbad424afeac37dd53e85ebd33779b3ef6d77187328cbc5a1fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hx7ds" Dec 13 13:28:48.882254 kubelet[2617]: E1213 13:28:48.881216 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a73e8ff97e814cbad424afeac37dd53e85ebd33779b3ef6d77187328cbc5a1fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-6f6b679f8f-hx7ds" Dec 13 13:28:48.882667 kubelet[2617]: E1213 13:28:48.881261 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hx7ds_kube-system(cf27bd25-cb26-447c-8c84-23dc77a1d6bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hx7ds_kube-system(cf27bd25-cb26-447c-8c84-23dc77a1d6bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a73e8ff97e814cbad424afeac37dd53e85ebd33779b3ef6d77187328cbc5a1fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hx7ds" podUID="cf27bd25-cb26-447c-8c84-23dc77a1d6bc" Dec 13 13:28:48.885191 containerd[1498]: time="2024-12-13T13:28:48.885156840Z" level=error msg="encountered an error cleaning up failed sandbox \"979f14feb29824a3dc5687342ee1dacdafda85e55174a83decc7774548a3e297\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.885341 containerd[1498]: time="2024-12-13T13:28:48.885321060Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc76d6748-5xdq6,Uid:27f04941-3371-4139-bdc8-8fa4b3ff5199,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"979f14feb29824a3dc5687342ee1dacdafda85e55174a83decc7774548a3e297\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.885653 kubelet[2617]: E1213 13:28:48.885609 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"979f14feb29824a3dc5687342ee1dacdafda85e55174a83decc7774548a3e297\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.885697 kubelet[2617]: E1213 13:28:48.885677 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"979f14feb29824a3dc5687342ee1dacdafda85e55174a83decc7774548a3e297\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc76d6748-5xdq6" Dec 13 13:28:48.885725 kubelet[2617]: E1213 13:28:48.885701 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"979f14feb29824a3dc5687342ee1dacdafda85e55174a83decc7774548a3e297\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc76d6748-5xdq6" Dec 13 13:28:48.885775 kubelet[2617]: E1213 13:28:48.885742 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7dc76d6748-5xdq6_calico-apiserver(27f04941-3371-4139-bdc8-8fa4b3ff5199)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7dc76d6748-5xdq6_calico-apiserver(27f04941-3371-4139-bdc8-8fa4b3ff5199)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"979f14feb29824a3dc5687342ee1dacdafda85e55174a83decc7774548a3e297\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dc76d6748-5xdq6" podUID="27f04941-3371-4139-bdc8-8fa4b3ff5199" Dec 13 13:28:48.886009 containerd[1498]: time="2024-12-13T13:28:48.885989118Z" level=error msg="Failed to destroy network for sandbox \"67756df0bcc347628f21440aa3157766c5e7d10effbb143fd9c42f0f569c6369\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.887372 containerd[1498]: time="2024-12-13T13:28:48.887331646Z" level=error msg="Failed to destroy network for sandbox \"6dfdb6d98a11b9497821823674fc80fcf72e2171466363ba65b68caf7835a03a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.887917 containerd[1498]: time="2024-12-13T13:28:48.887605441Z" level=error msg="encountered an error cleaning up failed sandbox \"67756df0bcc347628f21440aa3157766c5e7d10effbb143fd9c42f0f569c6369\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.888083 containerd[1498]: time="2024-12-13T13:28:48.888063444Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-69bzb,Uid:a034a49a-558f-4b5e-a56a-f248911e85ff,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"67756df0bcc347628f21440aa3157766c5e7d10effbb143fd9c42f0f569c6369\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.888212 containerd[1498]: time="2024-12-13T13:28:48.887886290Z" level=error 
msg="encountered an error cleaning up failed sandbox \"6dfdb6d98a11b9497821823674fc80fcf72e2171466363ba65b68caf7835a03a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.888298 containerd[1498]: time="2024-12-13T13:28:48.888282116Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sqmg9,Uid:169b145c-9dd2-4ef7-8f30-2acc264f69a4,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"6dfdb6d98a11b9497821823674fc80fcf72e2171466363ba65b68caf7835a03a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.888609 kubelet[2617]: E1213 13:28:48.888459 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dfdb6d98a11b9497821823674fc80fcf72e2171466363ba65b68caf7835a03a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.888609 kubelet[2617]: E1213 13:28:48.888478 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67756df0bcc347628f21440aa3157766c5e7d10effbb143fd9c42f0f569c6369\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.888609 kubelet[2617]: E1213 13:28:48.888514 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6dfdb6d98a11b9497821823674fc80fcf72e2171466363ba65b68caf7835a03a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sqmg9" Dec 13 13:28:48.888609 kubelet[2617]: E1213 13:28:48.888528 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67756df0bcc347628f21440aa3157766c5e7d10effbb143fd9c42f0f569c6369\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-69bzb" Dec 13 13:28:48.888737 kubelet[2617]: E1213 13:28:48.888535 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dfdb6d98a11b9497821823674fc80fcf72e2171466363ba65b68caf7835a03a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sqmg9" Dec 13 13:28:48.888737 kubelet[2617]: E1213 13:28:48.888544 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67756df0bcc347628f21440aa3157766c5e7d10effbb143fd9c42f0f569c6369\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-69bzb" Dec 13 13:28:48.889434 kubelet[2617]: E1213 13:28:48.889395 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-69bzb_kube-system(a034a49a-558f-4b5e-a56a-f248911e85ff)\" 
with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-69bzb_kube-system(a034a49a-558f-4b5e-a56a-f248911e85ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67756df0bcc347628f21440aa3157766c5e7d10effbb143fd9c42f0f569c6369\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-69bzb" podUID="a034a49a-558f-4b5e-a56a-f248911e85ff" Dec 13 13:28:48.889545 kubelet[2617]: E1213 13:28:48.888580 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sqmg9_calico-system(169b145c-9dd2-4ef7-8f30-2acc264f69a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sqmg9_calico-system(169b145c-9dd2-4ef7-8f30-2acc264f69a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6dfdb6d98a11b9497821823674fc80fcf72e2171466363ba65b68caf7835a03a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sqmg9" podUID="169b145c-9dd2-4ef7-8f30-2acc264f69a4" Dec 13 13:28:48.892870 containerd[1498]: time="2024-12-13T13:28:48.892228076Z" level=error msg="Failed to destroy network for sandbox \"777ee526b77fefafa47a39c5811dfe12b6283dce01c49d02f5fd6ce3ccfa1d46\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.892870 containerd[1498]: time="2024-12-13T13:28:48.892768083Z" level=error msg="encountered an error cleaning up failed sandbox \"777ee526b77fefafa47a39c5811dfe12b6283dce01c49d02f5fd6ce3ccfa1d46\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.892959 containerd[1498]: time="2024-12-13T13:28:48.892882459Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6579dc67df-9c4m8,Uid:ecc6ca41-b371-43b9-9b23-fe21358ee632,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"777ee526b77fefafa47a39c5811dfe12b6283dce01c49d02f5fd6ce3ccfa1d46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.893085 kubelet[2617]: E1213 13:28:48.893057 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"777ee526b77fefafa47a39c5811dfe12b6283dce01c49d02f5fd6ce3ccfa1d46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.893128 kubelet[2617]: E1213 13:28:48.893092 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"777ee526b77fefafa47a39c5811dfe12b6283dce01c49d02f5fd6ce3ccfa1d46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6579dc67df-9c4m8" Dec 13 13:28:48.893128 kubelet[2617]: E1213 13:28:48.893108 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"777ee526b77fefafa47a39c5811dfe12b6283dce01c49d02f5fd6ce3ccfa1d46\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6579dc67df-9c4m8" Dec 13 13:28:48.893173 kubelet[2617]: E1213 13:28:48.893140 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6579dc67df-9c4m8_calico-system(ecc6ca41-b371-43b9-9b23-fe21358ee632)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6579dc67df-9c4m8_calico-system(ecc6ca41-b371-43b9-9b23-fe21358ee632)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"777ee526b77fefafa47a39c5811dfe12b6283dce01c49d02f5fd6ce3ccfa1d46\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6579dc67df-9c4m8" podUID="ecc6ca41-b371-43b9-9b23-fe21358ee632" Dec 13 13:28:48.901969 containerd[1498]: time="2024-12-13T13:28:48.901915849Z" level=error msg="Failed to destroy network for sandbox \"c9740f69062ad30d9f3bde79f342863e67f34dd8032dd7afce45cef62a2b5883\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.902417 containerd[1498]: time="2024-12-13T13:28:48.902320351Z" level=error msg="encountered an error cleaning up failed sandbox \"c9740f69062ad30d9f3bde79f342863e67f34dd8032dd7afce45cef62a2b5883\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.902417 containerd[1498]: time="2024-12-13T13:28:48.902372079Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc76d6748-ck574,Uid:4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"c9740f69062ad30d9f3bde79f342863e67f34dd8032dd7afce45cef62a2b5883\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.902598 kubelet[2617]: E1213 13:28:48.902547 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9740f69062ad30d9f3bde79f342863e67f34dd8032dd7afce45cef62a2b5883\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.902680 kubelet[2617]: E1213 13:28:48.902601 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9740f69062ad30d9f3bde79f342863e67f34dd8032dd7afce45cef62a2b5883\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc76d6748-ck574" Dec 13 13:28:48.902680 kubelet[2617]: E1213 13:28:48.902622 2617 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9740f69062ad30d9f3bde79f342863e67f34dd8032dd7afce45cef62a2b5883\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc76d6748-ck574" Dec 13 13:28:48.902680 kubelet[2617]: E1213 13:28:48.902655 2617 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7dc76d6748-ck574_calico-apiserver(4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7dc76d6748-ck574_calico-apiserver(4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c9740f69062ad30d9f3bde79f342863e67f34dd8032dd7afce45cef62a2b5883\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dc76d6748-ck574" podUID="4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e" Dec 13 13:28:48.987717 containerd[1498]: time="2024-12-13T13:28:48.987653426Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:48.988423 containerd[1498]: time="2024-12-13T13:28:48.988355840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 13:28:48.989404 containerd[1498]: time="2024-12-13T13:28:48.989363567Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:48.991657 containerd[1498]: time="2024-12-13T13:28:48.991548451Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:48.991968 containerd[1498]: time="2024-12-13T13:28:48.991915092Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag 
\"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.489415661s" Dec 13 13:28:48.991968 containerd[1498]: time="2024-12-13T13:28:48.991948144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 13:28:49.002686 containerd[1498]: time="2024-12-13T13:28:49.002635179Z" level=info msg="CreateContainer within sandbox \"11bafe74ba6b099545aac045c8d61bfb9b91027076143bcd494983fdb736a4d2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 13:28:49.021162 containerd[1498]: time="2024-12-13T13:28:49.021112255Z" level=info msg="CreateContainer within sandbox \"11bafe74ba6b099545aac045c8d61bfb9b91027076143bcd494983fdb736a4d2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3525532616f1a00de59a5007c978e76084ba6baaf7ce3978603d60ef0356dc06\"" Dec 13 13:28:49.021676 containerd[1498]: time="2024-12-13T13:28:49.021649647Z" level=info msg="StartContainer for \"3525532616f1a00de59a5007c978e76084ba6baaf7ce3978603d60ef0356dc06\"" Dec 13 13:28:49.098054 systemd[1]: Started cri-containerd-3525532616f1a00de59a5007c978e76084ba6baaf7ce3978603d60ef0356dc06.scope - libcontainer container 3525532616f1a00de59a5007c978e76084ba6baaf7ce3978603d60ef0356dc06. Dec 13 13:28:49.221324 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 13:28:49.221466 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 13 13:28:49.247243 containerd[1498]: time="2024-12-13T13:28:49.247106322Z" level=info msg="StartContainer for \"3525532616f1a00de59a5007c978e76084ba6baaf7ce3978603d60ef0356dc06\" returns successfully" Dec 13 13:28:49.741866 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a73e8ff97e814cbad424afeac37dd53e85ebd33779b3ef6d77187328cbc5a1fa-shm.mount: Deactivated successfully. Dec 13 13:28:49.742005 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c9740f69062ad30d9f3bde79f342863e67f34dd8032dd7afce45cef62a2b5883-shm.mount: Deactivated successfully. Dec 13 13:28:49.742108 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-67756df0bcc347628f21440aa3157766c5e7d10effbb143fd9c42f0f569c6369-shm.mount: Deactivated successfully. Dec 13 13:28:49.742213 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6dfdb6d98a11b9497821823674fc80fcf72e2171466363ba65b68caf7835a03a-shm.mount: Deactivated successfully. Dec 13 13:28:49.742322 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-777ee526b77fefafa47a39c5811dfe12b6283dce01c49d02f5fd6ce3ccfa1d46-shm.mount: Deactivated successfully. Dec 13 13:28:49.742421 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-979f14feb29824a3dc5687342ee1dacdafda85e55174a83decc7774548a3e297-shm.mount: Deactivated successfully. Dec 13 13:28:49.742516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2013081634.mount: Deactivated successfully. 
Dec 13 13:28:49.800804 kubelet[2617]: I1213 13:28:49.800778 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a73e8ff97e814cbad424afeac37dd53e85ebd33779b3ef6d77187328cbc5a1fa" Dec 13 13:28:49.801346 containerd[1498]: time="2024-12-13T13:28:49.801303101Z" level=info msg="StopPodSandbox for \"a73e8ff97e814cbad424afeac37dd53e85ebd33779b3ef6d77187328cbc5a1fa\"" Dec 13 13:28:49.801531 containerd[1498]: time="2024-12-13T13:28:49.801513356Z" level=info msg="Ensure that sandbox a73e8ff97e814cbad424afeac37dd53e85ebd33779b3ef6d77187328cbc5a1fa in task-service has been cleanup successfully" Dec 13 13:28:49.802194 containerd[1498]: time="2024-12-13T13:28:49.802144094Z" level=info msg="TearDown network for sandbox \"a73e8ff97e814cbad424afeac37dd53e85ebd33779b3ef6d77187328cbc5a1fa\" successfully" Dec 13 13:28:49.802474 containerd[1498]: time="2024-12-13T13:28:49.802453757Z" level=info msg="StopPodSandbox for \"a73e8ff97e814cbad424afeac37dd53e85ebd33779b3ef6d77187328cbc5a1fa\" returns successfully" Dec 13 13:28:49.804148 systemd[1]: run-netns-cni\x2d6fd375d7\x2d3574\x2d640d\x2d3391\x2d1b4eb67a5a27.mount: Deactivated successfully. 
Dec 13 13:28:49.809539 containerd[1498]: time="2024-12-13T13:28:49.809510833Z" level=info msg="StopPodSandbox for \"b746836ca808f3188e0898d7546986cd7f27c26abaf84ca644ce6ca05b8f0f76\"" Dec 13 13:28:49.809641 containerd[1498]: time="2024-12-13T13:28:49.809609399Z" level=info msg="TearDown network for sandbox \"b746836ca808f3188e0898d7546986cd7f27c26abaf84ca644ce6ca05b8f0f76\" successfully" Dec 13 13:28:49.809641 containerd[1498]: time="2024-12-13T13:28:49.809627082Z" level=info msg="StopPodSandbox for \"b746836ca808f3188e0898d7546986cd7f27c26abaf84ca644ce6ca05b8f0f76\" returns successfully" Dec 13 13:28:49.809899 containerd[1498]: time="2024-12-13T13:28:49.809878326Z" level=info msg="StopPodSandbox for \"9cdc64baff2682d288b75e881389f1866d65efdaf8bfe5bce2ba8357ea190b3d\"" Dec 13 13:28:49.809980 containerd[1498]: time="2024-12-13T13:28:49.809955090Z" level=info msg="TearDown network for sandbox \"9cdc64baff2682d288b75e881389f1866d65efdaf8bfe5bce2ba8357ea190b3d\" successfully" Dec 13 13:28:49.809980 containerd[1498]: time="2024-12-13T13:28:49.809970068Z" level=info msg="StopPodSandbox for \"9cdc64baff2682d288b75e881389f1866d65efdaf8bfe5bce2ba8357ea190b3d\" returns successfully" Dec 13 13:28:49.810210 containerd[1498]: time="2024-12-13T13:28:49.810186856Z" level=info msg="StopPodSandbox for \"3c60343585f7563f8d92f9f028242cfed77327709ae5bc696f2b7011ba0df5c7\"" Dec 13 13:28:49.810296 containerd[1498]: time="2024-12-13T13:28:49.810271266Z" level=info msg="TearDown network for sandbox \"3c60343585f7563f8d92f9f028242cfed77327709ae5bc696f2b7011ba0df5c7\" successfully" Dec 13 13:28:49.810296 containerd[1498]: time="2024-12-13T13:28:49.810285933Z" level=info msg="StopPodSandbox for \"3c60343585f7563f8d92f9f028242cfed77327709ae5bc696f2b7011ba0df5c7\" returns successfully" Dec 13 13:28:49.810358 kubelet[2617]: I1213 13:28:49.810282 2617 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="979f14feb29824a3dc5687342ee1dacdafda85e55174a83decc7774548a3e297" Dec 13 13:28:49.810743 containerd[1498]: time="2024-12-13T13:28:49.810724529Z" level=info msg="StopPodSandbox for \"985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0\"" Dec 13 13:28:49.810813 containerd[1498]: time="2024-12-13T13:28:49.810790563Z" level=info msg="TearDown network for sandbox \"985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0\" successfully" Dec 13 13:28:49.810813 containerd[1498]: time="2024-12-13T13:28:49.810802635Z" level=info msg="StopPodSandbox for \"985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0\" returns successfully" Dec 13 13:28:49.810886 containerd[1498]: time="2024-12-13T13:28:49.810795993Z" level=info msg="StopPodSandbox for \"979f14feb29824a3dc5687342ee1dacdafda85e55174a83decc7774548a3e297\"" Dec 13 13:28:49.811039 containerd[1498]: time="2024-12-13T13:28:49.811021929Z" level=info msg="Ensure that sandbox 979f14feb29824a3dc5687342ee1dacdafda85e55174a83decc7774548a3e297 in task-service has been cleanup successfully" Dec 13 13:28:49.811106 containerd[1498]: time="2024-12-13T13:28:49.811071922Z" level=info msg="StopPodSandbox for \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\"" Dec 13 13:28:49.811260 containerd[1498]: time="2024-12-13T13:28:49.811204031Z" level=info msg="TearDown network for sandbox \"979f14feb29824a3dc5687342ee1dacdafda85e55174a83decc7774548a3e297\" successfully" Dec 13 13:28:49.811260 containerd[1498]: time="2024-12-13T13:28:49.811228537Z" level=info msg="StopPodSandbox for \"979f14feb29824a3dc5687342ee1dacdafda85e55174a83decc7774548a3e297\" returns successfully" Dec 13 13:28:49.811552 containerd[1498]: time="2024-12-13T13:28:49.811531147Z" level=info msg="StopPodSandbox for \"e6c5d9e7b015f5a650b35d655ec1b0edfbb0fd031d19f5721292e08a6ccdde2b\"" Dec 13 13:28:49.811645 containerd[1498]: time="2024-12-13T13:28:49.811629132Z" level=info msg="TearDown network for sandbox 
\"e6c5d9e7b015f5a650b35d655ec1b0edfbb0fd031d19f5721292e08a6ccdde2b\" successfully" Dec 13 13:28:49.811645 containerd[1498]: time="2024-12-13T13:28:49.811641505Z" level=info msg="StopPodSandbox for \"e6c5d9e7b015f5a650b35d655ec1b0edfbb0fd031d19f5721292e08a6ccdde2b\" returns successfully" Dec 13 13:28:49.811876 containerd[1498]: time="2024-12-13T13:28:49.811853344Z" level=info msg="StopPodSandbox for \"c419042170b5243872620eb2ea0ae166aa4d099a0b3e7c7b8f5674b6bd8cdc4f\"" Dec 13 13:28:49.812876 containerd[1498]: time="2024-12-13T13:28:49.811924387Z" level=info msg="TearDown network for sandbox \"c419042170b5243872620eb2ea0ae166aa4d099a0b3e7c7b8f5674b6bd8cdc4f\" successfully" Dec 13 13:28:49.812876 containerd[1498]: time="2024-12-13T13:28:49.811938334Z" level=info msg="StopPodSandbox for \"c419042170b5243872620eb2ea0ae166aa4d099a0b3e7c7b8f5674b6bd8cdc4f\" returns successfully" Dec 13 13:28:49.812876 containerd[1498]: time="2024-12-13T13:28:49.812146927Z" level=info msg="StopPodSandbox for \"12d18c9c8caf0816c4f0e2be01b939ab0d19bd7c6043694b69e328287df2ae7e\"" Dec 13 13:28:49.812876 containerd[1498]: time="2024-12-13T13:28:49.812234882Z" level=info msg="TearDown network for sandbox \"12d18c9c8caf0816c4f0e2be01b939ab0d19bd7c6043694b69e328287df2ae7e\" successfully" Dec 13 13:28:49.812876 containerd[1498]: time="2024-12-13T13:28:49.812245291Z" level=info msg="StopPodSandbox for \"12d18c9c8caf0816c4f0e2be01b939ab0d19bd7c6043694b69e328287df2ae7e\" returns successfully" Dec 13 13:28:49.813301 systemd[1]: run-netns-cni\x2dada04426\x2de862\x2d9f66\x2d3b0c\x2dd5e9991e0adc.mount: Deactivated successfully. 
Dec 13 13:28:49.813434 containerd[1498]: time="2024-12-13T13:28:49.813321488Z" level=info msg="StopPodSandbox for \"7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf\"" Dec 13 13:28:49.813434 containerd[1498]: time="2024-12-13T13:28:49.813409433Z" level=info msg="TearDown network for sandbox \"7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf\" successfully" Dec 13 13:28:49.813434 containerd[1498]: time="2024-12-13T13:28:49.813419833Z" level=info msg="StopPodSandbox for \"7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf\" returns successfully" Dec 13 13:28:49.814038 containerd[1498]: time="2024-12-13T13:28:49.813987041Z" level=info msg="TearDown network for sandbox \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\" successfully" Dec 13 13:28:49.814038 containerd[1498]: time="2024-12-13T13:28:49.814007720Z" level=info msg="StopPodSandbox for \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\" returns successfully" Dec 13 13:28:49.814275 kubelet[2617]: E1213 13:28:49.814250 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:49.816801 containerd[1498]: time="2024-12-13T13:28:49.815485632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hx7ds,Uid:cf27bd25-cb26-447c-8c84-23dc77a1d6bc,Namespace:kube-system,Attempt:6,}" Dec 13 13:28:49.816801 containerd[1498]: time="2024-12-13T13:28:49.815820943Z" level=info msg="StopPodSandbox for \"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\"" Dec 13 13:28:49.816801 containerd[1498]: time="2024-12-13T13:28:49.815968771Z" level=info msg="TearDown network for sandbox \"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\" successfully" Dec 13 13:28:49.816801 containerd[1498]: time="2024-12-13T13:28:49.815979081Z" level=info msg="StopPodSandbox for 
\"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\" returns successfully" Dec 13 13:28:49.816801 containerd[1498]: time="2024-12-13T13:28:49.816446522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc76d6748-5xdq6,Uid:27f04941-3371-4139-bdc8-8fa4b3ff5199,Namespace:calico-apiserver,Attempt:6,}" Dec 13 13:28:49.817996 kubelet[2617]: I1213 13:28:49.817962 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9740f69062ad30d9f3bde79f342863e67f34dd8032dd7afce45cef62a2b5883" Dec 13 13:28:49.818426 containerd[1498]: time="2024-12-13T13:28:49.818401451Z" level=info msg="StopPodSandbox for \"c9740f69062ad30d9f3bde79f342863e67f34dd8032dd7afce45cef62a2b5883\"" Dec 13 13:28:49.820255 containerd[1498]: time="2024-12-13T13:28:49.818706145Z" level=info msg="Ensure that sandbox c9740f69062ad30d9f3bde79f342863e67f34dd8032dd7afce45cef62a2b5883 in task-service has been cleanup successfully" Dec 13 13:28:49.820594 containerd[1498]: time="2024-12-13T13:28:49.820477520Z" level=info msg="TearDown network for sandbox \"c9740f69062ad30d9f3bde79f342863e67f34dd8032dd7afce45cef62a2b5883\" successfully" Dec 13 13:28:49.820594 containerd[1498]: time="2024-12-13T13:28:49.820516203Z" level=info msg="StopPodSandbox for \"c9740f69062ad30d9f3bde79f342863e67f34dd8032dd7afce45cef62a2b5883\" returns successfully" Dec 13 13:28:49.821488 containerd[1498]: time="2024-12-13T13:28:49.821459028Z" level=info msg="StopPodSandbox for \"71ea7f0038939e4b171e1a0e2012eaedcc767de193ce9d3dce493ec28eaae007\"" Dec 13 13:28:49.821577 containerd[1498]: time="2024-12-13T13:28:49.821555630Z" level=info msg="TearDown network for sandbox \"71ea7f0038939e4b171e1a0e2012eaedcc767de193ce9d3dce493ec28eaae007\" successfully" Dec 13 13:28:49.821648 containerd[1498]: time="2024-12-13T13:28:49.821631253Z" level=info msg="StopPodSandbox for \"71ea7f0038939e4b171e1a0e2012eaedcc767de193ce9d3dce493ec28eaae007\" returns successfully" Dec 13 
13:28:49.822150 containerd[1498]: time="2024-12-13T13:28:49.822118720Z" level=info msg="StopPodSandbox for \"05d9e43a43ac4813fa5ddf60dfed3b7cbfecba045f1fec94667f679ae3450482\"" Dec 13 13:28:49.822237 containerd[1498]: time="2024-12-13T13:28:49.822205093Z" level=info msg="TearDown network for sandbox \"05d9e43a43ac4813fa5ddf60dfed3b7cbfecba045f1fec94667f679ae3450482\" successfully" Dec 13 13:28:49.822237 containerd[1498]: time="2024-12-13T13:28:49.822231452Z" level=info msg="StopPodSandbox for \"05d9e43a43ac4813fa5ddf60dfed3b7cbfecba045f1fec94667f679ae3450482\" returns successfully" Dec 13 13:28:49.823082 containerd[1498]: time="2024-12-13T13:28:49.823058960Z" level=info msg="StopPodSandbox for \"03c600e6d0728e62b25f6ed67f75a8b294306c981d94ddbebfc34037a6b14765\"" Dec 13 13:28:49.823154 containerd[1498]: time="2024-12-13T13:28:49.823140464Z" level=info msg="TearDown network for sandbox \"03c600e6d0728e62b25f6ed67f75a8b294306c981d94ddbebfc34037a6b14765\" successfully" Dec 13 13:28:49.823178 containerd[1498]: time="2024-12-13T13:28:49.823153809Z" level=info msg="StopPodSandbox for \"03c600e6d0728e62b25f6ed67f75a8b294306c981d94ddbebfc34037a6b14765\" returns successfully" Dec 13 13:28:49.823430 containerd[1498]: time="2024-12-13T13:28:49.823410903Z" level=info msg="StopPodSandbox for \"9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e\"" Dec 13 13:28:49.823500 containerd[1498]: time="2024-12-13T13:28:49.823483550Z" level=info msg="TearDown network for sandbox \"9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e\" successfully" Dec 13 13:28:49.823500 containerd[1498]: time="2024-12-13T13:28:49.823495613Z" level=info msg="StopPodSandbox for \"9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e\" returns successfully" Dec 13 13:28:49.823765 containerd[1498]: time="2024-12-13T13:28:49.823737889Z" level=info msg="StopPodSandbox for \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\"" Dec 13 13:28:49.823825 
containerd[1498]: time="2024-12-13T13:28:49.823811898Z" level=info msg="TearDown network for sandbox \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\" successfully" Dec 13 13:28:49.823825 containerd[1498]: time="2024-12-13T13:28:49.823824481Z" level=info msg="StopPodSandbox for \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\" returns successfully" Dec 13 13:28:49.824227 containerd[1498]: time="2024-12-13T13:28:49.824196913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc76d6748-ck574,Uid:4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e,Namespace:calico-apiserver,Attempt:6,}" Dec 13 13:28:49.824374 systemd[1]: run-netns-cni\x2de9c137f3\x2d87ce\x2d375d\x2db1e2\x2d748daa48e26a.mount: Deactivated successfully. Dec 13 13:28:49.825362 kubelet[2617]: I1213 13:28:49.825336 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="777ee526b77fefafa47a39c5811dfe12b6283dce01c49d02f5fd6ce3ccfa1d46" Dec 13 13:28:49.825922 containerd[1498]: time="2024-12-13T13:28:49.825811813Z" level=info msg="StopPodSandbox for \"777ee526b77fefafa47a39c5811dfe12b6283dce01c49d02f5fd6ce3ccfa1d46\"" Dec 13 13:28:49.826081 containerd[1498]: time="2024-12-13T13:28:49.826063697Z" level=info msg="Ensure that sandbox 777ee526b77fefafa47a39c5811dfe12b6283dce01c49d02f5fd6ce3ccfa1d46 in task-service has been cleanup successfully" Dec 13 13:28:49.828742 containerd[1498]: time="2024-12-13T13:28:49.826349726Z" level=info msg="TearDown network for sandbox \"777ee526b77fefafa47a39c5811dfe12b6283dce01c49d02f5fd6ce3ccfa1d46\" successfully" Dec 13 13:28:49.828742 containerd[1498]: time="2024-12-13T13:28:49.826403537Z" level=info msg="StopPodSandbox for \"777ee526b77fefafa47a39c5811dfe12b6283dce01c49d02f5fd6ce3ccfa1d46\" returns successfully" Dec 13 13:28:49.828742 containerd[1498]: time="2024-12-13T13:28:49.827116609Z" level=info msg="StopPodSandbox for \"9968b7ebe76d6d0fe39c5765a6af533ade26da0e24d2a38d4ff05eb1a792bcec\"" 
Dec 13 13:28:49.828742 containerd[1498]: time="2024-12-13T13:28:49.827189728Z" level=info msg="TearDown network for sandbox \"9968b7ebe76d6d0fe39c5765a6af533ade26da0e24d2a38d4ff05eb1a792bcec\" successfully" Dec 13 13:28:49.828742 containerd[1498]: time="2024-12-13T13:28:49.827198624Z" level=info msg="StopPodSandbox for \"9968b7ebe76d6d0fe39c5765a6af533ade26da0e24d2a38d4ff05eb1a792bcec\" returns successfully" Dec 13 13:28:49.828742 containerd[1498]: time="2024-12-13T13:28:49.827359838Z" level=info msg="StopPodSandbox for \"9a39e2b6726809bb08eaf739fd4c36fb35546b42e10eef2b487018652bd1a6ad\"" Dec 13 13:28:49.828742 containerd[1498]: time="2024-12-13T13:28:49.827849158Z" level=info msg="TearDown network for sandbox \"9a39e2b6726809bb08eaf739fd4c36fb35546b42e10eef2b487018652bd1a6ad\" successfully" Dec 13 13:28:49.828742 containerd[1498]: time="2024-12-13T13:28:49.827867113Z" level=info msg="StopPodSandbox for \"9a39e2b6726809bb08eaf739fd4c36fb35546b42e10eef2b487018652bd1a6ad\" returns successfully" Dec 13 13:28:49.828742 containerd[1498]: time="2024-12-13T13:28:49.828080755Z" level=info msg="StopPodSandbox for \"6dfdb6d98a11b9497821823674fc80fcf72e2171466363ba65b68caf7835a03a\"" Dec 13 13:28:49.828742 containerd[1498]: time="2024-12-13T13:28:49.828246767Z" level=info msg="Ensure that sandbox 6dfdb6d98a11b9497821823674fc80fcf72e2171466363ba65b68caf7835a03a in task-service has been cleanup successfully" Dec 13 13:28:49.828742 containerd[1498]: time="2024-12-13T13:28:49.828469748Z" level=info msg="StopPodSandbox for \"a21907390b05bc41353374ecb2a1e62c9f6ccf90819a76ff634efce9923d4ffa\"" Dec 13 13:28:49.828742 containerd[1498]: time="2024-12-13T13:28:49.828546562Z" level=info msg="TearDown network for sandbox \"a21907390b05bc41353374ecb2a1e62c9f6ccf90819a76ff634efce9923d4ffa\" successfully" Dec 13 13:28:49.828742 containerd[1498]: time="2024-12-13T13:28:49.828558274Z" level=info msg="StopPodSandbox for \"a21907390b05bc41353374ecb2a1e62c9f6ccf90819a76ff634efce9923d4ffa\" 
returns successfully" Dec 13 13:28:49.829303 kubelet[2617]: I1213 13:28:49.827543 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dfdb6d98a11b9497821823674fc80fcf72e2171466363ba65b68caf7835a03a" Dec 13 13:28:49.829341 containerd[1498]: time="2024-12-13T13:28:49.828889768Z" level=info msg="TearDown network for sandbox \"6dfdb6d98a11b9497821823674fc80fcf72e2171466363ba65b68caf7835a03a\" successfully" Dec 13 13:28:49.829341 containerd[1498]: time="2024-12-13T13:28:49.828904827Z" level=info msg="StopPodSandbox for \"6dfdb6d98a11b9497821823674fc80fcf72e2171466363ba65b68caf7835a03a\" returns successfully" Dec 13 13:28:49.829341 containerd[1498]: time="2024-12-13T13:28:49.828955482Z" level=info msg="StopPodSandbox for \"e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015\"" Dec 13 13:28:49.829341 containerd[1498]: time="2024-12-13T13:28:49.829104672Z" level=info msg="TearDown network for sandbox \"e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015\" successfully" Dec 13 13:28:49.829341 containerd[1498]: time="2024-12-13T13:28:49.829115322Z" level=info msg="StopPodSandbox for \"e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015\" returns successfully" Dec 13 13:28:49.829341 containerd[1498]: time="2024-12-13T13:28:49.829294820Z" level=info msg="StopPodSandbox for \"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\"" Dec 13 13:28:49.829509 containerd[1498]: time="2024-12-13T13:28:49.829375222Z" level=info msg="TearDown network for sandbox \"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\" successfully" Dec 13 13:28:49.829509 containerd[1498]: time="2024-12-13T13:28:49.829386513Z" level=info msg="StopPodSandbox for \"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\" returns successfully" Dec 13 13:28:49.829509 containerd[1498]: time="2024-12-13T13:28:49.829430015Z" level=info msg="StopPodSandbox for 
\"467335aa5308b9bcee740ee8cc05bc1acca05f34e9eb3221e3973639b5124d80\"" Dec 13 13:28:49.829509 containerd[1498]: time="2024-12-13T13:28:49.829500478Z" level=info msg="TearDown network for sandbox \"467335aa5308b9bcee740ee8cc05bc1acca05f34e9eb3221e3973639b5124d80\" successfully" Dec 13 13:28:49.829349 systemd[1]: run-netns-cni\x2d9be1c354\x2d5aa6\x2d1f4d\x2d4e1b\x2da350d65cd016.mount: Deactivated successfully. Dec 13 13:28:49.829765 containerd[1498]: time="2024-12-13T13:28:49.829510346Z" level=info msg="StopPodSandbox for \"467335aa5308b9bcee740ee8cc05bc1acca05f34e9eb3221e3973639b5124d80\" returns successfully" Dec 13 13:28:49.829939 containerd[1498]: time="2024-12-13T13:28:49.829918866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6579dc67df-9c4m8,Uid:ecc6ca41-b371-43b9-9b23-fe21358ee632,Namespace:calico-system,Attempt:6,}" Dec 13 13:28:49.830015 containerd[1498]: time="2024-12-13T13:28:49.829998836Z" level=info msg="StopPodSandbox for \"791ebbe1287fbb94c9015cf842176cf962d91a4c8cf3ed6c662a07ca2f0a4e08\"" Dec 13 13:28:49.830097 containerd[1498]: time="2024-12-13T13:28:49.830069720Z" level=info msg="TearDown network for sandbox \"791ebbe1287fbb94c9015cf842176cf962d91a4c8cf3ed6c662a07ca2f0a4e08\" successfully" Dec 13 13:28:49.830097 containerd[1498]: time="2024-12-13T13:28:49.830082714Z" level=info msg="StopPodSandbox for \"791ebbe1287fbb94c9015cf842176cf962d91a4c8cf3ed6c662a07ca2f0a4e08\" returns successfully" Dec 13 13:28:49.830785 containerd[1498]: time="2024-12-13T13:28:49.830742276Z" level=info msg="StopPodSandbox for \"f62ed592425827b96a5818a057bd7da73a44a047d6025699007b16edc8dacd5e\"" Dec 13 13:28:49.830867 containerd[1498]: time="2024-12-13T13:28:49.830824290Z" level=info msg="TearDown network for sandbox \"f62ed592425827b96a5818a057bd7da73a44a047d6025699007b16edc8dacd5e\" successfully" Dec 13 13:28:49.830867 containerd[1498]: time="2024-12-13T13:28:49.830856651Z" level=info msg="StopPodSandbox for 
\"f62ed592425827b96a5818a057bd7da73a44a047d6025699007b16edc8dacd5e\" returns successfully" Dec 13 13:28:49.831144 containerd[1498]: time="2024-12-13T13:28:49.831125246Z" level=info msg="StopPodSandbox for \"f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3\"" Dec 13 13:28:49.831250 containerd[1498]: time="2024-12-13T13:28:49.831232519Z" level=info msg="TearDown network for sandbox \"f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3\" successfully" Dec 13 13:28:49.831250 containerd[1498]: time="2024-12-13T13:28:49.831245964Z" level=info msg="StopPodSandbox for \"f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3\" returns successfully" Dec 13 13:28:49.831728 containerd[1498]: time="2024-12-13T13:28:49.831697724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sqmg9,Uid:169b145c-9dd2-4ef7-8f30-2acc264f69a4,Namespace:calico-system,Attempt:5,}" Dec 13 13:28:49.832470 kubelet[2617]: E1213 13:28:49.832362 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:49.835139 kubelet[2617]: I1213 13:28:49.835112 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67756df0bcc347628f21440aa3157766c5e7d10effbb143fd9c42f0f569c6369" Dec 13 13:28:49.835559 containerd[1498]: time="2024-12-13T13:28:49.835515122Z" level=info msg="StopPodSandbox for \"67756df0bcc347628f21440aa3157766c5e7d10effbb143fd9c42f0f569c6369\"" Dec 13 13:28:49.835800 containerd[1498]: time="2024-12-13T13:28:49.835778958Z" level=info msg="Ensure that sandbox 67756df0bcc347628f21440aa3157766c5e7d10effbb143fd9c42f0f569c6369 in task-service has been cleanup successfully" Dec 13 13:28:49.835967 containerd[1498]: time="2024-12-13T13:28:49.835950481Z" level=info msg="TearDown network for sandbox \"67756df0bcc347628f21440aa3157766c5e7d10effbb143fd9c42f0f569c6369\" successfully" Dec 
13 13:28:49.835994 containerd[1498]: time="2024-12-13T13:28:49.835964988Z" level=info msg="StopPodSandbox for \"67756df0bcc347628f21440aa3157766c5e7d10effbb143fd9c42f0f569c6369\" returns successfully" Dec 13 13:28:49.836354 containerd[1498]: time="2024-12-13T13:28:49.836324646Z" level=info msg="StopPodSandbox for \"42b15368a3972974a13da9738e60266115d2d472ae80bab1d155ea036d1797f0\"" Dec 13 13:28:49.836491 containerd[1498]: time="2024-12-13T13:28:49.836450602Z" level=info msg="TearDown network for sandbox \"42b15368a3972974a13da9738e60266115d2d472ae80bab1d155ea036d1797f0\" successfully" Dec 13 13:28:49.836491 containerd[1498]: time="2024-12-13T13:28:49.836476692Z" level=info msg="StopPodSandbox for \"42b15368a3972974a13da9738e60266115d2d472ae80bab1d155ea036d1797f0\" returns successfully" Dec 13 13:28:49.836735 containerd[1498]: time="2024-12-13T13:28:49.836711524Z" level=info msg="StopPodSandbox for \"3fbd59380b22699393b74ce895db9c2c919b88e686f822ee5c62aa6ae2c20db0\"" Dec 13 13:28:49.836798 containerd[1498]: time="2024-12-13T13:28:49.836785193Z" level=info msg="TearDown network for sandbox \"3fbd59380b22699393b74ce895db9c2c919b88e686f822ee5c62aa6ae2c20db0\" successfully" Dec 13 13:28:49.837178 containerd[1498]: time="2024-12-13T13:28:49.836796775Z" level=info msg="StopPodSandbox for \"3fbd59380b22699393b74ce895db9c2c919b88e686f822ee5c62aa6ae2c20db0\" returns successfully" Dec 13 13:28:49.837178 containerd[1498]: time="2024-12-13T13:28:49.836994436Z" level=info msg="StopPodSandbox for \"93e64fa12ec2325b5ac143026f4b24fd2ac61166c18a7f41736ba8deb59dee39\"" Dec 13 13:28:49.837178 containerd[1498]: time="2024-12-13T13:28:49.837061122Z" level=info msg="TearDown network for sandbox \"93e64fa12ec2325b5ac143026f4b24fd2ac61166c18a7f41736ba8deb59dee39\" successfully" Dec 13 13:28:49.837178 containerd[1498]: time="2024-12-13T13:28:49.837070360Z" level=info msg="StopPodSandbox for \"93e64fa12ec2325b5ac143026f4b24fd2ac61166c18a7f41736ba8deb59dee39\" returns successfully" Dec 13 
13:28:49.837655 containerd[1498]: time="2024-12-13T13:28:49.837276648Z" level=info msg="StopPodSandbox for \"775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9\"" Dec 13 13:28:49.837655 containerd[1498]: time="2024-12-13T13:28:49.837352561Z" level=info msg="TearDown network for sandbox \"775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9\" successfully" Dec 13 13:28:49.837655 containerd[1498]: time="2024-12-13T13:28:49.837363972Z" level=info msg="StopPodSandbox for \"775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9\" returns successfully" Dec 13 13:28:49.837655 containerd[1498]: time="2024-12-13T13:28:49.837612060Z" level=info msg="StopPodSandbox for \"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\"" Dec 13 13:28:49.837755 containerd[1498]: time="2024-12-13T13:28:49.837703682Z" level=info msg="TearDown network for sandbox \"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\" successfully" Dec 13 13:28:49.837755 containerd[1498]: time="2024-12-13T13:28:49.837716666Z" level=info msg="StopPodSandbox for \"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\" returns successfully" Dec 13 13:28:49.838105 kubelet[2617]: E1213 13:28:49.838083 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:49.838549 containerd[1498]: time="2024-12-13T13:28:49.838526321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-69bzb,Uid:a034a49a-558f-4b5e-a56a-f248911e85ff,Namespace:kube-system,Attempt:6,}" Dec 13 13:28:49.845463 kubelet[2617]: I1213 13:28:49.845367 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ddgwb" podStartSLOduration=1.642210725 podStartE2EDuration="16.845327815s" podCreationTimestamp="2024-12-13 13:28:33 +0000 UTC" firstStartedPulling="2024-12-13 
13:28:33.790413053 +0000 UTC m=+12.656525774" lastFinishedPulling="2024-12-13 13:28:48.993530143 +0000 UTC m=+27.859642864" observedRunningTime="2024-12-13 13:28:49.845187281 +0000 UTC m=+28.711300002" watchObservedRunningTime="2024-12-13 13:28:49.845327815 +0000 UTC m=+28.711440536" Dec 13 13:28:50.696864 systemd-networkd[1424]: caliccb0ef97f57: Link UP Dec 13 13:28:50.698558 systemd-networkd[1424]: caliccb0ef97f57: Gained carrier Dec 13 13:28:50.743946 systemd[1]: run-netns-cni\x2dc43bac6a\x2dc950\x2d0ba6\x2d39e6\x2dceaeb8e7e153.mount: Deactivated successfully. Dec 13 13:28:50.744051 systemd[1]: run-netns-cni\x2d326dadaf\x2dc610\x2d68f7\x2dc86f\x2dad041b8af27b.mount: Deactivated successfully. Dec 13 13:28:50.836672 kubelet[2617]: I1213 13:28:50.836640 2617 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 13:28:50.837269 kubelet[2617]: E1213 13:28:50.837010 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:50.912965 containerd[1498]: 2024-12-13 13:28:50.023 [INFO][4767] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:28:50.912965 containerd[1498]: 2024-12-13 13:28:50.050 [INFO][4767] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7dc76d6748--ck574-eth0 calico-apiserver-7dc76d6748- calico-apiserver 4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e 685 0 2024-12-13 13:28:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7dc76d6748 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7dc76d6748-ck574 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliccb0ef97f57 [] 
[]}} ContainerID="c885073466094109820790a6dd2e7d3ad0beb4ae9762927294ef86dfb39351e5" Namespace="calico-apiserver" Pod="calico-apiserver-7dc76d6748-ck574" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc76d6748--ck574-" Dec 13 13:28:50.912965 containerd[1498]: 2024-12-13 13:28:50.052 [INFO][4767] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c885073466094109820790a6dd2e7d3ad0beb4ae9762927294ef86dfb39351e5" Namespace="calico-apiserver" Pod="calico-apiserver-7dc76d6748-ck574" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc76d6748--ck574-eth0" Dec 13 13:28:50.912965 containerd[1498]: 2024-12-13 13:28:50.390 [INFO][4846] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c885073466094109820790a6dd2e7d3ad0beb4ae9762927294ef86dfb39351e5" HandleID="k8s-pod-network.c885073466094109820790a6dd2e7d3ad0beb4ae9762927294ef86dfb39351e5" Workload="localhost-k8s-calico--apiserver--7dc76d6748--ck574-eth0" Dec 13 13:28:50.912965 containerd[1498]: 2024-12-13 13:28:50.419 [INFO][4846] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c885073466094109820790a6dd2e7d3ad0beb4ae9762927294ef86dfb39351e5" HandleID="k8s-pod-network.c885073466094109820790a6dd2e7d3ad0beb4ae9762927294ef86dfb39351e5" Workload="localhost-k8s-calico--apiserver--7dc76d6748--ck574-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038b8a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7dc76d6748-ck574", "timestamp":"2024-12-13 13:28:50.390974237 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:28:50.912965 containerd[1498]: 2024-12-13 13:28:50.419 [INFO][4846] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 13:28:50.912965 containerd[1498]: 2024-12-13 13:28:50.419 [INFO][4846] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 13:28:50.912965 containerd[1498]: 2024-12-13 13:28:50.419 [INFO][4846] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 13:28:50.912965 containerd[1498]: 2024-12-13 13:28:50.422 [INFO][4846] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c885073466094109820790a6dd2e7d3ad0beb4ae9762927294ef86dfb39351e5" host="localhost" Dec 13 13:28:50.912965 containerd[1498]: 2024-12-13 13:28:50.426 [INFO][4846] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 13:28:50.912965 containerd[1498]: 2024-12-13 13:28:50.476 [INFO][4846] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 13:28:50.912965 containerd[1498]: 2024-12-13 13:28:50.532 [INFO][4846] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 13:28:50.912965 containerd[1498]: 2024-12-13 13:28:50.534 [INFO][4846] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 13:28:50.912965 containerd[1498]: 2024-12-13 13:28:50.534 [INFO][4846] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c885073466094109820790a6dd2e7d3ad0beb4ae9762927294ef86dfb39351e5" host="localhost" Dec 13 13:28:50.912965 containerd[1498]: 2024-12-13 13:28:50.535 [INFO][4846] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c885073466094109820790a6dd2e7d3ad0beb4ae9762927294ef86dfb39351e5 Dec 13 13:28:50.912965 containerd[1498]: 2024-12-13 13:28:50.614 [INFO][4846] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c885073466094109820790a6dd2e7d3ad0beb4ae9762927294ef86dfb39351e5" host="localhost" Dec 13 13:28:50.912965 containerd[1498]: 2024-12-13 13:28:50.681 [INFO][4846] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.c885073466094109820790a6dd2e7d3ad0beb4ae9762927294ef86dfb39351e5" host="localhost" Dec 13 13:28:50.912965 containerd[1498]: 2024-12-13 13:28:50.681 [INFO][4846] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.c885073466094109820790a6dd2e7d3ad0beb4ae9762927294ef86dfb39351e5" host="localhost" Dec 13 13:28:50.912965 containerd[1498]: 2024-12-13 13:28:50.681 [INFO][4846] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:28:50.912965 containerd[1498]: 2024-12-13 13:28:50.681 [INFO][4846] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c885073466094109820790a6dd2e7d3ad0beb4ae9762927294ef86dfb39351e5" HandleID="k8s-pod-network.c885073466094109820790a6dd2e7d3ad0beb4ae9762927294ef86dfb39351e5" Workload="localhost-k8s-calico--apiserver--7dc76d6748--ck574-eth0" Dec 13 13:28:50.914071 containerd[1498]: 2024-12-13 13:28:50.686 [INFO][4767] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c885073466094109820790a6dd2e7d3ad0beb4ae9762927294ef86dfb39351e5" Namespace="calico-apiserver" Pod="calico-apiserver-7dc76d6748-ck574" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc76d6748--ck574-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7dc76d6748--ck574-eth0", GenerateName:"calico-apiserver-7dc76d6748-", Namespace:"calico-apiserver", SelfLink:"", UID:"4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e", ResourceVersion:"685", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"7dc76d6748", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7dc76d6748-ck574", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliccb0ef97f57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:28:50.914071 containerd[1498]: 2024-12-13 13:28:50.686 [INFO][4767] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c885073466094109820790a6dd2e7d3ad0beb4ae9762927294ef86dfb39351e5" Namespace="calico-apiserver" Pod="calico-apiserver-7dc76d6748-ck574" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc76d6748--ck574-eth0" Dec 13 13:28:50.914071 containerd[1498]: 2024-12-13 13:28:50.686 [INFO][4767] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliccb0ef97f57 ContainerID="c885073466094109820790a6dd2e7d3ad0beb4ae9762927294ef86dfb39351e5" Namespace="calico-apiserver" Pod="calico-apiserver-7dc76d6748-ck574" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc76d6748--ck574-eth0" Dec 13 13:28:50.914071 containerd[1498]: 2024-12-13 13:28:50.698 [INFO][4767] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c885073466094109820790a6dd2e7d3ad0beb4ae9762927294ef86dfb39351e5" Namespace="calico-apiserver" Pod="calico-apiserver-7dc76d6748-ck574" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc76d6748--ck574-eth0" Dec 13 13:28:50.914071 containerd[1498]: 2024-12-13 
13:28:50.699 [INFO][4767] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c885073466094109820790a6dd2e7d3ad0beb4ae9762927294ef86dfb39351e5" Namespace="calico-apiserver" Pod="calico-apiserver-7dc76d6748-ck574" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc76d6748--ck574-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7dc76d6748--ck574-eth0", GenerateName:"calico-apiserver-7dc76d6748-", Namespace:"calico-apiserver", SelfLink:"", UID:"4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e", ResourceVersion:"685", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dc76d6748", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c885073466094109820790a6dd2e7d3ad0beb4ae9762927294ef86dfb39351e5", Pod:"calico-apiserver-7dc76d6748-ck574", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliccb0ef97f57", MAC:"0e:40:85:23:52:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:28:50.914071 containerd[1498]: 2024-12-13 13:28:50.910 [INFO][4767] cni-plugin/k8s.go 500: Wrote updated endpoint 
to datastore ContainerID="c885073466094109820790a6dd2e7d3ad0beb4ae9762927294ef86dfb39351e5" Namespace="calico-apiserver" Pod="calico-apiserver-7dc76d6748-ck574" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc76d6748--ck574-eth0" Dec 13 13:28:50.944978 systemd-networkd[1424]: cali72c738b0591: Link UP Dec 13 13:28:50.945694 systemd-networkd[1424]: cali72c738b0591: Gained carrier Dec 13 13:28:51.004614 containerd[1498]: 2024-12-13 13:28:49.977 [INFO][4744] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:28:51.004614 containerd[1498]: 2024-12-13 13:28:50.012 [INFO][4744] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--hx7ds-eth0 coredns-6f6b679f8f- kube-system cf27bd25-cb26-447c-8c84-23dc77a1d6bc 681 0 2024-12-13 13:28:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-hx7ds eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali72c738b0591 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6e7d5c0df303d359c385f855290f5a6232ca5823633aa58458d7aa7063cf25df" Namespace="kube-system" Pod="coredns-6f6b679f8f-hx7ds" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hx7ds-" Dec 13 13:28:51.004614 containerd[1498]: 2024-12-13 13:28:50.012 [INFO][4744] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6e7d5c0df303d359c385f855290f5a6232ca5823633aa58458d7aa7063cf25df" Namespace="kube-system" Pod="coredns-6f6b679f8f-hx7ds" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hx7ds-eth0" Dec 13 13:28:51.004614 containerd[1498]: 2024-12-13 13:28:50.396 [INFO][4831] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6e7d5c0df303d359c385f855290f5a6232ca5823633aa58458d7aa7063cf25df" 
HandleID="k8s-pod-network.6e7d5c0df303d359c385f855290f5a6232ca5823633aa58458d7aa7063cf25df" Workload="localhost-k8s-coredns--6f6b679f8f--hx7ds-eth0" Dec 13 13:28:51.004614 containerd[1498]: 2024-12-13 13:28:50.419 [INFO][4831] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6e7d5c0df303d359c385f855290f5a6232ca5823633aa58458d7aa7063cf25df" HandleID="k8s-pod-network.6e7d5c0df303d359c385f855290f5a6232ca5823633aa58458d7aa7063cf25df" Workload="localhost-k8s-coredns--6f6b679f8f--hx7ds-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000374650), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-hx7ds", "timestamp":"2024-12-13 13:28:50.396540104 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:28:51.004614 containerd[1498]: 2024-12-13 13:28:50.419 [INFO][4831] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:28:51.004614 containerd[1498]: 2024-12-13 13:28:50.681 [INFO][4831] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 13:28:51.004614 containerd[1498]: 2024-12-13 13:28:50.681 [INFO][4831] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 13:28:51.004614 containerd[1498]: 2024-12-13 13:28:50.684 [INFO][4831] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6e7d5c0df303d359c385f855290f5a6232ca5823633aa58458d7aa7063cf25df" host="localhost" Dec 13 13:28:51.004614 containerd[1498]: 2024-12-13 13:28:50.690 [INFO][4831] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 13:28:51.004614 containerd[1498]: 2024-12-13 13:28:50.694 [INFO][4831] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 13:28:51.004614 containerd[1498]: 2024-12-13 13:28:50.697 [INFO][4831] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 13:28:51.004614 containerd[1498]: 2024-12-13 13:28:50.699 [INFO][4831] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 13:28:51.004614 containerd[1498]: 2024-12-13 13:28:50.699 [INFO][4831] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6e7d5c0df303d359c385f855290f5a6232ca5823633aa58458d7aa7063cf25df" host="localhost" Dec 13 13:28:51.004614 containerd[1498]: 2024-12-13 13:28:50.702 [INFO][4831] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6e7d5c0df303d359c385f855290f5a6232ca5823633aa58458d7aa7063cf25df Dec 13 13:28:51.004614 containerd[1498]: 2024-12-13 13:28:50.751 [INFO][4831] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6e7d5c0df303d359c385f855290f5a6232ca5823633aa58458d7aa7063cf25df" host="localhost" Dec 13 13:28:51.004614 containerd[1498]: 2024-12-13 13:28:50.937 [INFO][4831] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.6e7d5c0df303d359c385f855290f5a6232ca5823633aa58458d7aa7063cf25df" host="localhost" Dec 13 13:28:51.004614 containerd[1498]: 2024-12-13 13:28:50.938 [INFO][4831] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.6e7d5c0df303d359c385f855290f5a6232ca5823633aa58458d7aa7063cf25df" host="localhost" Dec 13 13:28:51.004614 containerd[1498]: 2024-12-13 13:28:50.938 [INFO][4831] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:28:51.004614 containerd[1498]: 2024-12-13 13:28:50.938 [INFO][4831] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="6e7d5c0df303d359c385f855290f5a6232ca5823633aa58458d7aa7063cf25df" HandleID="k8s-pod-network.6e7d5c0df303d359c385f855290f5a6232ca5823633aa58458d7aa7063cf25df" Workload="localhost-k8s-coredns--6f6b679f8f--hx7ds-eth0" Dec 13 13:28:51.005294 containerd[1498]: 2024-12-13 13:28:50.942 [INFO][4744] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6e7d5c0df303d359c385f855290f5a6232ca5823633aa58458d7aa7063cf25df" Namespace="kube-system" Pod="coredns-6f6b679f8f-hx7ds" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hx7ds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hx7ds-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"cf27bd25-cb26-447c-8c84-23dc77a1d6bc", ResourceVersion:"681", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 28, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-hx7ds", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali72c738b0591", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:28:51.005294 containerd[1498]: 2024-12-13 13:28:50.942 [INFO][4744] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="6e7d5c0df303d359c385f855290f5a6232ca5823633aa58458d7aa7063cf25df" Namespace="kube-system" Pod="coredns-6f6b679f8f-hx7ds" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hx7ds-eth0" Dec 13 13:28:51.005294 containerd[1498]: 2024-12-13 13:28:50.942 [INFO][4744] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali72c738b0591 ContainerID="6e7d5c0df303d359c385f855290f5a6232ca5823633aa58458d7aa7063cf25df" Namespace="kube-system" Pod="coredns-6f6b679f8f-hx7ds" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hx7ds-eth0" Dec 13 13:28:51.005294 containerd[1498]: 2024-12-13 13:28:50.944 [INFO][4744] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6e7d5c0df303d359c385f855290f5a6232ca5823633aa58458d7aa7063cf25df" Namespace="kube-system" Pod="coredns-6f6b679f8f-hx7ds" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hx7ds-eth0" Dec 13 
13:28:51.005294 containerd[1498]: 2024-12-13 13:28:50.944 [INFO][4744] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6e7d5c0df303d359c385f855290f5a6232ca5823633aa58458d7aa7063cf25df" Namespace="kube-system" Pod="coredns-6f6b679f8f-hx7ds" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hx7ds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hx7ds-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"cf27bd25-cb26-447c-8c84-23dc77a1d6bc", ResourceVersion:"681", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 28, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6e7d5c0df303d359c385f855290f5a6232ca5823633aa58458d7aa7063cf25df", Pod:"coredns-6f6b679f8f-hx7ds", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali72c738b0591", MAC:"4a:cf:32:3e:f1:06", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:28:51.005294 containerd[1498]: 2024-12-13 13:28:51.000 [INFO][4744] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6e7d5c0df303d359c385f855290f5a6232ca5823633aa58458d7aa7063cf25df" Namespace="kube-system" Pod="coredns-6f6b679f8f-hx7ds" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hx7ds-eth0" Dec 13 13:28:51.044447 containerd[1498]: time="2024-12-13T13:28:51.044337865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:28:51.044447 containerd[1498]: time="2024-12-13T13:28:51.044405682Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:28:51.044447 containerd[1498]: time="2024-12-13T13:28:51.044422603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:51.045110 containerd[1498]: time="2024-12-13T13:28:51.044968651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:51.075973 systemd[1]: Started cri-containerd-c885073466094109820790a6dd2e7d3ad0beb4ae9762927294ef86dfb39351e5.scope - libcontainer container c885073466094109820790a6dd2e7d3ad0beb4ae9762927294ef86dfb39351e5. Dec 13 13:28:51.088191 containerd[1498]: time="2024-12-13T13:28:51.087325299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:28:51.088191 containerd[1498]: time="2024-12-13T13:28:51.087990110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:28:51.088191 containerd[1498]: time="2024-12-13T13:28:51.088003887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:51.088191 containerd[1498]: time="2024-12-13T13:28:51.088081452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:51.089475 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:28:51.113013 systemd[1]: Started cri-containerd-6e7d5c0df303d359c385f855290f5a6232ca5823633aa58458d7aa7063cf25df.scope - libcontainer container 6e7d5c0df303d359c385f855290f5a6232ca5823633aa58458d7aa7063cf25df. Dec 13 13:28:51.124203 containerd[1498]: time="2024-12-13T13:28:51.124116134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc76d6748-ck574,Uid:4fbe5ee0-22f5-48d1-ae2e-a2288a7c8d4e,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"c885073466094109820790a6dd2e7d3ad0beb4ae9762927294ef86dfb39351e5\"" Dec 13 13:28:51.127966 containerd[1498]: time="2024-12-13T13:28:51.127887321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 13:28:51.131684 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:28:51.155810 containerd[1498]: time="2024-12-13T13:28:51.155771325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hx7ds,Uid:cf27bd25-cb26-447c-8c84-23dc77a1d6bc,Namespace:kube-system,Attempt:6,} returns sandbox id \"6e7d5c0df303d359c385f855290f5a6232ca5823633aa58458d7aa7063cf25df\"" Dec 13 13:28:51.156376 kubelet[2617]: E1213 13:28:51.156354 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:51.157982 containerd[1498]: time="2024-12-13T13:28:51.157948131Z" level=info msg="CreateContainer within sandbox \"6e7d5c0df303d359c385f855290f5a6232ca5823633aa58458d7aa7063cf25df\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 13:28:51.219968 containerd[1498]: time="2024-12-13T13:28:51.219875288Z" level=info msg="CreateContainer within sandbox \"6e7d5c0df303d359c385f855290f5a6232ca5823633aa58458d7aa7063cf25df\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cbddb06f8a1cc7697bfaaab907b63606a1adaa4afba9c1e074043dcf55c9275a\"" Dec 13 13:28:51.221530 containerd[1498]: time="2024-12-13T13:28:51.221379158Z" level=info msg="StartContainer for \"cbddb06f8a1cc7697bfaaab907b63606a1adaa4afba9c1e074043dcf55c9275a\"" Dec 13 13:28:51.254668 systemd-networkd[1424]: calia2e5ea28b83: Link UP Dec 13 13:28:51.256101 systemd[1]: Started cri-containerd-cbddb06f8a1cc7697bfaaab907b63606a1adaa4afba9c1e074043dcf55c9275a.scope - libcontainer container cbddb06f8a1cc7697bfaaab907b63606a1adaa4afba9c1e074043dcf55c9275a. 
Dec 13 13:28:51.258901 systemd-networkd[1424]: calia2e5ea28b83: Gained carrier Dec 13 13:28:51.277673 containerd[1498]: 2024-12-13 13:28:50.048 [INFO][4780] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:28:51.277673 containerd[1498]: 2024-12-13 13:28:50.070 [INFO][4780] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--sqmg9-eth0 csi-node-driver- calico-system 169b145c-9dd2-4ef7-8f30-2acc264f69a4 589 0 2024-12-13 13:28:33 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-sqmg9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia2e5ea28b83 [] []}} ContainerID="0867aa201dfa1c2e2dced4b5da01aa86190caca52365d41bbcb2a9efd5df7db5" Namespace="calico-system" Pod="csi-node-driver-sqmg9" WorkloadEndpoint="localhost-k8s-csi--node--driver--sqmg9-" Dec 13 13:28:51.277673 containerd[1498]: 2024-12-13 13:28:50.070 [INFO][4780] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0867aa201dfa1c2e2dced4b5da01aa86190caca52365d41bbcb2a9efd5df7db5" Namespace="calico-system" Pod="csi-node-driver-sqmg9" WorkloadEndpoint="localhost-k8s-csi--node--driver--sqmg9-eth0" Dec 13 13:28:51.277673 containerd[1498]: 2024-12-13 13:28:50.393 [INFO][4855] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0867aa201dfa1c2e2dced4b5da01aa86190caca52365d41bbcb2a9efd5df7db5" HandleID="k8s-pod-network.0867aa201dfa1c2e2dced4b5da01aa86190caca52365d41bbcb2a9efd5df7db5" Workload="localhost-k8s-csi--node--driver--sqmg9-eth0" Dec 13 13:28:51.277673 containerd[1498]: 2024-12-13 13:28:50.420 [INFO][4855] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="0867aa201dfa1c2e2dced4b5da01aa86190caca52365d41bbcb2a9efd5df7db5" HandleID="k8s-pod-network.0867aa201dfa1c2e2dced4b5da01aa86190caca52365d41bbcb2a9efd5df7db5" Workload="localhost-k8s-csi--node--driver--sqmg9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000309af0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-sqmg9", "timestamp":"2024-12-13 13:28:50.393115277 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:28:51.277673 containerd[1498]: 2024-12-13 13:28:50.420 [INFO][4855] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:28:51.277673 containerd[1498]: 2024-12-13 13:28:50.938 [INFO][4855] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 13:28:51.277673 containerd[1498]: 2024-12-13 13:28:50.938 [INFO][4855] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 13:28:51.277673 containerd[1498]: 2024-12-13 13:28:50.974 [INFO][4855] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0867aa201dfa1c2e2dced4b5da01aa86190caca52365d41bbcb2a9efd5df7db5" host="localhost" Dec 13 13:28:51.277673 containerd[1498]: 2024-12-13 13:28:51.071 [INFO][4855] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 13:28:51.277673 containerd[1498]: 2024-12-13 13:28:51.119 [INFO][4855] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 13:28:51.277673 containerd[1498]: 2024-12-13 13:28:51.188 [INFO][4855] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 13:28:51.277673 containerd[1498]: 2024-12-13 13:28:51.192 [INFO][4855] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Dec 13 13:28:51.277673 containerd[1498]: 2024-12-13 13:28:51.192 [INFO][4855] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0867aa201dfa1c2e2dced4b5da01aa86190caca52365d41bbcb2a9efd5df7db5" host="localhost" Dec 13 13:28:51.277673 containerd[1498]: 2024-12-13 13:28:51.212 [INFO][4855] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0867aa201dfa1c2e2dced4b5da01aa86190caca52365d41bbcb2a9efd5df7db5 Dec 13 13:28:51.277673 containerd[1498]: 2024-12-13 13:28:51.224 [INFO][4855] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0867aa201dfa1c2e2dced4b5da01aa86190caca52365d41bbcb2a9efd5df7db5" host="localhost" Dec 13 13:28:51.277673 containerd[1498]: 2024-12-13 13:28:51.232 [INFO][4855] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.0867aa201dfa1c2e2dced4b5da01aa86190caca52365d41bbcb2a9efd5df7db5" host="localhost" Dec 13 13:28:51.277673 containerd[1498]: 2024-12-13 13:28:51.232 [INFO][4855] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.0867aa201dfa1c2e2dced4b5da01aa86190caca52365d41bbcb2a9efd5df7db5" host="localhost" Dec 13 13:28:51.277673 containerd[1498]: 2024-12-13 13:28:51.232 [INFO][4855] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 13:28:51.277673 containerd[1498]: 2024-12-13 13:28:51.232 [INFO][4855] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="0867aa201dfa1c2e2dced4b5da01aa86190caca52365d41bbcb2a9efd5df7db5" HandleID="k8s-pod-network.0867aa201dfa1c2e2dced4b5da01aa86190caca52365d41bbcb2a9efd5df7db5" Workload="localhost-k8s-csi--node--driver--sqmg9-eth0" Dec 13 13:28:51.278555 containerd[1498]: 2024-12-13 13:28:51.237 [INFO][4780] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0867aa201dfa1c2e2dced4b5da01aa86190caca52365d41bbcb2a9efd5df7db5" Namespace="calico-system" Pod="csi-node-driver-sqmg9" WorkloadEndpoint="localhost-k8s-csi--node--driver--sqmg9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sqmg9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"169b145c-9dd2-4ef7-8f30-2acc264f69a4", ResourceVersion:"589", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-sqmg9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia2e5ea28b83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:28:51.278555 containerd[1498]: 2024-12-13 13:28:51.239 [INFO][4780] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="0867aa201dfa1c2e2dced4b5da01aa86190caca52365d41bbcb2a9efd5df7db5" Namespace="calico-system" Pod="csi-node-driver-sqmg9" WorkloadEndpoint="localhost-k8s-csi--node--driver--sqmg9-eth0" Dec 13 13:28:51.278555 containerd[1498]: 2024-12-13 13:28:51.239 [INFO][4780] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2e5ea28b83 ContainerID="0867aa201dfa1c2e2dced4b5da01aa86190caca52365d41bbcb2a9efd5df7db5" Namespace="calico-system" Pod="csi-node-driver-sqmg9" WorkloadEndpoint="localhost-k8s-csi--node--driver--sqmg9-eth0" Dec 13 13:28:51.278555 containerd[1498]: 2024-12-13 13:28:51.259 [INFO][4780] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0867aa201dfa1c2e2dced4b5da01aa86190caca52365d41bbcb2a9efd5df7db5" Namespace="calico-system" Pod="csi-node-driver-sqmg9" WorkloadEndpoint="localhost-k8s-csi--node--driver--sqmg9-eth0" Dec 13 13:28:51.278555 containerd[1498]: 2024-12-13 13:28:51.260 [INFO][4780] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0867aa201dfa1c2e2dced4b5da01aa86190caca52365d41bbcb2a9efd5df7db5" Namespace="calico-system" Pod="csi-node-driver-sqmg9" WorkloadEndpoint="localhost-k8s-csi--node--driver--sqmg9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sqmg9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"169b145c-9dd2-4ef7-8f30-2acc264f69a4", ResourceVersion:"589", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 28, 33, 
0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0867aa201dfa1c2e2dced4b5da01aa86190caca52365d41bbcb2a9efd5df7db5", Pod:"csi-node-driver-sqmg9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia2e5ea28b83", MAC:"62:06:8d:e7:c3:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:28:51.278555 containerd[1498]: 2024-12-13 13:28:51.274 [INFO][4780] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0867aa201dfa1c2e2dced4b5da01aa86190caca52365d41bbcb2a9efd5df7db5" Namespace="calico-system" Pod="csi-node-driver-sqmg9" WorkloadEndpoint="localhost-k8s-csi--node--driver--sqmg9-eth0" Dec 13 13:28:51.295949 systemd-networkd[1424]: calide03774dd1d: Link UP Dec 13 13:28:51.296952 systemd-networkd[1424]: calide03774dd1d: Gained carrier Dec 13 13:28:51.298119 containerd[1498]: time="2024-12-13T13:28:51.298021336Z" level=info msg="StartContainer for \"cbddb06f8a1cc7697bfaaab907b63606a1adaa4afba9c1e074043dcf55c9275a\" returns successfully" Dec 13 13:28:51.310301 containerd[1498]: time="2024-12-13T13:28:51.310075360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:28:51.310301 containerd[1498]: time="2024-12-13T13:28:51.310125975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:28:51.310301 containerd[1498]: time="2024-12-13T13:28:51.310139461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:51.310301 containerd[1498]: time="2024-12-13T13:28:51.310229089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:51.321062 containerd[1498]: 2024-12-13 13:28:50.024 [INFO][4798] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:28:51.321062 containerd[1498]: 2024-12-13 13:28:50.041 [INFO][4798] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6579dc67df--9c4m8-eth0 calico-kube-controllers-6579dc67df- calico-system ecc6ca41-b371-43b9-9b23-fe21358ee632 682 0 2024-12-13 13:28:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6579dc67df projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6579dc67df-9c4m8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calide03774dd1d [] []}} ContainerID="2f45b4e7f60dd5183dc9181a856fe40c01d124ff00ac78dc007e73d66cb3f498" Namespace="calico-system" Pod="calico-kube-controllers-6579dc67df-9c4m8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6579dc67df--9c4m8-" Dec 13 13:28:51.321062 containerd[1498]: 2024-12-13 13:28:50.041 [INFO][4798] cni-plugin/k8s.go 77: Extracted identifiers for 
CmdAddK8s ContainerID="2f45b4e7f60dd5183dc9181a856fe40c01d124ff00ac78dc007e73d66cb3f498" Namespace="calico-system" Pod="calico-kube-controllers-6579dc67df-9c4m8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6579dc67df--9c4m8-eth0" Dec 13 13:28:51.321062 containerd[1498]: 2024-12-13 13:28:50.390 [INFO][4838] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f45b4e7f60dd5183dc9181a856fe40c01d124ff00ac78dc007e73d66cb3f498" HandleID="k8s-pod-network.2f45b4e7f60dd5183dc9181a856fe40c01d124ff00ac78dc007e73d66cb3f498" Workload="localhost-k8s-calico--kube--controllers--6579dc67df--9c4m8-eth0" Dec 13 13:28:51.321062 containerd[1498]: 2024-12-13 13:28:50.423 [INFO][4838] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2f45b4e7f60dd5183dc9181a856fe40c01d124ff00ac78dc007e73d66cb3f498" HandleID="k8s-pod-network.2f45b4e7f60dd5183dc9181a856fe40c01d124ff00ac78dc007e73d66cb3f498" Workload="localhost-k8s-calico--kube--controllers--6579dc67df--9c4m8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000375390), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6579dc67df-9c4m8", "timestamp":"2024-12-13 13:28:50.390853309 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:28:51.321062 containerd[1498]: 2024-12-13 13:28:50.423 [INFO][4838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:28:51.321062 containerd[1498]: 2024-12-13 13:28:51.232 [INFO][4838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 13:28:51.321062 containerd[1498]: 2024-12-13 13:28:51.232 [INFO][4838] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 13:28:51.321062 containerd[1498]: 2024-12-13 13:28:51.237 [INFO][4838] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2f45b4e7f60dd5183dc9181a856fe40c01d124ff00ac78dc007e73d66cb3f498" host="localhost" Dec 13 13:28:51.321062 containerd[1498]: 2024-12-13 13:28:51.243 [INFO][4838] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 13:28:51.321062 containerd[1498]: 2024-12-13 13:28:51.255 [INFO][4838] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 13:28:51.321062 containerd[1498]: 2024-12-13 13:28:51.260 [INFO][4838] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 13:28:51.321062 containerd[1498]: 2024-12-13 13:28:51.265 [INFO][4838] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 13:28:51.321062 containerd[1498]: 2024-12-13 13:28:51.265 [INFO][4838] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2f45b4e7f60dd5183dc9181a856fe40c01d124ff00ac78dc007e73d66cb3f498" host="localhost" Dec 13 13:28:51.321062 containerd[1498]: 2024-12-13 13:28:51.271 [INFO][4838] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2f45b4e7f60dd5183dc9181a856fe40c01d124ff00ac78dc007e73d66cb3f498 Dec 13 13:28:51.321062 containerd[1498]: 2024-12-13 13:28:51.277 [INFO][4838] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2f45b4e7f60dd5183dc9181a856fe40c01d124ff00ac78dc007e73d66cb3f498" host="localhost" Dec 13 13:28:51.321062 containerd[1498]: 2024-12-13 13:28:51.284 [INFO][4838] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.2f45b4e7f60dd5183dc9181a856fe40c01d124ff00ac78dc007e73d66cb3f498" host="localhost" Dec 13 13:28:51.321062 containerd[1498]: 2024-12-13 13:28:51.284 [INFO][4838] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.2f45b4e7f60dd5183dc9181a856fe40c01d124ff00ac78dc007e73d66cb3f498" host="localhost" Dec 13 13:28:51.321062 containerd[1498]: 2024-12-13 13:28:51.284 [INFO][4838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:28:51.321062 containerd[1498]: 2024-12-13 13:28:51.284 [INFO][4838] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="2f45b4e7f60dd5183dc9181a856fe40c01d124ff00ac78dc007e73d66cb3f498" HandleID="k8s-pod-network.2f45b4e7f60dd5183dc9181a856fe40c01d124ff00ac78dc007e73d66cb3f498" Workload="localhost-k8s-calico--kube--controllers--6579dc67df--9c4m8-eth0" Dec 13 13:28:51.321680 containerd[1498]: 2024-12-13 13:28:51.289 [INFO][4798] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2f45b4e7f60dd5183dc9181a856fe40c01d124ff00ac78dc007e73d66cb3f498" Namespace="calico-system" Pod="calico-kube-controllers-6579dc67df-9c4m8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6579dc67df--9c4m8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6579dc67df--9c4m8-eth0", GenerateName:"calico-kube-controllers-6579dc67df-", Namespace:"calico-system", SelfLink:"", UID:"ecc6ca41-b371-43b9-9b23-fe21358ee632", ResourceVersion:"682", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6579dc67df", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6579dc67df-9c4m8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calide03774dd1d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:28:51.321680 containerd[1498]: 2024-12-13 13:28:51.289 [INFO][4798] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="2f45b4e7f60dd5183dc9181a856fe40c01d124ff00ac78dc007e73d66cb3f498" Namespace="calico-system" Pod="calico-kube-controllers-6579dc67df-9c4m8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6579dc67df--9c4m8-eth0" Dec 13 13:28:51.321680 containerd[1498]: 2024-12-13 13:28:51.290 [INFO][4798] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide03774dd1d ContainerID="2f45b4e7f60dd5183dc9181a856fe40c01d124ff00ac78dc007e73d66cb3f498" Namespace="calico-system" Pod="calico-kube-controllers-6579dc67df-9c4m8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6579dc67df--9c4m8-eth0" Dec 13 13:28:51.321680 containerd[1498]: 2024-12-13 13:28:51.297 [INFO][4798] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f45b4e7f60dd5183dc9181a856fe40c01d124ff00ac78dc007e73d66cb3f498" Namespace="calico-system" Pod="calico-kube-controllers-6579dc67df-9c4m8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6579dc67df--9c4m8-eth0" Dec 13 13:28:51.321680 containerd[1498]: 2024-12-13 13:28:51.298 [INFO][4798] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2f45b4e7f60dd5183dc9181a856fe40c01d124ff00ac78dc007e73d66cb3f498" Namespace="calico-system" Pod="calico-kube-controllers-6579dc67df-9c4m8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6579dc67df--9c4m8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6579dc67df--9c4m8-eth0", GenerateName:"calico-kube-controllers-6579dc67df-", Namespace:"calico-system", SelfLink:"", UID:"ecc6ca41-b371-43b9-9b23-fe21358ee632", ResourceVersion:"682", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6579dc67df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f45b4e7f60dd5183dc9181a856fe40c01d124ff00ac78dc007e73d66cb3f498", Pod:"calico-kube-controllers-6579dc67df-9c4m8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calide03774dd1d", MAC:"32:be:0a:f9:90:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:28:51.321680 containerd[1498]: 2024-12-13 13:28:51.316 [INFO][4798] cni-plugin/k8s.go 500: Wrote 
updated endpoint to datastore ContainerID="2f45b4e7f60dd5183dc9181a856fe40c01d124ff00ac78dc007e73d66cb3f498" Namespace="calico-system" Pod="calico-kube-controllers-6579dc67df-9c4m8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6579dc67df--9c4m8-eth0" Dec 13 13:28:51.326661 systemd[1]: Started sshd@9-10.0.0.95:22-10.0.0.1:45190.service - OpenSSH per-connection server daemon (10.0.0.1:45190). Dec 13 13:28:51.333232 systemd[1]: Started cri-containerd-0867aa201dfa1c2e2dced4b5da01aa86190caca52365d41bbcb2a9efd5df7db5.scope - libcontainer container 0867aa201dfa1c2e2dced4b5da01aa86190caca52365d41bbcb2a9efd5df7db5. Dec 13 13:28:51.355960 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:28:51.394512 containerd[1498]: time="2024-12-13T13:28:51.394480457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sqmg9,Uid:169b145c-9dd2-4ef7-8f30-2acc264f69a4,Namespace:calico-system,Attempt:5,} returns sandbox id \"0867aa201dfa1c2e2dced4b5da01aa86190caca52365d41bbcb2a9efd5df7db5\"" Dec 13 13:28:51.394959 systemd-networkd[1424]: calic56da11440a: Link UP Dec 13 13:28:51.395206 systemd-networkd[1424]: calic56da11440a: Gained carrier Dec 13 13:28:51.398129 containerd[1498]: time="2024-12-13T13:28:51.396609815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:28:51.398129 containerd[1498]: time="2024-12-13T13:28:51.396670930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:28:51.398129 containerd[1498]: time="2024-12-13T13:28:51.396681199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:51.398129 containerd[1498]: time="2024-12-13T13:28:51.396768403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:51.400913 sshd[5161]: Accepted publickey for core from 10.0.0.1 port 45190 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:28:51.402632 sshd-session[5161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:51.409137 containerd[1498]: 2024-12-13 13:28:49.982 [INFO][4755] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:28:51.409137 containerd[1498]: 2024-12-13 13:28:50.003 [INFO][4755] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7dc76d6748--5xdq6-eth0 calico-apiserver-7dc76d6748- calico-apiserver 27f04941-3371-4139-bdc8-8fa4b3ff5199 684 0 2024-12-13 13:28:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7dc76d6748 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7dc76d6748-5xdq6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic56da11440a [] []}} ContainerID="53932b083ba128492df870cca8c625834f346a19983ef8b10a3d073436edc56f" Namespace="calico-apiserver" Pod="calico-apiserver-7dc76d6748-5xdq6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc76d6748--5xdq6-" Dec 13 13:28:51.409137 containerd[1498]: 2024-12-13 13:28:50.003 [INFO][4755] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="53932b083ba128492df870cca8c625834f346a19983ef8b10a3d073436edc56f" Namespace="calico-apiserver" Pod="calico-apiserver-7dc76d6748-5xdq6" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc76d6748--5xdq6-eth0" Dec 13 13:28:51.409137 containerd[1498]: 2024-12-13 13:28:50.400 [INFO][4820] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="53932b083ba128492df870cca8c625834f346a19983ef8b10a3d073436edc56f" HandleID="k8s-pod-network.53932b083ba128492df870cca8c625834f346a19983ef8b10a3d073436edc56f" Workload="localhost-k8s-calico--apiserver--7dc76d6748--5xdq6-eth0" Dec 13 13:28:51.409137 containerd[1498]: 2024-12-13 13:28:50.424 [INFO][4820] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="53932b083ba128492df870cca8c625834f346a19983ef8b10a3d073436edc56f" HandleID="k8s-pod-network.53932b083ba128492df870cca8c625834f346a19983ef8b10a3d073436edc56f" Workload="localhost-k8s-calico--apiserver--7dc76d6748--5xdq6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003320a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7dc76d6748-5xdq6", "timestamp":"2024-12-13 13:28:50.400583665 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:28:51.409137 containerd[1498]: 2024-12-13 13:28:50.424 [INFO][4820] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:28:51.409137 containerd[1498]: 2024-12-13 13:28:51.284 [INFO][4820] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 13:28:51.409137 containerd[1498]: 2024-12-13 13:28:51.284 [INFO][4820] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 13:28:51.409137 containerd[1498]: 2024-12-13 13:28:51.337 [INFO][4820] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.53932b083ba128492df870cca8c625834f346a19983ef8b10a3d073436edc56f" host="localhost" Dec 13 13:28:51.409137 containerd[1498]: 2024-12-13 13:28:51.344 [INFO][4820] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 13:28:51.409137 containerd[1498]: 2024-12-13 13:28:51.357 [INFO][4820] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 13:28:51.409137 containerd[1498]: 2024-12-13 13:28:51.359 [INFO][4820] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 13:28:51.409137 containerd[1498]: 2024-12-13 13:28:51.361 [INFO][4820] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 13:28:51.409137 containerd[1498]: 2024-12-13 13:28:51.361 [INFO][4820] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.53932b083ba128492df870cca8c625834f346a19983ef8b10a3d073436edc56f" host="localhost" Dec 13 13:28:51.409137 containerd[1498]: 2024-12-13 13:28:51.362 [INFO][4820] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.53932b083ba128492df870cca8c625834f346a19983ef8b10a3d073436edc56f Dec 13 13:28:51.409137 containerd[1498]: 2024-12-13 13:28:51.366 [INFO][4820] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.53932b083ba128492df870cca8c625834f346a19983ef8b10a3d073436edc56f" host="localhost" Dec 13 13:28:51.409137 containerd[1498]: 2024-12-13 13:28:51.378 [INFO][4820] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.53932b083ba128492df870cca8c625834f346a19983ef8b10a3d073436edc56f" host="localhost" Dec 13 13:28:51.409137 containerd[1498]: 2024-12-13 13:28:51.378 [INFO][4820] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.53932b083ba128492df870cca8c625834f346a19983ef8b10a3d073436edc56f" host="localhost" Dec 13 13:28:51.409137 containerd[1498]: 2024-12-13 13:28:51.378 [INFO][4820] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:28:51.409137 containerd[1498]: 2024-12-13 13:28:51.378 [INFO][4820] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="53932b083ba128492df870cca8c625834f346a19983ef8b10a3d073436edc56f" HandleID="k8s-pod-network.53932b083ba128492df870cca8c625834f346a19983ef8b10a3d073436edc56f" Workload="localhost-k8s-calico--apiserver--7dc76d6748--5xdq6-eth0" Dec 13 13:28:51.409717 containerd[1498]: 2024-12-13 13:28:51.384 [INFO][4755] cni-plugin/k8s.go 386: Populated endpoint ContainerID="53932b083ba128492df870cca8c625834f346a19983ef8b10a3d073436edc56f" Namespace="calico-apiserver" Pod="calico-apiserver-7dc76d6748-5xdq6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc76d6748--5xdq6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7dc76d6748--5xdq6-eth0", GenerateName:"calico-apiserver-7dc76d6748-", Namespace:"calico-apiserver", SelfLink:"", UID:"27f04941-3371-4139-bdc8-8fa4b3ff5199", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dc76d6748", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7dc76d6748-5xdq6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic56da11440a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:28:51.409717 containerd[1498]: 2024-12-13 13:28:51.384 [INFO][4755] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="53932b083ba128492df870cca8c625834f346a19983ef8b10a3d073436edc56f" Namespace="calico-apiserver" Pod="calico-apiserver-7dc76d6748-5xdq6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc76d6748--5xdq6-eth0" Dec 13 13:28:51.409717 containerd[1498]: 2024-12-13 13:28:51.384 [INFO][4755] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic56da11440a ContainerID="53932b083ba128492df870cca8c625834f346a19983ef8b10a3d073436edc56f" Namespace="calico-apiserver" Pod="calico-apiserver-7dc76d6748-5xdq6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc76d6748--5xdq6-eth0" Dec 13 13:28:51.409717 containerd[1498]: 2024-12-13 13:28:51.397 [INFO][4755] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="53932b083ba128492df870cca8c625834f346a19983ef8b10a3d073436edc56f" Namespace="calico-apiserver" Pod="calico-apiserver-7dc76d6748-5xdq6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc76d6748--5xdq6-eth0" Dec 13 13:28:51.409717 containerd[1498]: 2024-12-13 13:28:51.397 [INFO][4755] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="53932b083ba128492df870cca8c625834f346a19983ef8b10a3d073436edc56f" Namespace="calico-apiserver" Pod="calico-apiserver-7dc76d6748-5xdq6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc76d6748--5xdq6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7dc76d6748--5xdq6-eth0", GenerateName:"calico-apiserver-7dc76d6748-", Namespace:"calico-apiserver", SelfLink:"", UID:"27f04941-3371-4139-bdc8-8fa4b3ff5199", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dc76d6748", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"53932b083ba128492df870cca8c625834f346a19983ef8b10a3d073436edc56f", Pod:"calico-apiserver-7dc76d6748-5xdq6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic56da11440a", MAC:"12:f1:c7:5b:5e:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:28:51.409717 containerd[1498]: 2024-12-13 13:28:51.406 [INFO][4755] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="53932b083ba128492df870cca8c625834f346a19983ef8b10a3d073436edc56f" Namespace="calico-apiserver" Pod="calico-apiserver-7dc76d6748-5xdq6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc76d6748--5xdq6-eth0" Dec 13 13:28:51.413399 systemd-logind[1484]: New session 10 of user core. Dec 13 13:28:51.417056 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 13:28:51.422202 systemd[1]: Started cri-containerd-2f45b4e7f60dd5183dc9181a856fe40c01d124ff00ac78dc007e73d66cb3f498.scope - libcontainer container 2f45b4e7f60dd5183dc9181a856fe40c01d124ff00ac78dc007e73d66cb3f498. Dec 13 13:28:51.432847 containerd[1498]: time="2024-12-13T13:28:51.432688369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:28:51.432847 containerd[1498]: time="2024-12-13T13:28:51.432770794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:28:51.433087 containerd[1498]: time="2024-12-13T13:28:51.432979066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:51.434147 containerd[1498]: time="2024-12-13T13:28:51.433970812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:51.441221 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:28:51.454067 systemd[1]: Started cri-containerd-53932b083ba128492df870cca8c625834f346a19983ef8b10a3d073436edc56f.scope - libcontainer container 53932b083ba128492df870cca8c625834f346a19983ef8b10a3d073436edc56f. 
Dec 13 13:28:51.470126 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:28:51.470468 containerd[1498]: time="2024-12-13T13:28:51.469747198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6579dc67df-9c4m8,Uid:ecc6ca41-b371-43b9-9b23-fe21358ee632,Namespace:calico-system,Attempt:6,} returns sandbox id \"2f45b4e7f60dd5183dc9181a856fe40c01d124ff00ac78dc007e73d66cb3f498\"" Dec 13 13:28:51.484290 systemd-networkd[1424]: calif35a8aa7cbd: Link UP Dec 13 13:28:51.484675 systemd-networkd[1424]: calif35a8aa7cbd: Gained carrier Dec 13 13:28:51.505566 containerd[1498]: 2024-12-13 13:28:50.029 [INFO][4783] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:28:51.505566 containerd[1498]: 2024-12-13 13:28:50.057 [INFO][4783] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--69bzb-eth0 coredns-6f6b679f8f- kube-system a034a49a-558f-4b5e-a56a-f248911e85ff 677 0 2024-12-13 13:28:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-69bzb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif35a8aa7cbd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8cd9c349475f217c29b77df1508f3699e5a0d8d466e23c26e4bf45e879fcf6aa" Namespace="kube-system" Pod="coredns-6f6b679f8f-69bzb" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--69bzb-" Dec 13 13:28:51.505566 containerd[1498]: 2024-12-13 13:28:50.057 [INFO][4783] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8cd9c349475f217c29b77df1508f3699e5a0d8d466e23c26e4bf45e879fcf6aa" Namespace="kube-system" Pod="coredns-6f6b679f8f-69bzb" 
WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--69bzb-eth0" Dec 13 13:28:51.505566 containerd[1498]: 2024-12-13 13:28:50.419 [INFO][4847] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8cd9c349475f217c29b77df1508f3699e5a0d8d466e23c26e4bf45e879fcf6aa" HandleID="k8s-pod-network.8cd9c349475f217c29b77df1508f3699e5a0d8d466e23c26e4bf45e879fcf6aa" Workload="localhost-k8s-coredns--6f6b679f8f--69bzb-eth0" Dec 13 13:28:51.505566 containerd[1498]: 2024-12-13 13:28:50.428 [INFO][4847] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8cd9c349475f217c29b77df1508f3699e5a0d8d466e23c26e4bf45e879fcf6aa" HandleID="k8s-pod-network.8cd9c349475f217c29b77df1508f3699e5a0d8d466e23c26e4bf45e879fcf6aa" Workload="localhost-k8s-coredns--6f6b679f8f--69bzb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00028c250), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-69bzb", "timestamp":"2024-12-13 13:28:50.419675751 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:28:51.505566 containerd[1498]: 2024-12-13 13:28:50.428 [INFO][4847] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:28:51.505566 containerd[1498]: 2024-12-13 13:28:51.378 [INFO][4847] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 13:28:51.505566 containerd[1498]: 2024-12-13 13:28:51.378 [INFO][4847] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 13:28:51.505566 containerd[1498]: 2024-12-13 13:28:51.440 [INFO][4847] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8cd9c349475f217c29b77df1508f3699e5a0d8d466e23c26e4bf45e879fcf6aa" host="localhost" Dec 13 13:28:51.505566 containerd[1498]: 2024-12-13 13:28:51.444 [INFO][4847] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 13:28:51.505566 containerd[1498]: 2024-12-13 13:28:51.451 [INFO][4847] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 13:28:51.505566 containerd[1498]: 2024-12-13 13:28:51.453 [INFO][4847] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 13:28:51.505566 containerd[1498]: 2024-12-13 13:28:51.455 [INFO][4847] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 13:28:51.505566 containerd[1498]: 2024-12-13 13:28:51.455 [INFO][4847] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8cd9c349475f217c29b77df1508f3699e5a0d8d466e23c26e4bf45e879fcf6aa" host="localhost" Dec 13 13:28:51.505566 containerd[1498]: 2024-12-13 13:28:51.457 [INFO][4847] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8cd9c349475f217c29b77df1508f3699e5a0d8d466e23c26e4bf45e879fcf6aa Dec 13 13:28:51.505566 containerd[1498]: 2024-12-13 13:28:51.463 [INFO][4847] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8cd9c349475f217c29b77df1508f3699e5a0d8d466e23c26e4bf45e879fcf6aa" host="localhost" Dec 13 13:28:51.505566 containerd[1498]: 2024-12-13 13:28:51.470 [INFO][4847] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.8cd9c349475f217c29b77df1508f3699e5a0d8d466e23c26e4bf45e879fcf6aa" host="localhost" Dec 13 13:28:51.505566 containerd[1498]: 2024-12-13 13:28:51.470 [INFO][4847] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.8cd9c349475f217c29b77df1508f3699e5a0d8d466e23c26e4bf45e879fcf6aa" host="localhost" Dec 13 13:28:51.505566 containerd[1498]: 2024-12-13 13:28:51.470 [INFO][4847] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:28:51.505566 containerd[1498]: 2024-12-13 13:28:51.470 [INFO][4847] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="8cd9c349475f217c29b77df1508f3699e5a0d8d466e23c26e4bf45e879fcf6aa" HandleID="k8s-pod-network.8cd9c349475f217c29b77df1508f3699e5a0d8d466e23c26e4bf45e879fcf6aa" Workload="localhost-k8s-coredns--6f6b679f8f--69bzb-eth0" Dec 13 13:28:51.506216 containerd[1498]: 2024-12-13 13:28:51.475 [INFO][4783] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8cd9c349475f217c29b77df1508f3699e5a0d8d466e23c26e4bf45e879fcf6aa" Namespace="kube-system" Pod="coredns-6f6b679f8f-69bzb" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--69bzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--69bzb-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a034a49a-558f-4b5e-a56a-f248911e85ff", ResourceVersion:"677", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 28, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-69bzb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif35a8aa7cbd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:28:51.506216 containerd[1498]: 2024-12-13 13:28:51.475 [INFO][4783] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="8cd9c349475f217c29b77df1508f3699e5a0d8d466e23c26e4bf45e879fcf6aa" Namespace="kube-system" Pod="coredns-6f6b679f8f-69bzb" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--69bzb-eth0" Dec 13 13:28:51.506216 containerd[1498]: 2024-12-13 13:28:51.476 [INFO][4783] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif35a8aa7cbd ContainerID="8cd9c349475f217c29b77df1508f3699e5a0d8d466e23c26e4bf45e879fcf6aa" Namespace="kube-system" Pod="coredns-6f6b679f8f-69bzb" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--69bzb-eth0" Dec 13 13:28:51.506216 containerd[1498]: 2024-12-13 13:28:51.484 [INFO][4783] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8cd9c349475f217c29b77df1508f3699e5a0d8d466e23c26e4bf45e879fcf6aa" Namespace="kube-system" Pod="coredns-6f6b679f8f-69bzb" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--69bzb-eth0" Dec 13 
13:28:51.506216 containerd[1498]: 2024-12-13 13:28:51.484 [INFO][4783] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8cd9c349475f217c29b77df1508f3699e5a0d8d466e23c26e4bf45e879fcf6aa" Namespace="kube-system" Pod="coredns-6f6b679f8f-69bzb" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--69bzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--69bzb-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a034a49a-558f-4b5e-a56a-f248911e85ff", ResourceVersion:"677", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 28, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8cd9c349475f217c29b77df1508f3699e5a0d8d466e23c26e4bf45e879fcf6aa", Pod:"coredns-6f6b679f8f-69bzb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif35a8aa7cbd", MAC:"3a:01:38:d1:f8:5c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:28:51.506216 containerd[1498]: 2024-12-13 13:28:51.496 [INFO][4783] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8cd9c349475f217c29b77df1508f3699e5a0d8d466e23c26e4bf45e879fcf6aa" Namespace="kube-system" Pod="coredns-6f6b679f8f-69bzb" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--69bzb-eth0" Dec 13 13:28:51.514636 containerd[1498]: time="2024-12-13T13:28:51.514059867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc76d6748-5xdq6,Uid:27f04941-3371-4139-bdc8-8fa4b3ff5199,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"53932b083ba128492df870cca8c625834f346a19983ef8b10a3d073436edc56f\"" Dec 13 13:28:51.537759 containerd[1498]: time="2024-12-13T13:28:51.536269903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:28:51.537759 containerd[1498]: time="2024-12-13T13:28:51.536324625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:28:51.537759 containerd[1498]: time="2024-12-13T13:28:51.536338562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:51.537759 containerd[1498]: time="2024-12-13T13:28:51.536423592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:51.558977 systemd[1]: Started cri-containerd-8cd9c349475f217c29b77df1508f3699e5a0d8d466e23c26e4bf45e879fcf6aa.scope - libcontainer container 8cd9c349475f217c29b77df1508f3699e5a0d8d466e23c26e4bf45e879fcf6aa. 
Dec 13 13:28:51.576895 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:28:51.620861 sshd[5233]: Connection closed by 10.0.0.1 port 45190 Dec 13 13:28:51.618608 sshd-session[5161]: pam_unix(sshd:session): session closed for user core Dec 13 13:28:51.623422 systemd[1]: sshd@9-10.0.0.95:22-10.0.0.1:45190.service: Deactivated successfully. Dec 13 13:28:51.625571 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 13:28:51.626412 containerd[1498]: time="2024-12-13T13:28:51.626383031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-69bzb,Uid:a034a49a-558f-4b5e-a56a-f248911e85ff,Namespace:kube-system,Attempt:6,} returns sandbox id \"8cd9c349475f217c29b77df1508f3699e5a0d8d466e23c26e4bf45e879fcf6aa\"" Dec 13 13:28:51.627643 kubelet[2617]: E1213 13:28:51.627306 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:51.627799 systemd-logind[1484]: Session 10 logged out. Waiting for processes to exit. Dec 13 13:28:51.630906 containerd[1498]: time="2024-12-13T13:28:51.630801457Z" level=info msg="CreateContainer within sandbox \"8cd9c349475f217c29b77df1508f3699e5a0d8d466e23c26e4bf45e879fcf6aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 13:28:51.631064 systemd-logind[1484]: Removed session 10. 
Dec 13 13:28:51.652092 containerd[1498]: time="2024-12-13T13:28:51.652043160Z" level=info msg="CreateContainer within sandbox \"8cd9c349475f217c29b77df1508f3699e5a0d8d466e23c26e4bf45e879fcf6aa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"56c1ce1ba606616d5967f455091290b8d72e51ab3cd2583dd239dad48336b489\"" Dec 13 13:28:51.652514 containerd[1498]: time="2024-12-13T13:28:51.652478840Z" level=info msg="StartContainer for \"56c1ce1ba606616d5967f455091290b8d72e51ab3cd2583dd239dad48336b489\"" Dec 13 13:28:51.684986 systemd[1]: Started cri-containerd-56c1ce1ba606616d5967f455091290b8d72e51ab3cd2583dd239dad48336b489.scope - libcontainer container 56c1ce1ba606616d5967f455091290b8d72e51ab3cd2583dd239dad48336b489. Dec 13 13:28:51.724934 containerd[1498]: time="2024-12-13T13:28:51.724894575Z" level=info msg="StartContainer for \"56c1ce1ba606616d5967f455091290b8d72e51ab3cd2583dd239dad48336b489\" returns successfully" Dec 13 13:28:51.848015 kubelet[2617]: E1213 13:28:51.847602 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:51.852249 kubelet[2617]: E1213 13:28:51.852226 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:51.868497 kubelet[2617]: I1213 13:28:51.868282 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-hx7ds" podStartSLOduration=25.868265096000002 podStartE2EDuration="25.868265096s" podCreationTimestamp="2024-12-13 13:28:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:28:51.867880562 +0000 UTC m=+30.733993283" watchObservedRunningTime="2024-12-13 13:28:51.868265096 +0000 UTC m=+30.734377817" Dec 13 
13:28:51.869596 systemd-networkd[1424]: caliccb0ef97f57: Gained IPv6LL Dec 13 13:28:51.872113 kubelet[2617]: I1213 13:28:51.872032 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-69bzb" podStartSLOduration=25.872018289 podStartE2EDuration="25.872018289s" podCreationTimestamp="2024-12-13 13:28:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:28:51.857811924 +0000 UTC m=+30.723924645" watchObservedRunningTime="2024-12-13 13:28:51.872018289 +0000 UTC m=+30.738131010" Dec 13 13:28:52.573370 systemd-networkd[1424]: calic56da11440a: Gained IPv6LL Dec 13 13:28:52.636984 systemd-networkd[1424]: cali72c738b0591: Gained IPv6LL Dec 13 13:28:52.701011 systemd-networkd[1424]: calif35a8aa7cbd: Gained IPv6LL Dec 13 13:28:52.765983 systemd-networkd[1424]: calia2e5ea28b83: Gained IPv6LL Dec 13 13:28:52.829096 systemd-networkd[1424]: calide03774dd1d: Gained IPv6LL Dec 13 13:28:52.857455 kubelet[2617]: E1213 13:28:52.857417 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:52.857455 kubelet[2617]: E1213 13:28:52.857454 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:53.575936 containerd[1498]: time="2024-12-13T13:28:53.575880890Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:53.576762 containerd[1498]: time="2024-12-13T13:28:53.576718777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Dec 13 13:28:53.577925 containerd[1498]: time="2024-12-13T13:28:53.577897253Z" 
level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:53.580190 containerd[1498]: time="2024-12-13T13:28:53.580151313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:53.580797 containerd[1498]: time="2024-12-13T13:28:53.580758857Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.452837932s" Dec 13 13:28:53.580797 containerd[1498]: time="2024-12-13T13:28:53.580785967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 13:28:53.581986 containerd[1498]: time="2024-12-13T13:28:53.581619275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 13:28:53.582600 containerd[1498]: time="2024-12-13T13:28:53.582579390Z" level=info msg="CreateContainer within sandbox \"c885073466094109820790a6dd2e7d3ad0beb4ae9762927294ef86dfb39351e5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 13:28:53.596652 containerd[1498]: time="2024-12-13T13:28:53.596604704Z" level=info msg="CreateContainer within sandbox \"c885073466094109820790a6dd2e7d3ad0beb4ae9762927294ef86dfb39351e5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e2d361236439eaa87ade5eb17534c8bea50585c7b3592d9dfc10a0d44871656c\"" Dec 13 13:28:53.597298 containerd[1498]: time="2024-12-13T13:28:53.597271749Z" 
level=info msg="StartContainer for \"e2d361236439eaa87ade5eb17534c8bea50585c7b3592d9dfc10a0d44871656c\"" Dec 13 13:28:53.638049 systemd[1]: Started cri-containerd-e2d361236439eaa87ade5eb17534c8bea50585c7b3592d9dfc10a0d44871656c.scope - libcontainer container e2d361236439eaa87ade5eb17534c8bea50585c7b3592d9dfc10a0d44871656c. Dec 13 13:28:53.882395 containerd[1498]: time="2024-12-13T13:28:53.882237659Z" level=info msg="StartContainer for \"e2d361236439eaa87ade5eb17534c8bea50585c7b3592d9dfc10a0d44871656c\" returns successfully" Dec 13 13:28:53.887611 kubelet[2617]: E1213 13:28:53.887396 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:53.887985 kubelet[2617]: E1213 13:28:53.887858 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:28:53.897574 kubelet[2617]: I1213 13:28:53.897511 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7dc76d6748-ck574" podStartSLOduration=18.443573639 podStartE2EDuration="20.897489549s" podCreationTimestamp="2024-12-13 13:28:33 +0000 UTC" firstStartedPulling="2024-12-13 13:28:51.127580043 +0000 UTC m=+29.993692764" lastFinishedPulling="2024-12-13 13:28:53.581495953 +0000 UTC m=+32.447608674" observedRunningTime="2024-12-13 13:28:53.89707012 +0000 UTC m=+32.763182841" watchObservedRunningTime="2024-12-13 13:28:53.897489549 +0000 UTC m=+32.763602270" Dec 13 13:28:54.889442 kubelet[2617]: I1213 13:28:54.889394 2617 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 13:28:55.651980 containerd[1498]: time="2024-12-13T13:28:55.651913296Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 
13:28:55.653566 containerd[1498]: time="2024-12-13T13:28:55.653509677Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 13:28:55.656451 containerd[1498]: time="2024-12-13T13:28:55.656397348Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:55.659205 containerd[1498]: time="2024-12-13T13:28:55.659168388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:55.659759 containerd[1498]: time="2024-12-13T13:28:55.659716669Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.078073179s" Dec 13 13:28:55.659759 containerd[1498]: time="2024-12-13T13:28:55.659753489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 13:28:55.660939 containerd[1498]: time="2024-12-13T13:28:55.660900826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 13:28:55.661881 containerd[1498]: time="2024-12-13T13:28:55.661824733Z" level=info msg="CreateContainer within sandbox \"0867aa201dfa1c2e2dced4b5da01aa86190caca52365d41bbcb2a9efd5df7db5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 13:28:55.691361 containerd[1498]: time="2024-12-13T13:28:55.691311191Z" level=info msg="CreateContainer within sandbox 
\"0867aa201dfa1c2e2dced4b5da01aa86190caca52365d41bbcb2a9efd5df7db5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0217046e9cecc6bd609b127cf2ca63594b258dce077812b52a3571bc3f0fcb1b\"" Dec 13 13:28:55.692151 containerd[1498]: time="2024-12-13T13:28:55.692090977Z" level=info msg="StartContainer for \"0217046e9cecc6bd609b127cf2ca63594b258dce077812b52a3571bc3f0fcb1b\"" Dec 13 13:28:55.724012 systemd[1]: Started cri-containerd-0217046e9cecc6bd609b127cf2ca63594b258dce077812b52a3571bc3f0fcb1b.scope - libcontainer container 0217046e9cecc6bd609b127cf2ca63594b258dce077812b52a3571bc3f0fcb1b. Dec 13 13:28:55.759571 containerd[1498]: time="2024-12-13T13:28:55.759523831Z" level=info msg="StartContainer for \"0217046e9cecc6bd609b127cf2ca63594b258dce077812b52a3571bc3f0fcb1b\" returns successfully" Dec 13 13:28:56.632798 systemd[1]: Started sshd@10-10.0.0.95:22-10.0.0.1:57992.service - OpenSSH per-connection server daemon (10.0.0.1:57992). Dec 13 13:28:56.690589 sshd[5603]: Accepted publickey for core from 10.0.0.1 port 57992 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:28:56.692419 sshd-session[5603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:56.696488 systemd-logind[1484]: New session 11 of user core. Dec 13 13:28:56.712949 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 13:28:56.840363 sshd[5605]: Connection closed by 10.0.0.1 port 57992 Dec 13 13:28:56.840860 sshd-session[5603]: pam_unix(sshd:session): session closed for user core Dec 13 13:28:56.844595 systemd[1]: sshd@10-10.0.0.95:22-10.0.0.1:57992.service: Deactivated successfully. Dec 13 13:28:56.846684 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 13:28:56.847406 systemd-logind[1484]: Session 11 logged out. Waiting for processes to exit. Dec 13 13:28:56.848989 systemd-logind[1484]: Removed session 11. 
Dec 13 13:28:57.441379 containerd[1498]: time="2024-12-13T13:28:57.441307311Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:57.442111 containerd[1498]: time="2024-12-13T13:28:57.442062680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Dec 13 13:28:57.443113 containerd[1498]: time="2024-12-13T13:28:57.443089711Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:57.445106 containerd[1498]: time="2024-12-13T13:28:57.445082217Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:57.445591 containerd[1498]: time="2024-12-13T13:28:57.445562330Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 1.784626979s" Dec 13 13:28:57.445641 containerd[1498]: time="2024-12-13T13:28:57.445589841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 13:28:57.446439 containerd[1498]: time="2024-12-13T13:28:57.446420392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 13:28:57.454164 containerd[1498]: time="2024-12-13T13:28:57.454114306Z" level=info msg="CreateContainer within sandbox 
\"2f45b4e7f60dd5183dc9181a856fe40c01d124ff00ac78dc007e73d66cb3f498\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 13:28:57.467994 containerd[1498]: time="2024-12-13T13:28:57.467948362Z" level=info msg="CreateContainer within sandbox \"2f45b4e7f60dd5183dc9181a856fe40c01d124ff00ac78dc007e73d66cb3f498\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"bb3731ef1352f352824c04908771350c760a67e856a39762f5826d820ed061e4\"" Dec 13 13:28:57.468405 containerd[1498]: time="2024-12-13T13:28:57.468382829Z" level=info msg="StartContainer for \"bb3731ef1352f352824c04908771350c760a67e856a39762f5826d820ed061e4\"" Dec 13 13:28:57.497948 systemd[1]: Started cri-containerd-bb3731ef1352f352824c04908771350c760a67e856a39762f5826d820ed061e4.scope - libcontainer container bb3731ef1352f352824c04908771350c760a67e856a39762f5826d820ed061e4. Dec 13 13:28:57.537667 containerd[1498]: time="2024-12-13T13:28:57.537615868Z" level=info msg="StartContainer for \"bb3731ef1352f352824c04908771350c760a67e856a39762f5826d820ed061e4\" returns successfully" Dec 13 13:28:57.916337 kubelet[2617]: I1213 13:28:57.916268 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6579dc67df-9c4m8" podStartSLOduration=18.941523652 podStartE2EDuration="24.916248711s" podCreationTimestamp="2024-12-13 13:28:33 +0000 UTC" firstStartedPulling="2024-12-13 13:28:51.471578965 +0000 UTC m=+30.337691686" lastFinishedPulling="2024-12-13 13:28:57.446304024 +0000 UTC m=+36.312416745" observedRunningTime="2024-12-13 13:28:57.915182536 +0000 UTC m=+36.781295257" watchObservedRunningTime="2024-12-13 13:28:57.916248711 +0000 UTC m=+36.782361432" Dec 13 13:28:57.928266 containerd[1498]: time="2024-12-13T13:28:57.928208824Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:57.929221 containerd[1498]: 
time="2024-12-13T13:28:57.929174599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 13:28:57.931949 containerd[1498]: time="2024-12-13T13:28:57.931904953Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 485.458652ms" Dec 13 13:28:57.931949 containerd[1498]: time="2024-12-13T13:28:57.931940459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 13:28:57.933902 containerd[1498]: time="2024-12-13T13:28:57.933878162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 13:28:57.934629 containerd[1498]: time="2024-12-13T13:28:57.934597544Z" level=info msg="CreateContainer within sandbox \"53932b083ba128492df870cca8c625834f346a19983ef8b10a3d073436edc56f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 13:28:57.952066 containerd[1498]: time="2024-12-13T13:28:57.952023072Z" level=info msg="CreateContainer within sandbox \"53932b083ba128492df870cca8c625834f346a19983ef8b10a3d073436edc56f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"756c9ff13bb3819e75ddf6ea9f59b819713c66001d318e119d66d9c8f8860d54\"" Dec 13 13:28:57.952564 containerd[1498]: time="2024-12-13T13:28:57.952524164Z" level=info msg="StartContainer for \"756c9ff13bb3819e75ddf6ea9f59b819713c66001d318e119d66d9c8f8860d54\"" Dec 13 13:28:57.981985 systemd[1]: Started cri-containerd-756c9ff13bb3819e75ddf6ea9f59b819713c66001d318e119d66d9c8f8860d54.scope - libcontainer container 
756c9ff13bb3819e75ddf6ea9f59b819713c66001d318e119d66d9c8f8860d54. Dec 13 13:28:58.023412 containerd[1498]: time="2024-12-13T13:28:58.023361264Z" level=info msg="StartContainer for \"756c9ff13bb3819e75ddf6ea9f59b819713c66001d318e119d66d9c8f8860d54\" returns successfully" Dec 13 13:28:58.912771 kubelet[2617]: I1213 13:28:58.912257 2617 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 13:28:58.921867 kubelet[2617]: I1213 13:28:58.921559 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7dc76d6748-5xdq6" podStartSLOduration=19.506432481 podStartE2EDuration="25.92153798s" podCreationTimestamp="2024-12-13 13:28:33 +0000 UTC" firstStartedPulling="2024-12-13 13:28:51.517469143 +0000 UTC m=+30.383581864" lastFinishedPulling="2024-12-13 13:28:57.932574642 +0000 UTC m=+36.798687363" observedRunningTime="2024-12-13 13:28:58.921279554 +0000 UTC m=+37.787392275" watchObservedRunningTime="2024-12-13 13:28:58.92153798 +0000 UTC m=+37.787650701" Dec 13 13:28:59.851161 containerd[1498]: time="2024-12-13T13:28:59.851095013Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:59.851896 containerd[1498]: time="2024-12-13T13:28:59.851856243Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 13:28:59.852920 containerd[1498]: time="2024-12-13T13:28:59.852880328Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:59.854938 containerd[1498]: time="2024-12-13T13:28:59.854889984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:59.855548 containerd[1498]: time="2024-12-13T13:28:59.855516542Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.921609547s" Dec 13 13:28:59.855596 containerd[1498]: time="2024-12-13T13:28:59.855548783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 13:28:59.857498 containerd[1498]: time="2024-12-13T13:28:59.857470204Z" level=info msg="CreateContainer within sandbox \"0867aa201dfa1c2e2dced4b5da01aa86190caca52365d41bbcb2a9efd5df7db5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 13:28:59.874042 containerd[1498]: time="2024-12-13T13:28:59.873995581Z" level=info msg="CreateContainer within sandbox \"0867aa201dfa1c2e2dced4b5da01aa86190caca52365d41bbcb2a9efd5df7db5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b84f490f61d49297226850373850e4ebafa2aabd078cf71f7b33c9e289545b57\"" Dec 13 13:28:59.874462 containerd[1498]: time="2024-12-13T13:28:59.874436219Z" level=info msg="StartContainer for \"b84f490f61d49297226850373850e4ebafa2aabd078cf71f7b33c9e289545b57\"" Dec 13 13:28:59.913026 systemd[1]: Started cri-containerd-b84f490f61d49297226850373850e4ebafa2aabd078cf71f7b33c9e289545b57.scope - libcontainer container b84f490f61d49297226850373850e4ebafa2aabd078cf71f7b33c9e289545b57. 
Dec 13 13:29:00.027135 containerd[1498]: time="2024-12-13T13:29:00.027071872Z" level=info msg="StartContainer for \"b84f490f61d49297226850373850e4ebafa2aabd078cf71f7b33c9e289545b57\" returns successfully" Dec 13 13:29:00.487009 kubelet[2617]: I1213 13:29:00.486965 2617 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 13:29:00.487009 kubelet[2617]: I1213 13:29:00.487003 2617 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 13:29:01.851728 systemd[1]: Started sshd@11-10.0.0.95:22-10.0.0.1:58006.service - OpenSSH per-connection server daemon (10.0.0.1:58006). Dec 13 13:29:01.902211 sshd[5871]: Accepted publickey for core from 10.0.0.1 port 58006 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:29:01.904313 sshd-session[5871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:29:01.908724 systemd-logind[1484]: New session 12 of user core. Dec 13 13:29:01.914011 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 13:29:02.042971 sshd[5873]: Connection closed by 10.0.0.1 port 58006 Dec 13 13:29:02.043296 sshd-session[5871]: pam_unix(sshd:session): session closed for user core Dec 13 13:29:02.046710 systemd[1]: sshd@11-10.0.0.95:22-10.0.0.1:58006.service: Deactivated successfully. Dec 13 13:29:02.048802 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 13:29:02.049564 systemd-logind[1484]: Session 12 logged out. Waiting for processes to exit. Dec 13 13:29:02.050682 systemd-logind[1484]: Removed session 12. 
Dec 13 13:29:02.973462 kubelet[2617]: I1213 13:29:02.973417 2617 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 13:29:03.074030 kubelet[2617]: I1213 13:29:03.073884 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-sqmg9" podStartSLOduration=21.613415929 podStartE2EDuration="30.073864597s" podCreationTimestamp="2024-12-13 13:28:33 +0000 UTC" firstStartedPulling="2024-12-13 13:28:51.395889178 +0000 UTC m=+30.262001899" lastFinishedPulling="2024-12-13 13:28:59.856337846 +0000 UTC m=+38.722450567" observedRunningTime="2024-12-13 13:29:00.935303369 +0000 UTC m=+39.801416090" watchObservedRunningTime="2024-12-13 13:29:03.073864597 +0000 UTC m=+41.939977318" Dec 13 13:29:03.489696 kubelet[2617]: I1213 13:29:03.489645 2617 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 13:29:03.490143 kubelet[2617]: E1213 13:29:03.490124 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:03.651641 kubelet[2617]: I1213 13:29:03.651589 2617 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 13:29:03.652004 kubelet[2617]: E1213 13:29:03.651981 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:03.930164 kubelet[2617]: E1213 13:29:03.930062 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:04.508869 kernel: bpftool[6061]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 13:29:04.753621 systemd-networkd[1424]: vxlan.calico: Link UP Dec 13 13:29:04.753634 systemd-networkd[1424]: vxlan.calico: Gained carrier Dec 
13 13:29:06.269014 systemd-networkd[1424]: vxlan.calico: Gained IPv6LL Dec 13 13:29:07.058160 systemd[1]: Started sshd@12-10.0.0.95:22-10.0.0.1:42640.service - OpenSSH per-connection server daemon (10.0.0.1:42640). Dec 13 13:29:07.118540 sshd[6147]: Accepted publickey for core from 10.0.0.1 port 42640 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:29:07.120390 sshd-session[6147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:29:07.125278 systemd-logind[1484]: New session 13 of user core. Dec 13 13:29:07.141004 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 13:29:07.280610 sshd[6149]: Connection closed by 10.0.0.1 port 42640 Dec 13 13:29:07.281099 sshd-session[6147]: pam_unix(sshd:session): session closed for user core Dec 13 13:29:07.296722 systemd[1]: sshd@12-10.0.0.95:22-10.0.0.1:42640.service: Deactivated successfully. Dec 13 13:29:07.298525 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 13:29:07.300116 systemd-logind[1484]: Session 13 logged out. Waiting for processes to exit. Dec 13 13:29:07.306110 systemd[1]: Started sshd@13-10.0.0.95:22-10.0.0.1:42646.service - OpenSSH per-connection server daemon (10.0.0.1:42646). Dec 13 13:29:07.307000 systemd-logind[1484]: Removed session 13. Dec 13 13:29:07.347054 sshd[6162]: Accepted publickey for core from 10.0.0.1 port 42646 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:29:07.348667 sshd-session[6162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:29:07.352740 systemd-logind[1484]: New session 14 of user core. Dec 13 13:29:07.360968 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 13 13:29:07.613216 sshd[6164]: Connection closed by 10.0.0.1 port 42646 Dec 13 13:29:07.613669 sshd-session[6162]: pam_unix(sshd:session): session closed for user core Dec 13 13:29:07.625436 systemd[1]: sshd@13-10.0.0.95:22-10.0.0.1:42646.service: Deactivated successfully. Dec 13 13:29:07.627753 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 13:29:07.629267 systemd-logind[1484]: Session 14 logged out. Waiting for processes to exit. Dec 13 13:29:07.641131 systemd[1]: Started sshd@14-10.0.0.95:22-10.0.0.1:42656.service - OpenSSH per-connection server daemon (10.0.0.1:42656). Dec 13 13:29:07.642005 systemd-logind[1484]: Removed session 14. Dec 13 13:29:07.679729 sshd[6174]: Accepted publickey for core from 10.0.0.1 port 42656 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:29:07.681651 sshd-session[6174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:29:07.686255 systemd-logind[1484]: New session 15 of user core. Dec 13 13:29:07.698987 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 13:29:07.865530 sshd[6176]: Connection closed by 10.0.0.1 port 42656 Dec 13 13:29:07.867646 sshd-session[6174]: pam_unix(sshd:session): session closed for user core Dec 13 13:29:07.871547 systemd[1]: sshd@14-10.0.0.95:22-10.0.0.1:42656.service: Deactivated successfully. Dec 13 13:29:07.873556 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 13:29:07.874630 systemd-logind[1484]: Session 15 logged out. Waiting for processes to exit. Dec 13 13:29:07.876415 systemd-logind[1484]: Removed session 15. Dec 13 13:29:12.878605 systemd[1]: Started sshd@15-10.0.0.95:22-10.0.0.1:42666.service - OpenSSH per-connection server daemon (10.0.0.1:42666). 
Dec 13 13:29:12.915920 sshd[6200]: Accepted publickey for core from 10.0.0.1 port 42666 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:29:12.917472 sshd-session[6200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:29:12.921195 systemd-logind[1484]: New session 16 of user core. Dec 13 13:29:12.929972 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 13:29:13.051438 sshd[6202]: Connection closed by 10.0.0.1 port 42666 Dec 13 13:29:13.051609 sshd-session[6200]: pam_unix(sshd:session): session closed for user core Dec 13 13:29:13.056524 systemd[1]: sshd@15-10.0.0.95:22-10.0.0.1:42666.service: Deactivated successfully. Dec 13 13:29:13.058572 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 13:29:13.059207 systemd-logind[1484]: Session 16 logged out. Waiting for processes to exit. Dec 13 13:29:13.060284 systemd-logind[1484]: Removed session 16. Dec 13 13:29:18.063313 systemd[1]: Started sshd@16-10.0.0.95:22-10.0.0.1:40132.service - OpenSSH per-connection server daemon (10.0.0.1:40132). Dec 13 13:29:18.100754 sshd[6222]: Accepted publickey for core from 10.0.0.1 port 40132 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:29:18.102010 sshd-session[6222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:29:18.106751 systemd-logind[1484]: New session 17 of user core. Dec 13 13:29:18.121031 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 13:29:18.234789 sshd[6224]: Connection closed by 10.0.0.1 port 40132 Dec 13 13:29:18.235156 sshd-session[6222]: pam_unix(sshd:session): session closed for user core Dec 13 13:29:18.239321 systemd[1]: sshd@16-10.0.0.95:22-10.0.0.1:40132.service: Deactivated successfully. Dec 13 13:29:18.241372 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 13:29:18.242030 systemd-logind[1484]: Session 17 logged out. Waiting for processes to exit. 
Dec 13 13:29:18.242902 systemd-logind[1484]: Removed session 17. Dec 13 13:29:21.208427 containerd[1498]: time="2024-12-13T13:29:21.208357640Z" level=info msg="StopPodSandbox for \"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\"" Dec 13 13:29:21.208998 containerd[1498]: time="2024-12-13T13:29:21.208484518Z" level=info msg="TearDown network for sandbox \"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\" successfully" Dec 13 13:29:21.208998 containerd[1498]: time="2024-12-13T13:29:21.208530998Z" level=info msg="StopPodSandbox for \"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\" returns successfully" Dec 13 13:29:21.208998 containerd[1498]: time="2024-12-13T13:29:21.208951298Z" level=info msg="RemovePodSandbox for \"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\"" Dec 13 13:29:21.218526 containerd[1498]: time="2024-12-13T13:29:21.218478743Z" level=info msg="Forcibly stopping sandbox \"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\"" Dec 13 13:29:21.218636 containerd[1498]: time="2024-12-13T13:29:21.218590270Z" level=info msg="TearDown network for sandbox \"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\" successfully" Dec 13 13:29:21.301675 containerd[1498]: time="2024-12-13T13:29:21.301614438Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:29:21.301898 containerd[1498]: time="2024-12-13T13:29:21.301703923Z" level=info msg="RemovePodSandbox \"8a2987d1d80a2434cdf6456d730058467e5f87e3a53cc20a39ce6af456310fbf\" returns successfully" Dec 13 13:29:21.302349 containerd[1498]: time="2024-12-13T13:29:21.302316247Z" level=info msg="StopPodSandbox for \"e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015\"" Dec 13 13:29:21.302479 containerd[1498]: time="2024-12-13T13:29:21.302427654Z" level=info msg="TearDown network for sandbox \"e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015\" successfully" Dec 13 13:29:21.302479 containerd[1498]: time="2024-12-13T13:29:21.302439808Z" level=info msg="StopPodSandbox for \"e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015\" returns successfully" Dec 13 13:29:21.303386 containerd[1498]: time="2024-12-13T13:29:21.302665518Z" level=info msg="RemovePodSandbox for \"e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015\"" Dec 13 13:29:21.303386 containerd[1498]: time="2024-12-13T13:29:21.302687111Z" level=info msg="Forcibly stopping sandbox \"e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015\"" Dec 13 13:29:21.303386 containerd[1498]: time="2024-12-13T13:29:21.302750504Z" level=info msg="TearDown network for sandbox \"e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015\" successfully" Dec 13 13:29:21.307596 containerd[1498]: time="2024-12-13T13:29:21.307509006Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:29:21.307596 containerd[1498]: time="2024-12-13T13:29:21.307584854Z" level=info msg="RemovePodSandbox \"e2908d056941176d921ac081d602575f47c9a3c92765ca1afc8a1855a0eec015\" returns successfully" Dec 13 13:29:21.308124 containerd[1498]: time="2024-12-13T13:29:21.308086894Z" level=info msg="StopPodSandbox for \"a21907390b05bc41353374ecb2a1e62c9f6ccf90819a76ff634efce9923d4ffa\"" Dec 13 13:29:21.308409 containerd[1498]: time="2024-12-13T13:29:21.308300911Z" level=info msg="TearDown network for sandbox \"a21907390b05bc41353374ecb2a1e62c9f6ccf90819a76ff634efce9923d4ffa\" successfully" Dec 13 13:29:21.308409 containerd[1498]: time="2024-12-13T13:29:21.308320338Z" level=info msg="StopPodSandbox for \"a21907390b05bc41353374ecb2a1e62c9f6ccf90819a76ff634efce9923d4ffa\" returns successfully" Dec 13 13:29:21.308761 containerd[1498]: time="2024-12-13T13:29:21.308728405Z" level=info msg="RemovePodSandbox for \"a21907390b05bc41353374ecb2a1e62c9f6ccf90819a76ff634efce9923d4ffa\"" Dec 13 13:29:21.308814 containerd[1498]: time="2024-12-13T13:29:21.308762491Z" level=info msg="Forcibly stopping sandbox \"a21907390b05bc41353374ecb2a1e62c9f6ccf90819a76ff634efce9923d4ffa\"" Dec 13 13:29:21.308897 containerd[1498]: time="2024-12-13T13:29:21.308860472Z" level=info msg="TearDown network for sandbox \"a21907390b05bc41353374ecb2a1e62c9f6ccf90819a76ff634efce9923d4ffa\" successfully" Dec 13 13:29:21.312385 containerd[1498]: time="2024-12-13T13:29:21.312355690Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a21907390b05bc41353374ecb2a1e62c9f6ccf90819a76ff634efce9923d4ffa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:29:21.312450 containerd[1498]: time="2024-12-13T13:29:21.312401729Z" level=info msg="RemovePodSandbox \"a21907390b05bc41353374ecb2a1e62c9f6ccf90819a76ff634efce9923d4ffa\" returns successfully" Dec 13 13:29:21.313160 containerd[1498]: time="2024-12-13T13:29:21.313122476Z" level=info msg="StopPodSandbox for \"9a39e2b6726809bb08eaf739fd4c36fb35546b42e10eef2b487018652bd1a6ad\"" Dec 13 13:29:21.313299 containerd[1498]: time="2024-12-13T13:29:21.313268541Z" level=info msg="TearDown network for sandbox \"9a39e2b6726809bb08eaf739fd4c36fb35546b42e10eef2b487018652bd1a6ad\" successfully" Dec 13 13:29:21.313299 containerd[1498]: time="2024-12-13T13:29:21.313285233Z" level=info msg="StopPodSandbox for \"9a39e2b6726809bb08eaf739fd4c36fb35546b42e10eef2b487018652bd1a6ad\" returns successfully" Dec 13 13:29:21.313639 containerd[1498]: time="2024-12-13T13:29:21.313575239Z" level=info msg="RemovePodSandbox for \"9a39e2b6726809bb08eaf739fd4c36fb35546b42e10eef2b487018652bd1a6ad\"" Dec 13 13:29:21.313639 containerd[1498]: time="2024-12-13T13:29:21.313600999Z" level=info msg="Forcibly stopping sandbox \"9a39e2b6726809bb08eaf739fd4c36fb35546b42e10eef2b487018652bd1a6ad\"" Dec 13 13:29:21.313785 containerd[1498]: time="2024-12-13T13:29:21.313665996Z" level=info msg="TearDown network for sandbox \"9a39e2b6726809bb08eaf739fd4c36fb35546b42e10eef2b487018652bd1a6ad\" successfully" Dec 13 13:29:21.317987 containerd[1498]: time="2024-12-13T13:29:21.317949401Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a39e2b6726809bb08eaf739fd4c36fb35546b42e10eef2b487018652bd1a6ad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:29:21.318058 containerd[1498]: time="2024-12-13T13:29:21.317997835Z" level=info msg="RemovePodSandbox \"9a39e2b6726809bb08eaf739fd4c36fb35546b42e10eef2b487018652bd1a6ad\" returns successfully" Dec 13 13:29:21.318699 containerd[1498]: time="2024-12-13T13:29:21.318472592Z" level=info msg="StopPodSandbox for \"9968b7ebe76d6d0fe39c5765a6af533ade26da0e24d2a38d4ff05eb1a792bcec\"" Dec 13 13:29:21.318699 containerd[1498]: time="2024-12-13T13:29:21.318613256Z" level=info msg="TearDown network for sandbox \"9968b7ebe76d6d0fe39c5765a6af533ade26da0e24d2a38d4ff05eb1a792bcec\" successfully" Dec 13 13:29:21.318699 containerd[1498]: time="2024-12-13T13:29:21.318624207Z" level=info msg="StopPodSandbox for \"9968b7ebe76d6d0fe39c5765a6af533ade26da0e24d2a38d4ff05eb1a792bcec\" returns successfully" Dec 13 13:29:21.319292 containerd[1498]: time="2024-12-13T13:29:21.319275417Z" level=info msg="RemovePodSandbox for \"9968b7ebe76d6d0fe39c5765a6af533ade26da0e24d2a38d4ff05eb1a792bcec\"" Dec 13 13:29:21.319393 containerd[1498]: time="2024-12-13T13:29:21.319379561Z" level=info msg="Forcibly stopping sandbox \"9968b7ebe76d6d0fe39c5765a6af533ade26da0e24d2a38d4ff05eb1a792bcec\"" Dec 13 13:29:21.319533 containerd[1498]: time="2024-12-13T13:29:21.319486048Z" level=info msg="TearDown network for sandbox \"9968b7ebe76d6d0fe39c5765a6af533ade26da0e24d2a38d4ff05eb1a792bcec\" successfully" Dec 13 13:29:21.323548 containerd[1498]: time="2024-12-13T13:29:21.323497443Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9968b7ebe76d6d0fe39c5765a6af533ade26da0e24d2a38d4ff05eb1a792bcec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:29:21.323770 containerd[1498]: time="2024-12-13T13:29:21.323750346Z" level=info msg="RemovePodSandbox \"9968b7ebe76d6d0fe39c5765a6af533ade26da0e24d2a38d4ff05eb1a792bcec\" returns successfully" Dec 13 13:29:21.324207 containerd[1498]: time="2024-12-13T13:29:21.324186437Z" level=info msg="StopPodSandbox for \"777ee526b77fefafa47a39c5811dfe12b6283dce01c49d02f5fd6ce3ccfa1d46\"" Dec 13 13:29:21.324352 containerd[1498]: time="2024-12-13T13:29:21.324335778Z" level=info msg="TearDown network for sandbox \"777ee526b77fefafa47a39c5811dfe12b6283dce01c49d02f5fd6ce3ccfa1d46\" successfully" Dec 13 13:29:21.324557 containerd[1498]: time="2024-12-13T13:29:21.324532833Z" level=info msg="StopPodSandbox for \"777ee526b77fefafa47a39c5811dfe12b6283dce01c49d02f5fd6ce3ccfa1d46\" returns successfully" Dec 13 13:29:21.326669 containerd[1498]: time="2024-12-13T13:29:21.324849050Z" level=info msg="RemovePodSandbox for \"777ee526b77fefafa47a39c5811dfe12b6283dce01c49d02f5fd6ce3ccfa1d46\"" Dec 13 13:29:21.326669 containerd[1498]: time="2024-12-13T13:29:21.324870371Z" level=info msg="Forcibly stopping sandbox \"777ee526b77fefafa47a39c5811dfe12b6283dce01c49d02f5fd6ce3ccfa1d46\"" Dec 13 13:29:21.326669 containerd[1498]: time="2024-12-13T13:29:21.324945248Z" level=info msg="TearDown network for sandbox \"777ee526b77fefafa47a39c5811dfe12b6283dce01c49d02f5fd6ce3ccfa1d46\" successfully" Dec 13 13:29:21.329307 containerd[1498]: time="2024-12-13T13:29:21.329273080Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"777ee526b77fefafa47a39c5811dfe12b6283dce01c49d02f5fd6ce3ccfa1d46\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:29:21.329362 containerd[1498]: time="2024-12-13T13:29:21.329319961Z" level=info msg="RemovePodSandbox \"777ee526b77fefafa47a39c5811dfe12b6283dce01c49d02f5fd6ce3ccfa1d46\" returns successfully" Dec 13 13:29:21.329678 containerd[1498]: time="2024-12-13T13:29:21.329648622Z" level=info msg="StopPodSandbox for \"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\"" Dec 13 13:29:21.329780 containerd[1498]: time="2024-12-13T13:29:21.329738417Z" level=info msg="TearDown network for sandbox \"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\" successfully" Dec 13 13:29:21.329780 containerd[1498]: time="2024-12-13T13:29:21.329749278Z" level=info msg="StopPodSandbox for \"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\" returns successfully" Dec 13 13:29:21.330068 containerd[1498]: time="2024-12-13T13:29:21.330040747Z" level=info msg="RemovePodSandbox for \"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\"" Dec 13 13:29:21.330068 containerd[1498]: time="2024-12-13T13:29:21.330065435Z" level=info msg="Forcibly stopping sandbox \"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\"" Dec 13 13:29:21.330188 containerd[1498]: time="2024-12-13T13:29:21.330138086Z" level=info msg="TearDown network for sandbox \"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\" successfully" Dec 13 13:29:21.333744 containerd[1498]: time="2024-12-13T13:29:21.333710715Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:29:21.333875 containerd[1498]: time="2024-12-13T13:29:21.333754741Z" level=info msg="RemovePodSandbox \"375da808cda47928bce0285493e63162cf1666fe8e763ae401ebc3092f8d522f\" returns successfully" Dec 13 13:29:21.334124 containerd[1498]: time="2024-12-13T13:29:21.334096758Z" level=info msg="StopPodSandbox for \"7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf\"" Dec 13 13:29:21.334218 containerd[1498]: time="2024-12-13T13:29:21.334176884Z" level=info msg="TearDown network for sandbox \"7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf\" successfully" Dec 13 13:29:21.334218 containerd[1498]: time="2024-12-13T13:29:21.334186754Z" level=info msg="StopPodSandbox for \"7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf\" returns successfully" Dec 13 13:29:21.334417 containerd[1498]: time="2024-12-13T13:29:21.334393918Z" level=info msg="RemovePodSandbox for \"7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf\"" Dec 13 13:29:21.334417 containerd[1498]: time="2024-12-13T13:29:21.334414247Z" level=info msg="Forcibly stopping sandbox \"7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf\"" Dec 13 13:29:21.334498 containerd[1498]: time="2024-12-13T13:29:21.334473874Z" level=info msg="TearDown network for sandbox \"7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf\" successfully" Dec 13 13:29:21.338006 containerd[1498]: time="2024-12-13T13:29:21.337976437Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:29:21.338061 containerd[1498]: time="2024-12-13T13:29:21.338019019Z" level=info msg="RemovePodSandbox \"7ce247a9e9d486faf30ff853826a06216db5dace0cf195cc78db42a72e2fecbf\" returns successfully" Dec 13 13:29:21.338296 containerd[1498]: time="2024-12-13T13:29:21.338271201Z" level=info msg="StopPodSandbox for \"12d18c9c8caf0816c4f0e2be01b939ab0d19bd7c6043694b69e328287df2ae7e\"" Dec 13 13:29:21.338377 containerd[1498]: time="2024-12-13T13:29:21.338346277Z" level=info msg="TearDown network for sandbox \"12d18c9c8caf0816c4f0e2be01b939ab0d19bd7c6043694b69e328287df2ae7e\" successfully" Dec 13 13:29:21.338377 containerd[1498]: time="2024-12-13T13:29:21.338361137Z" level=info msg="StopPodSandbox for \"12d18c9c8caf0816c4f0e2be01b939ab0d19bd7c6043694b69e328287df2ae7e\" returns successfully" Dec 13 13:29:21.338655 containerd[1498]: time="2024-12-13T13:29:21.338629030Z" level=info msg="RemovePodSandbox for \"12d18c9c8caf0816c4f0e2be01b939ab0d19bd7c6043694b69e328287df2ae7e\"" Dec 13 13:29:21.338730 containerd[1498]: time="2024-12-13T13:29:21.338658647Z" level=info msg="Forcibly stopping sandbox \"12d18c9c8caf0816c4f0e2be01b939ab0d19bd7c6043694b69e328287df2ae7e\"" Dec 13 13:29:21.338787 containerd[1498]: time="2024-12-13T13:29:21.338745426Z" level=info msg="TearDown network for sandbox \"12d18c9c8caf0816c4f0e2be01b939ab0d19bd7c6043694b69e328287df2ae7e\" successfully" Dec 13 13:29:21.342428 containerd[1498]: time="2024-12-13T13:29:21.342390887Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"12d18c9c8caf0816c4f0e2be01b939ab0d19bd7c6043694b69e328287df2ae7e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:29:21.342480 containerd[1498]: time="2024-12-13T13:29:21.342438610Z" level=info msg="RemovePodSandbox \"12d18c9c8caf0816c4f0e2be01b939ab0d19bd7c6043694b69e328287df2ae7e\" returns successfully" Dec 13 13:29:21.342739 containerd[1498]: time="2024-12-13T13:29:21.342713346Z" level=info msg="StopPodSandbox for \"c419042170b5243872620eb2ea0ae166aa4d099a0b3e7c7b8f5674b6bd8cdc4f\"" Dec 13 13:29:21.342813 containerd[1498]: time="2024-12-13T13:29:21.342800767Z" level=info msg="TearDown network for sandbox \"c419042170b5243872620eb2ea0ae166aa4d099a0b3e7c7b8f5674b6bd8cdc4f\" successfully" Dec 13 13:29:21.342851 containerd[1498]: time="2024-12-13T13:29:21.342812218Z" level=info msg="StopPodSandbox for \"c419042170b5243872620eb2ea0ae166aa4d099a0b3e7c7b8f5674b6bd8cdc4f\" returns successfully" Dec 13 13:29:21.343080 containerd[1498]: time="2024-12-13T13:29:21.343055363Z" level=info msg="RemovePodSandbox for \"c419042170b5243872620eb2ea0ae166aa4d099a0b3e7c7b8f5674b6bd8cdc4f\"" Dec 13 13:29:21.343080 containerd[1498]: time="2024-12-13T13:29:21.343074831Z" level=info msg="Forcibly stopping sandbox \"c419042170b5243872620eb2ea0ae166aa4d099a0b3e7c7b8f5674b6bd8cdc4f\"" Dec 13 13:29:21.343181 containerd[1498]: time="2024-12-13T13:29:21.343139387Z" level=info msg="TearDown network for sandbox \"c419042170b5243872620eb2ea0ae166aa4d099a0b3e7c7b8f5674b6bd8cdc4f\" successfully" Dec 13 13:29:21.346513 containerd[1498]: time="2024-12-13T13:29:21.346475494Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c419042170b5243872620eb2ea0ae166aa4d099a0b3e7c7b8f5674b6bd8cdc4f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:29:21.346513 containerd[1498]: time="2024-12-13T13:29:21.346509321Z" level=info msg="RemovePodSandbox \"c419042170b5243872620eb2ea0ae166aa4d099a0b3e7c7b8f5674b6bd8cdc4f\" returns successfully"
Dec 13 13:29:21.346762 containerd[1498]: time="2024-12-13T13:29:21.346731895Z" level=info msg="StopPodSandbox for \"e6c5d9e7b015f5a650b35d655ec1b0edfbb0fd031d19f5721292e08a6ccdde2b\""
Dec 13 13:29:21.346843 containerd[1498]: time="2024-12-13T13:29:21.346813564Z" level=info msg="TearDown network for sandbox \"e6c5d9e7b015f5a650b35d655ec1b0edfbb0fd031d19f5721292e08a6ccdde2b\" successfully"
Dec 13 13:29:21.346890 containerd[1498]: time="2024-12-13T13:29:21.346827491Z" level=info msg="StopPodSandbox for \"e6c5d9e7b015f5a650b35d655ec1b0edfbb0fd031d19f5721292e08a6ccdde2b\" returns successfully"
Dec 13 13:29:21.347131 containerd[1498]: time="2024-12-13T13:29:21.347103991Z" level=info msg="RemovePodSandbox for \"e6c5d9e7b015f5a650b35d655ec1b0edfbb0fd031d19f5721292e08a6ccdde2b\""
Dec 13 13:29:21.347195 containerd[1498]: time="2024-12-13T13:29:21.347133749Z" level=info msg="Forcibly stopping sandbox \"e6c5d9e7b015f5a650b35d655ec1b0edfbb0fd031d19f5721292e08a6ccdde2b\""
Dec 13 13:29:21.347261 containerd[1498]: time="2024-12-13T13:29:21.347213184Z" level=info msg="TearDown network for sandbox \"e6c5d9e7b015f5a650b35d655ec1b0edfbb0fd031d19f5721292e08a6ccdde2b\" successfully"
Dec 13 13:29:21.350929 containerd[1498]: time="2024-12-13T13:29:21.350895396Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e6c5d9e7b015f5a650b35d655ec1b0edfbb0fd031d19f5721292e08a6ccdde2b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:21.350976 containerd[1498]: time="2024-12-13T13:29:21.350950183Z" level=info msg="RemovePodSandbox \"e6c5d9e7b015f5a650b35d655ec1b0edfbb0fd031d19f5721292e08a6ccdde2b\" returns successfully"
Dec 13 13:29:21.351236 containerd[1498]: time="2024-12-13T13:29:21.351215270Z" level=info msg="StopPodSandbox for \"979f14feb29824a3dc5687342ee1dacdafda85e55174a83decc7774548a3e297\""
Dec 13 13:29:21.351313 containerd[1498]: time="2024-12-13T13:29:21.351296869Z" level=info msg="TearDown network for sandbox \"979f14feb29824a3dc5687342ee1dacdafda85e55174a83decc7774548a3e297\" successfully"
Dec 13 13:29:21.351313 containerd[1498]: time="2024-12-13T13:29:21.351310175Z" level=info msg="StopPodSandbox for \"979f14feb29824a3dc5687342ee1dacdafda85e55174a83decc7774548a3e297\" returns successfully"
Dec 13 13:29:21.351584 containerd[1498]: time="2024-12-13T13:29:21.351559512Z" level=info msg="RemovePodSandbox for \"979f14feb29824a3dc5687342ee1dacdafda85e55174a83decc7774548a3e297\""
Dec 13 13:29:21.351584 containerd[1498]: time="2024-12-13T13:29:21.351583709Z" level=info msg="Forcibly stopping sandbox \"979f14feb29824a3dc5687342ee1dacdafda85e55174a83decc7774548a3e297\""
Dec 13 13:29:21.351693 containerd[1498]: time="2024-12-13T13:29:21.351653545Z" level=info msg="TearDown network for sandbox \"979f14feb29824a3dc5687342ee1dacdafda85e55174a83decc7774548a3e297\" successfully"
Dec 13 13:29:21.358755 containerd[1498]: time="2024-12-13T13:29:21.358718947Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"979f14feb29824a3dc5687342ee1dacdafda85e55174a83decc7774548a3e297\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:21.358865 containerd[1498]: time="2024-12-13T13:29:21.358767251Z" level=info msg="RemovePodSandbox \"979f14feb29824a3dc5687342ee1dacdafda85e55174a83decc7774548a3e297\" returns successfully"
Dec 13 13:29:21.359241 containerd[1498]: time="2024-12-13T13:29:21.359191078Z" level=info msg="StopPodSandbox for \"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\""
Dec 13 13:29:21.359407 containerd[1498]: time="2024-12-13T13:29:21.359296633Z" level=info msg="TearDown network for sandbox \"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\" successfully"
Dec 13 13:29:21.359407 containerd[1498]: time="2024-12-13T13:29:21.359308516Z" level=info msg="StopPodSandbox for \"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\" returns successfully"
Dec 13 13:29:21.359623 containerd[1498]: time="2024-12-13T13:29:21.359595847Z" level=info msg="RemovePodSandbox for \"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\""
Dec 13 13:29:21.359623 containerd[1498]: time="2024-12-13T13:29:21.359621347Z" level=info msg="Forcibly stopping sandbox \"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\""
Dec 13 13:29:21.359728 containerd[1498]: time="2024-12-13T13:29:21.359686575Z" level=info msg="TearDown network for sandbox \"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\" successfully"
Dec 13 13:29:21.363547 containerd[1498]: time="2024-12-13T13:29:21.363491406Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:21.363617 containerd[1498]: time="2024-12-13T13:29:21.363552936Z" level=info msg="RemovePodSandbox \"4b7f07d805932871146595e40a7aaf4280e277054df699aae53f261dfa95421e\" returns successfully"
Dec 13 13:29:21.363797 containerd[1498]: time="2024-12-13T13:29:21.363772013Z" level=info msg="StopPodSandbox for \"775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9\""
Dec 13 13:29:21.363896 containerd[1498]: time="2024-12-13T13:29:21.363873632Z" level=info msg="TearDown network for sandbox \"775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9\" successfully"
Dec 13 13:29:21.363896 containerd[1498]: time="2024-12-13T13:29:21.363888711Z" level=info msg="StopPodSandbox for \"775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9\" returns successfully"
Dec 13 13:29:21.364130 containerd[1498]: time="2024-12-13T13:29:21.364107728Z" level=info msg="RemovePodSandbox for \"775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9\""
Dec 13 13:29:21.364130 containerd[1498]: time="2024-12-13T13:29:21.364127096Z" level=info msg="Forcibly stopping sandbox \"775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9\""
Dec 13 13:29:21.364232 containerd[1498]: time="2024-12-13T13:29:21.364188045Z" level=info msg="TearDown network for sandbox \"775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9\" successfully"
Dec 13 13:29:21.367551 containerd[1498]: time="2024-12-13T13:29:21.367513291Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:21.367551 containerd[1498]: time="2024-12-13T13:29:21.367558008Z" level=info msg="RemovePodSandbox \"775fe20e2333084ab28618cd4982306fc58b4c4dfa5dd99e52a64c48ac2937e9\" returns successfully"
Dec 13 13:29:21.367851 containerd[1498]: time="2024-12-13T13:29:21.367816052Z" level=info msg="StopPodSandbox for \"93e64fa12ec2325b5ac143026f4b24fd2ac61166c18a7f41736ba8deb59dee39\""
Dec 13 13:29:21.367927 containerd[1498]: time="2024-12-13T13:29:21.367908703Z" level=info msg="TearDown network for sandbox \"93e64fa12ec2325b5ac143026f4b24fd2ac61166c18a7f41736ba8deb59dee39\" successfully"
Dec 13 13:29:21.367927 containerd[1498]: time="2024-12-13T13:29:21.367921187Z" level=info msg="StopPodSandbox for \"93e64fa12ec2325b5ac143026f4b24fd2ac61166c18a7f41736ba8deb59dee39\" returns successfully"
Dec 13 13:29:21.369857 containerd[1498]: time="2024-12-13T13:29:21.368144813Z" level=info msg="RemovePodSandbox for \"93e64fa12ec2325b5ac143026f4b24fd2ac61166c18a7f41736ba8deb59dee39\""
Dec 13 13:29:21.369857 containerd[1498]: time="2024-12-13T13:29:21.368167036Z" level=info msg="Forcibly stopping sandbox \"93e64fa12ec2325b5ac143026f4b24fd2ac61166c18a7f41736ba8deb59dee39\""
Dec 13 13:29:21.369857 containerd[1498]: time="2024-12-13T13:29:21.368242033Z" level=info msg="TearDown network for sandbox \"93e64fa12ec2325b5ac143026f4b24fd2ac61166c18a7f41736ba8deb59dee39\" successfully"
Dec 13 13:29:21.371778 containerd[1498]: time="2024-12-13T13:29:21.371756087Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"93e64fa12ec2325b5ac143026f4b24fd2ac61166c18a7f41736ba8deb59dee39\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:21.371817 containerd[1498]: time="2024-12-13T13:29:21.371793189Z" level=info msg="RemovePodSandbox \"93e64fa12ec2325b5ac143026f4b24fd2ac61166c18a7f41736ba8deb59dee39\" returns successfully"
Dec 13 13:29:21.372034 containerd[1498]: time="2024-12-13T13:29:21.372009070Z" level=info msg="StopPodSandbox for \"3fbd59380b22699393b74ce895db9c2c919b88e686f822ee5c62aa6ae2c20db0\""
Dec 13 13:29:21.372133 containerd[1498]: time="2024-12-13T13:29:21.372106852Z" level=info msg="TearDown network for sandbox \"3fbd59380b22699393b74ce895db9c2c919b88e686f822ee5c62aa6ae2c20db0\" successfully"
Dec 13 13:29:21.372133 containerd[1498]: time="2024-12-13T13:29:21.372124305Z" level=info msg="StopPodSandbox for \"3fbd59380b22699393b74ce895db9c2c919b88e686f822ee5c62aa6ae2c20db0\" returns successfully"
Dec 13 13:29:21.372338 containerd[1498]: time="2024-12-13T13:29:21.372308505Z" level=info msg="RemovePodSandbox for \"3fbd59380b22699393b74ce895db9c2c919b88e686f822ee5c62aa6ae2c20db0\""
Dec 13 13:29:21.372338 containerd[1498]: time="2024-12-13T13:29:21.372332943Z" level=info msg="Forcibly stopping sandbox \"3fbd59380b22699393b74ce895db9c2c919b88e686f822ee5c62aa6ae2c20db0\""
Dec 13 13:29:21.372448 containerd[1498]: time="2024-12-13T13:29:21.372411927Z" level=info msg="TearDown network for sandbox \"3fbd59380b22699393b74ce895db9c2c919b88e686f822ee5c62aa6ae2c20db0\" successfully"
Dec 13 13:29:21.376069 containerd[1498]: time="2024-12-13T13:29:21.376038982Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3fbd59380b22699393b74ce895db9c2c919b88e686f822ee5c62aa6ae2c20db0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:21.376130 containerd[1498]: time="2024-12-13T13:29:21.376072757Z" level=info msg="RemovePodSandbox \"3fbd59380b22699393b74ce895db9c2c919b88e686f822ee5c62aa6ae2c20db0\" returns successfully"
Dec 13 13:29:21.376428 containerd[1498]: time="2024-12-13T13:29:21.376279190Z" level=info msg="StopPodSandbox for \"42b15368a3972974a13da9738e60266115d2d472ae80bab1d155ea036d1797f0\""
Dec 13 13:29:21.376428 containerd[1498]: time="2024-12-13T13:29:21.376362853Z" level=info msg="TearDown network for sandbox \"42b15368a3972974a13da9738e60266115d2d472ae80bab1d155ea036d1797f0\" successfully"
Dec 13 13:29:21.376428 containerd[1498]: time="2024-12-13T13:29:21.376374526Z" level=info msg="StopPodSandbox for \"42b15368a3972974a13da9738e60266115d2d472ae80bab1d155ea036d1797f0\" returns successfully"
Dec 13 13:29:21.376584 containerd[1498]: time="2024-12-13T13:29:21.376560549Z" level=info msg="RemovePodSandbox for \"42b15368a3972974a13da9738e60266115d2d472ae80bab1d155ea036d1797f0\""
Dec 13 13:29:21.376584 containerd[1498]: time="2024-12-13T13:29:21.376581520Z" level=info msg="Forcibly stopping sandbox \"42b15368a3972974a13da9738e60266115d2d472ae80bab1d155ea036d1797f0\""
Dec 13 13:29:21.376669 containerd[1498]: time="2024-12-13T13:29:21.376643400Z" level=info msg="TearDown network for sandbox \"42b15368a3972974a13da9738e60266115d2d472ae80bab1d155ea036d1797f0\" successfully"
Dec 13 13:29:21.379941 containerd[1498]: time="2024-12-13T13:29:21.379897558Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"42b15368a3972974a13da9738e60266115d2d472ae80bab1d155ea036d1797f0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:21.379941 containerd[1498]: time="2024-12-13T13:29:21.379935993Z" level=info msg="RemovePodSandbox \"42b15368a3972974a13da9738e60266115d2d472ae80bab1d155ea036d1797f0\" returns successfully"
Dec 13 13:29:21.380172 containerd[1498]: time="2024-12-13T13:29:21.380152826Z" level=info msg="StopPodSandbox for \"67756df0bcc347628f21440aa3157766c5e7d10effbb143fd9c42f0f569c6369\""
Dec 13 13:29:21.380238 containerd[1498]: time="2024-12-13T13:29:21.380225728Z" level=info msg="TearDown network for sandbox \"67756df0bcc347628f21440aa3157766c5e7d10effbb143fd9c42f0f569c6369\" successfully"
Dec 13 13:29:21.380261 containerd[1498]: time="2024-12-13T13:29:21.380236750Z" level=info msg="StopPodSandbox for \"67756df0bcc347628f21440aa3157766c5e7d10effbb143fd9c42f0f569c6369\" returns successfully"
Dec 13 13:29:21.380462 containerd[1498]: time="2024-12-13T13:29:21.380444995Z" level=info msg="RemovePodSandbox for \"67756df0bcc347628f21440aa3157766c5e7d10effbb143fd9c42f0f569c6369\""
Dec 13 13:29:21.380500 containerd[1498]: time="2024-12-13T13:29:21.380464143Z" level=info msg="Forcibly stopping sandbox \"67756df0bcc347628f21440aa3157766c5e7d10effbb143fd9c42f0f569c6369\""
Dec 13 13:29:21.380557 containerd[1498]: time="2024-12-13T13:29:21.380532166Z" level=info msg="TearDown network for sandbox \"67756df0bcc347628f21440aa3157766c5e7d10effbb143fd9c42f0f569c6369\" successfully"
Dec 13 13:29:21.384160 containerd[1498]: time="2024-12-13T13:29:21.384133911Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"67756df0bcc347628f21440aa3157766c5e7d10effbb143fd9c42f0f569c6369\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:21.384216 containerd[1498]: time="2024-12-13T13:29:21.384169341Z" level=info msg="RemovePodSandbox \"67756df0bcc347628f21440aa3157766c5e7d10effbb143fd9c42f0f569c6369\" returns successfully"
Dec 13 13:29:21.394110 containerd[1498]: time="2024-12-13T13:29:21.394023532Z" level=info msg="StopPodSandbox for \"f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3\""
Dec 13 13:29:21.394168 containerd[1498]: time="2024-12-13T13:29:21.394110332Z" level=info msg="TearDown network for sandbox \"f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3\" successfully"
Dec 13 13:29:21.394168 containerd[1498]: time="2024-12-13T13:29:21.394119580Z" level=info msg="StopPodSandbox for \"f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3\" returns successfully"
Dec 13 13:29:21.394380 containerd[1498]: time="2024-12-13T13:29:21.394358156Z" level=info msg="RemovePodSandbox for \"f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3\""
Dec 13 13:29:21.394427 containerd[1498]: time="2024-12-13T13:29:21.394383835Z" level=info msg="Forcibly stopping sandbox \"f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3\""
Dec 13 13:29:21.394494 containerd[1498]: time="2024-12-13T13:29:21.394459362Z" level=info msg="TearDown network for sandbox \"f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3\" successfully"
Dec 13 13:29:21.397983 containerd[1498]: time="2024-12-13T13:29:21.397958719Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:21.398056 containerd[1498]: time="2024-12-13T13:29:21.398020348Z" level=info msg="RemovePodSandbox \"f3c72b6cd8b244c04f10949d32de7b4f9fb5f13b49f27394ad270b2d27b84ca3\" returns successfully"
Dec 13 13:29:21.398285 containerd[1498]: time="2024-12-13T13:29:21.398264625Z" level=info msg="StopPodSandbox for \"f62ed592425827b96a5818a057bd7da73a44a047d6025699007b16edc8dacd5e\""
Dec 13 13:29:21.398395 containerd[1498]: time="2024-12-13T13:29:21.398376353Z" level=info msg="TearDown network for sandbox \"f62ed592425827b96a5818a057bd7da73a44a047d6025699007b16edc8dacd5e\" successfully"
Dec 13 13:29:21.398442 containerd[1498]: time="2024-12-13T13:29:21.398393336Z" level=info msg="StopPodSandbox for \"f62ed592425827b96a5818a057bd7da73a44a047d6025699007b16edc8dacd5e\" returns successfully"
Dec 13 13:29:21.398624 containerd[1498]: time="2024-12-13T13:29:21.398603426Z" level=info msg="RemovePodSandbox for \"f62ed592425827b96a5818a057bd7da73a44a047d6025699007b16edc8dacd5e\""
Dec 13 13:29:21.398624 containerd[1498]: time="2024-12-13T13:29:21.398623596Z" level=info msg="Forcibly stopping sandbox \"f62ed592425827b96a5818a057bd7da73a44a047d6025699007b16edc8dacd5e\""
Dec 13 13:29:21.398719 containerd[1498]: time="2024-12-13T13:29:21.398689835Z" level=info msg="TearDown network for sandbox \"f62ed592425827b96a5818a057bd7da73a44a047d6025699007b16edc8dacd5e\" successfully"
Dec 13 13:29:21.402179 containerd[1498]: time="2024-12-13T13:29:21.402148822Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f62ed592425827b96a5818a057bd7da73a44a047d6025699007b16edc8dacd5e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:21.402245 containerd[1498]: time="2024-12-13T13:29:21.402185974Z" level=info msg="RemovePodSandbox \"f62ed592425827b96a5818a057bd7da73a44a047d6025699007b16edc8dacd5e\" returns successfully"
Dec 13 13:29:21.402432 containerd[1498]: time="2024-12-13T13:29:21.402407727Z" level=info msg="StopPodSandbox for \"791ebbe1287fbb94c9015cf842176cf962d91a4c8cf3ed6c662a07ca2f0a4e08\""
Dec 13 13:29:21.402596 containerd[1498]: time="2024-12-13T13:29:21.402532600Z" level=info msg="TearDown network for sandbox \"791ebbe1287fbb94c9015cf842176cf962d91a4c8cf3ed6c662a07ca2f0a4e08\" successfully"
Dec 13 13:29:21.402596 containerd[1498]: time="2024-12-13T13:29:21.402553370Z" level=info msg="StopPodSandbox for \"791ebbe1287fbb94c9015cf842176cf962d91a4c8cf3ed6c662a07ca2f0a4e08\" returns successfully"
Dec 13 13:29:21.402766 containerd[1498]: time="2024-12-13T13:29:21.402740326Z" level=info msg="RemovePodSandbox for \"791ebbe1287fbb94c9015cf842176cf962d91a4c8cf3ed6c662a07ca2f0a4e08\""
Dec 13 13:29:21.402766 containerd[1498]: time="2024-12-13T13:29:21.402759513Z" level=info msg="Forcibly stopping sandbox \"791ebbe1287fbb94c9015cf842176cf962d91a4c8cf3ed6c662a07ca2f0a4e08\""
Dec 13 13:29:21.402895 containerd[1498]: time="2024-12-13T13:29:21.402861542Z" level=info msg="TearDown network for sandbox \"791ebbe1287fbb94c9015cf842176cf962d91a4c8cf3ed6c662a07ca2f0a4e08\" successfully"
Dec 13 13:29:21.406310 containerd[1498]: time="2024-12-13T13:29:21.406274700Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"791ebbe1287fbb94c9015cf842176cf962d91a4c8cf3ed6c662a07ca2f0a4e08\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:21.406360 containerd[1498]: time="2024-12-13T13:29:21.406313235Z" level=info msg="RemovePodSandbox \"791ebbe1287fbb94c9015cf842176cf962d91a4c8cf3ed6c662a07ca2f0a4e08\" returns successfully"
Dec 13 13:29:21.406604 containerd[1498]: time="2024-12-13T13:29:21.406581528Z" level=info msg="StopPodSandbox for \"467335aa5308b9bcee740ee8cc05bc1acca05f34e9eb3221e3973639b5124d80\""
Dec 13 13:29:21.406692 containerd[1498]: time="2024-12-13T13:29:21.406673007Z" level=info msg="TearDown network for sandbox \"467335aa5308b9bcee740ee8cc05bc1acca05f34e9eb3221e3973639b5124d80\" successfully"
Dec 13 13:29:21.406692 containerd[1498]: time="2024-12-13T13:29:21.406688467Z" level=info msg="StopPodSandbox for \"467335aa5308b9bcee740ee8cc05bc1acca05f34e9eb3221e3973639b5124d80\" returns successfully"
Dec 13 13:29:21.407887 containerd[1498]: time="2024-12-13T13:29:21.406922533Z" level=info msg="RemovePodSandbox for \"467335aa5308b9bcee740ee8cc05bc1acca05f34e9eb3221e3973639b5124d80\""
Dec 13 13:29:21.407887 containerd[1498]: time="2024-12-13T13:29:21.406946360Z" level=info msg="Forcibly stopping sandbox \"467335aa5308b9bcee740ee8cc05bc1acca05f34e9eb3221e3973639b5124d80\""
Dec 13 13:29:21.407887 containerd[1498]: time="2024-12-13T13:29:21.407020996Z" level=info msg="TearDown network for sandbox \"467335aa5308b9bcee740ee8cc05bc1acca05f34e9eb3221e3973639b5124d80\" successfully"
Dec 13 13:29:21.410356 containerd[1498]: time="2024-12-13T13:29:21.410328607Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"467335aa5308b9bcee740ee8cc05bc1acca05f34e9eb3221e3973639b5124d80\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:21.410406 containerd[1498]: time="2024-12-13T13:29:21.410372202Z" level=info msg="RemovePodSandbox \"467335aa5308b9bcee740ee8cc05bc1acca05f34e9eb3221e3973639b5124d80\" returns successfully"
Dec 13 13:29:21.410610 containerd[1498]: time="2024-12-13T13:29:21.410587151Z" level=info msg="StopPodSandbox for \"6dfdb6d98a11b9497821823674fc80fcf72e2171466363ba65b68caf7835a03a\""
Dec 13 13:29:21.410678 containerd[1498]: time="2024-12-13T13:29:21.410659543Z" level=info msg="TearDown network for sandbox \"6dfdb6d98a11b9497821823674fc80fcf72e2171466363ba65b68caf7835a03a\" successfully"
Dec 13 13:29:21.410678 containerd[1498]: time="2024-12-13T13:29:21.410670064Z" level=info msg="StopPodSandbox for \"6dfdb6d98a11b9497821823674fc80fcf72e2171466363ba65b68caf7835a03a\" returns successfully"
Dec 13 13:29:21.410892 containerd[1498]: time="2024-12-13T13:29:21.410869171Z" level=info msg="RemovePodSandbox for \"6dfdb6d98a11b9497821823674fc80fcf72e2171466363ba65b68caf7835a03a\""
Dec 13 13:29:21.410892 containerd[1498]: time="2024-12-13T13:29:21.410888449Z" level=info msg="Forcibly stopping sandbox \"6dfdb6d98a11b9497821823674fc80fcf72e2171466363ba65b68caf7835a03a\""
Dec 13 13:29:21.410993 containerd[1498]: time="2024-12-13T13:29:21.410954468Z" level=info msg="TearDown network for sandbox \"6dfdb6d98a11b9497821823674fc80fcf72e2171466363ba65b68caf7835a03a\" successfully"
Dec 13 13:29:21.414273 containerd[1498]: time="2024-12-13T13:29:21.414239726Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6dfdb6d98a11b9497821823674fc80fcf72e2171466363ba65b68caf7835a03a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:21.414273 containerd[1498]: time="2024-12-13T13:29:21.414267992Z" level=info msg="RemovePodSandbox \"6dfdb6d98a11b9497821823674fc80fcf72e2171466363ba65b68caf7835a03a\" returns successfully"
Dec 13 13:29:21.414514 containerd[1498]: time="2024-12-13T13:29:21.414495496Z" level=info msg="StopPodSandbox for \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\""
Dec 13 13:29:21.414592 containerd[1498]: time="2024-12-13T13:29:21.414576995Z" level=info msg="TearDown network for sandbox \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\" successfully"
Dec 13 13:29:21.414592 containerd[1498]: time="2024-12-13T13:29:21.414588587Z" level=info msg="StopPodSandbox for \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\" returns successfully"
Dec 13 13:29:21.414811 containerd[1498]: time="2024-12-13T13:29:21.414777205Z" level=info msg="RemovePodSandbox for \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\""
Dec 13 13:29:21.414811 containerd[1498]: time="2024-12-13T13:29:21.414794168Z" level=info msg="Forcibly stopping sandbox \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\""
Dec 13 13:29:21.414924 containerd[1498]: time="2024-12-13T13:29:21.414864805Z" level=info msg="TearDown network for sandbox \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\" successfully"
Dec 13 13:29:21.418136 containerd[1498]: time="2024-12-13T13:29:21.418111048Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:21.418216 containerd[1498]: time="2024-12-13T13:29:21.418147479Z" level=info msg="RemovePodSandbox \"dc6a22312222f3ca619869b10ec59ca9909c08a34f6c5743fcdcf913a808e9fd\" returns successfully"
Dec 13 13:29:21.418390 containerd[1498]: time="2024-12-13T13:29:21.418364983Z" level=info msg="StopPodSandbox for \"9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e\""
Dec 13 13:29:21.418470 containerd[1498]: time="2024-12-13T13:29:21.418448706Z" level=info msg="TearDown network for sandbox \"9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e\" successfully"
Dec 13 13:29:21.418470 containerd[1498]: time="2024-12-13T13:29:21.418463946Z" level=info msg="StopPodSandbox for \"9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e\" returns successfully"
Dec 13 13:29:21.418741 containerd[1498]: time="2024-12-13T13:29:21.418720016Z" level=info msg="RemovePodSandbox for \"9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e\""
Dec 13 13:29:21.418787 containerd[1498]: time="2024-12-13T13:29:21.418743291Z" level=info msg="Forcibly stopping sandbox \"9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e\""
Dec 13 13:29:21.418854 containerd[1498]: time="2024-12-13T13:29:21.418814550Z" level=info msg="TearDown network for sandbox \"9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e\" successfully"
Dec 13 13:29:21.422132 containerd[1498]: time="2024-12-13T13:29:21.422096853Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:21.422132 containerd[1498]: time="2024-12-13T13:29:21.422134486Z" level=info msg="RemovePodSandbox \"9798272879650cea9b6a07391905ef316e9c179222efb886b49286bd3bf64b4e\" returns successfully"
Dec 13 13:29:21.422362 containerd[1498]: time="2024-12-13T13:29:21.422328314Z" level=info msg="StopPodSandbox for \"03c600e6d0728e62b25f6ed67f75a8b294306c981d94ddbebfc34037a6b14765\""
Dec 13 13:29:21.422436 containerd[1498]: time="2024-12-13T13:29:21.422418119Z" level=info msg="TearDown network for sandbox \"03c600e6d0728e62b25f6ed67f75a8b294306c981d94ddbebfc34037a6b14765\" successfully"
Dec 13 13:29:21.422473 containerd[1498]: time="2024-12-13T13:29:21.422432728Z" level=info msg="StopPodSandbox for \"03c600e6d0728e62b25f6ed67f75a8b294306c981d94ddbebfc34037a6b14765\" returns successfully"
Dec 13 13:29:21.422664 containerd[1498]: time="2024-12-13T13:29:21.422638199Z" level=info msg="RemovePodSandbox for \"03c600e6d0728e62b25f6ed67f75a8b294306c981d94ddbebfc34037a6b14765\""
Dec 13 13:29:21.422664 containerd[1498]: time="2024-12-13T13:29:21.422661514Z" level=info msg="Forcibly stopping sandbox \"03c600e6d0728e62b25f6ed67f75a8b294306c981d94ddbebfc34037a6b14765\""
Dec 13 13:29:21.422771 containerd[1498]: time="2024-12-13T13:29:21.422735799Z" level=info msg="TearDown network for sandbox \"03c600e6d0728e62b25f6ed67f75a8b294306c981d94ddbebfc34037a6b14765\" successfully"
Dec 13 13:29:21.426277 containerd[1498]: time="2024-12-13T13:29:21.426246527Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"03c600e6d0728e62b25f6ed67f75a8b294306c981d94ddbebfc34037a6b14765\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:21.426365 containerd[1498]: time="2024-12-13T13:29:21.426285423Z" level=info msg="RemovePodSandbox \"03c600e6d0728e62b25f6ed67f75a8b294306c981d94ddbebfc34037a6b14765\" returns successfully"
Dec 13 13:29:21.426614 containerd[1498]: time="2024-12-13T13:29:21.426584136Z" level=info msg="StopPodSandbox for \"05d9e43a43ac4813fa5ddf60dfed3b7cbfecba045f1fec94667f679ae3450482\""
Dec 13 13:29:21.426707 containerd[1498]: time="2024-12-13T13:29:21.426688139Z" level=info msg="TearDown network for sandbox \"05d9e43a43ac4813fa5ddf60dfed3b7cbfecba045f1fec94667f679ae3450482\" successfully"
Dec 13 13:29:21.426757 containerd[1498]: time="2024-12-13T13:29:21.426704781Z" level=info msg="StopPodSandbox for \"05d9e43a43ac4813fa5ddf60dfed3b7cbfecba045f1fec94667f679ae3450482\" returns successfully"
Dec 13 13:29:21.426962 containerd[1498]: time="2024-12-13T13:29:21.426940631Z" level=info msg="RemovePodSandbox for \"05d9e43a43ac4813fa5ddf60dfed3b7cbfecba045f1fec94667f679ae3450482\""
Dec 13 13:29:21.427021 containerd[1498]: time="2024-12-13T13:29:21.426963095Z" level=info msg="Forcibly stopping sandbox \"05d9e43a43ac4813fa5ddf60dfed3b7cbfecba045f1fec94667f679ae3450482\""
Dec 13 13:29:21.427074 containerd[1498]: time="2024-12-13T13:29:21.427037109Z" level=info msg="TearDown network for sandbox \"05d9e43a43ac4813fa5ddf60dfed3b7cbfecba045f1fec94667f679ae3450482\" successfully"
Dec 13 13:29:21.430511 containerd[1498]: time="2024-12-13T13:29:21.430474314Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"05d9e43a43ac4813fa5ddf60dfed3b7cbfecba045f1fec94667f679ae3450482\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:21.430592 containerd[1498]: time="2024-12-13T13:29:21.430515003Z" level=info msg="RemovePodSandbox \"05d9e43a43ac4813fa5ddf60dfed3b7cbfecba045f1fec94667f679ae3450482\" returns successfully"
Dec 13 13:29:21.430773 containerd[1498]: time="2024-12-13T13:29:21.430733279Z" level=info msg="StopPodSandbox for \"71ea7f0038939e4b171e1a0e2012eaedcc767de193ce9d3dce493ec28eaae007\""
Dec 13 13:29:21.430859 containerd[1498]: time="2024-12-13T13:29:21.430839797Z" level=info msg="TearDown network for sandbox \"71ea7f0038939e4b171e1a0e2012eaedcc767de193ce9d3dce493ec28eaae007\" successfully"
Dec 13 13:29:21.430902 containerd[1498]: time="2024-12-13T13:29:21.430857571Z" level=info msg="StopPodSandbox for \"71ea7f0038939e4b171e1a0e2012eaedcc767de193ce9d3dce493ec28eaae007\" returns successfully"
Dec 13 13:29:21.431073 containerd[1498]: time="2024-12-13T13:29:21.431050879Z" level=info msg="RemovePodSandbox for \"71ea7f0038939e4b171e1a0e2012eaedcc767de193ce9d3dce493ec28eaae007\""
Dec 13 13:29:21.431119 containerd[1498]: time="2024-12-13T13:29:21.431076639Z" level=info msg="Forcibly stopping sandbox \"71ea7f0038939e4b171e1a0e2012eaedcc767de193ce9d3dce493ec28eaae007\""
Dec 13 13:29:21.431186 containerd[1498]: time="2024-12-13T13:29:21.431171414Z" level=info msg="TearDown network for sandbox \"71ea7f0038939e4b171e1a0e2012eaedcc767de193ce9d3dce493ec28eaae007\" successfully"
Dec 13 13:29:21.435372 containerd[1498]: time="2024-12-13T13:29:21.435331698Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"71ea7f0038939e4b171e1a0e2012eaedcc767de193ce9d3dce493ec28eaae007\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:21.435372 containerd[1498]: time="2024-12-13T13:29:21.435376416Z" level=info msg="RemovePodSandbox \"71ea7f0038939e4b171e1a0e2012eaedcc767de193ce9d3dce493ec28eaae007\" returns successfully"
Dec 13 13:29:21.435772 containerd[1498]: time="2024-12-13T13:29:21.435745185Z" level=info msg="StopPodSandbox for \"c9740f69062ad30d9f3bde79f342863e67f34dd8032dd7afce45cef62a2b5883\""
Dec 13 13:29:21.435905 containerd[1498]: time="2024-12-13T13:29:21.435857304Z" level=info msg="TearDown network for sandbox \"c9740f69062ad30d9f3bde79f342863e67f34dd8032dd7afce45cef62a2b5883\" successfully"
Dec 13 13:29:21.435905 containerd[1498]: time="2024-12-13T13:29:21.435900849Z" level=info msg="StopPodSandbox for \"c9740f69062ad30d9f3bde79f342863e67f34dd8032dd7afce45cef62a2b5883\" returns successfully"
Dec 13 13:29:21.436140 containerd[1498]: time="2024-12-13T13:29:21.436120096Z" level=info msg="RemovePodSandbox for \"c9740f69062ad30d9f3bde79f342863e67f34dd8032dd7afce45cef62a2b5883\""
Dec 13 13:29:21.436204 containerd[1498]: time="2024-12-13T13:29:21.436143332Z" level=info msg="Forcibly stopping sandbox \"c9740f69062ad30d9f3bde79f342863e67f34dd8032dd7afce45cef62a2b5883\""
Dec 13 13:29:21.436255 containerd[1498]: time="2024-12-13T13:29:21.436213629Z" level=info msg="TearDown network for sandbox \"c9740f69062ad30d9f3bde79f342863e67f34dd8032dd7afce45cef62a2b5883\" successfully"
Dec 13 13:29:21.439633 containerd[1498]: time="2024-12-13T13:29:21.439595836Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c9740f69062ad30d9f3bde79f342863e67f34dd8032dd7afce45cef62a2b5883\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:21.439633 containerd[1498]: time="2024-12-13T13:29:21.439631075Z" level=info msg="RemovePodSandbox \"c9740f69062ad30d9f3bde79f342863e67f34dd8032dd7afce45cef62a2b5883\" returns successfully"
Dec 13 13:29:21.439915 containerd[1498]: time="2024-12-13T13:29:21.439889118Z" level=info msg="StopPodSandbox for \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\""
Dec 13 13:29:21.440369 containerd[1498]: time="2024-12-13T13:29:21.439985105Z" level=info msg="TearDown network for sandbox \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\" successfully"
Dec 13 13:29:21.440369 containerd[1498]: time="2024-12-13T13:29:21.440019282Z" level=info msg="StopPodSandbox for \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\" returns successfully"
Dec 13 13:29:21.440462 containerd[1498]: time="2024-12-13T13:29:21.440436045Z" level=info msg="RemovePodSandbox for \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\""
Dec 13 13:29:21.440555 containerd[1498]: time="2024-12-13T13:29:21.440460834Z" level=info msg="Forcibly stopping sandbox \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\""
Dec 13 13:29:21.440657 containerd[1498]: time="2024-12-13T13:29:21.440614604Z" level=info msg="TearDown network for sandbox \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\" successfully"
Dec 13 13:29:21.444148 containerd[1498]: time="2024-12-13T13:29:21.444124781Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:21.444225 containerd[1498]: time="2024-12-13T13:29:21.444158607Z" level=info msg="RemovePodSandbox \"ae87101895f5ec8e8d482c3c7635986df9c23e88c541cd9337dc4c1b9ec7249a\" returns successfully"
Dec 13 13:29:21.444440 containerd[1498]: time="2024-12-13T13:29:21.444410678Z" level=info msg="StopPodSandbox for \"985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0\""
Dec 13 13:29:21.444507 containerd[1498]: time="2024-12-13T13:29:21.444492158Z" level=info msg="TearDown network for sandbox \"985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0\" successfully"
Dec 13 13:29:21.444562 containerd[1498]: time="2024-12-13T13:29:21.444504682Z" level=info msg="StopPodSandbox for \"985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0\" returns successfully"
Dec 13 13:29:21.444769 containerd[1498]: time="2024-12-13T13:29:21.444744810Z" level=info msg="RemovePodSandbox for \"985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0\""
Dec 13 13:29:21.444841 containerd[1498]: time="2024-12-13T13:29:21.444770590Z" level=info msg="Forcibly stopping sandbox \"985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0\""
Dec 13 13:29:21.444910 containerd[1498]: time="2024-12-13T13:29:21.444872509Z" level=info msg="TearDown network for sandbox \"985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0\" successfully"
Dec 13 13:29:21.448721 containerd[1498]: time="2024-12-13T13:29:21.448664807Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:21.448769 containerd[1498]: time="2024-12-13T13:29:21.448744492Z" level=info msg="RemovePodSandbox \"985521b800ee9e4cf4d47cb6e010fe6c8da1a824715e66a469cef1efbc3844d0\" returns successfully"
Dec 13 13:29:21.449077 containerd[1498]: time="2024-12-13T13:29:21.449053625Z" level=info msg="StopPodSandbox for \"3c60343585f7563f8d92f9f028242cfed77327709ae5bc696f2b7011ba0df5c7\""
Dec 13 13:29:21.449163 containerd[1498]: time="2024-12-13T13:29:21.449146807Z" level=info msg="TearDown network for sandbox \"3c60343585f7563f8d92f9f028242cfed77327709ae5bc696f2b7011ba0df5c7\" successfully"
Dec 13 13:29:21.449196 containerd[1498]: time="2024-12-13T13:29:21.449160984Z" level=info msg="StopPodSandbox for \"3c60343585f7563f8d92f9f028242cfed77327709ae5bc696f2b7011ba0df5c7\" returns successfully"
Dec 13 13:29:21.449427 containerd[1498]: time="2024-12-13T13:29:21.449406904Z" level=info msg="RemovePodSandbox for \"3c60343585f7563f8d92f9f028242cfed77327709ae5bc696f2b7011ba0df5c7\""
Dec 13 13:29:21.449562 containerd[1498]: time="2024-12-13T13:29:21.449427444Z" level=info msg="Forcibly stopping sandbox \"3c60343585f7563f8d92f9f028242cfed77327709ae5bc696f2b7011ba0df5c7\""
Dec 13 13:29:21.449562 containerd[1498]: time="2024-12-13T13:29:21.449494956Z" level=info msg="TearDown network for sandbox \"3c60343585f7563f8d92f9f028242cfed77327709ae5bc696f2b7011ba0df5c7\" successfully"
Dec 13 13:29:21.453330 containerd[1498]: time="2024-12-13T13:29:21.453295559Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c60343585f7563f8d92f9f028242cfed77327709ae5bc696f2b7011ba0df5c7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:21.453380 containerd[1498]: time="2024-12-13T13:29:21.453347000Z" level=info msg="RemovePodSandbox \"3c60343585f7563f8d92f9f028242cfed77327709ae5bc696f2b7011ba0df5c7\" returns successfully"
Dec 13 13:29:21.453675 containerd[1498]: time="2024-12-13T13:29:21.453641895Z" level=info msg="StopPodSandbox for \"9cdc64baff2682d288b75e881389f1866d65efdaf8bfe5bce2ba8357ea190b3d\""
Dec 13 13:29:21.453760 containerd[1498]: time="2024-12-13T13:29:21.453739094Z" level=info msg="TearDown network for sandbox \"9cdc64baff2682d288b75e881389f1866d65efdaf8bfe5bce2ba8357ea190b3d\" successfully"
Dec 13 13:29:21.453760 containerd[1498]: time="2024-12-13T13:29:21.453754264Z" level=info msg="StopPodSandbox for \"9cdc64baff2682d288b75e881389f1866d65efdaf8bfe5bce2ba8357ea190b3d\" returns successfully"
Dec 13 13:29:21.454047 containerd[1498]: time="2024-12-13T13:29:21.454019471Z" level=info msg="RemovePodSandbox for \"9cdc64baff2682d288b75e881389f1866d65efdaf8bfe5bce2ba8357ea190b3d\""
Dec 13 13:29:21.454047 containerd[1498]: time="2024-12-13T13:29:21.454044921Z" level=info msg="Forcibly stopping sandbox \"9cdc64baff2682d288b75e881389f1866d65efdaf8bfe5bce2ba8357ea190b3d\""
Dec 13 13:29:21.454163 containerd[1498]: time="2024-12-13T13:29:21.454124636Z" level=info msg="TearDown network for sandbox \"9cdc64baff2682d288b75e881389f1866d65efdaf8bfe5bce2ba8357ea190b3d\" successfully"
Dec 13 13:29:21.457747 containerd[1498]: time="2024-12-13T13:29:21.457714449Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9cdc64baff2682d288b75e881389f1866d65efdaf8bfe5bce2ba8357ea190b3d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:21.457840 containerd[1498]: time="2024-12-13T13:29:21.457762592Z" level=info msg="RemovePodSandbox \"9cdc64baff2682d288b75e881389f1866d65efdaf8bfe5bce2ba8357ea190b3d\" returns successfully"
Dec 13 13:29:21.458228 containerd[1498]: time="2024-12-13T13:29:21.458071115Z" level=info msg="StopPodSandbox for \"b746836ca808f3188e0898d7546986cd7f27c26abaf84ca644ce6ca05b8f0f76\""
Dec 13 13:29:21.458228 containerd[1498]: time="2024-12-13T13:29:21.458165058Z" level=info msg="TearDown network for sandbox \"b746836ca808f3188e0898d7546986cd7f27c26abaf84ca644ce6ca05b8f0f76\" successfully"
Dec 13 13:29:21.458228 containerd[1498]: time="2024-12-13T13:29:21.458175518Z" level=info msg="StopPodSandbox for \"b746836ca808f3188e0898d7546986cd7f27c26abaf84ca644ce6ca05b8f0f76\" returns successfully"
Dec 13 13:29:21.458509 containerd[1498]: time="2024-12-13T13:29:21.458424424Z" level=info msg="RemovePodSandbox for \"b746836ca808f3188e0898d7546986cd7f27c26abaf84ca644ce6ca05b8f0f76\""
Dec 13 13:29:21.458509 containerd[1498]: time="2024-12-13T13:29:21.458449082Z" level=info msg="Forcibly stopping sandbox \"b746836ca808f3188e0898d7546986cd7f27c26abaf84ca644ce6ca05b8f0f76\""
Dec 13 13:29:21.458597 containerd[1498]: time="2024-12-13T13:29:21.458561421Z" level=info msg="TearDown network for sandbox \"b746836ca808f3188e0898d7546986cd7f27c26abaf84ca644ce6ca05b8f0f76\" successfully"
Dec 13 13:29:21.463548 containerd[1498]: time="2024-12-13T13:29:21.463510936Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b746836ca808f3188e0898d7546986cd7f27c26abaf84ca644ce6ca05b8f0f76\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:21.463650 containerd[1498]: time="2024-12-13T13:29:21.463565973Z" level=info msg="RemovePodSandbox \"b746836ca808f3188e0898d7546986cd7f27c26abaf84ca644ce6ca05b8f0f76\" returns successfully"
Dec 13 13:29:21.463970 containerd[1498]: time="2024-12-13T13:29:21.463926937Z" level=info msg="StopPodSandbox for \"a73e8ff97e814cbad424afeac37dd53e85ebd33779b3ef6d77187328cbc5a1fa\""
Dec 13 13:29:21.464047 containerd[1498]: time="2024-12-13T13:29:21.464025739Z" level=info msg="TearDown network for sandbox \"a73e8ff97e814cbad424afeac37dd53e85ebd33779b3ef6d77187328cbc5a1fa\" successfully"
Dec 13 13:29:21.464078 containerd[1498]: time="2024-12-13T13:29:21.464043444Z" level=info msg="StopPodSandbox for \"a73e8ff97e814cbad424afeac37dd53e85ebd33779b3ef6d77187328cbc5a1fa\" returns successfully"
Dec 13 13:29:21.464296 containerd[1498]: time="2024-12-13T13:29:21.464268153Z" level=info msg="RemovePodSandbox for \"a73e8ff97e814cbad424afeac37dd53e85ebd33779b3ef6d77187328cbc5a1fa\""
Dec 13 13:29:21.464348 containerd[1498]: time="2024-12-13T13:29:21.464294234Z" level=info msg="Forcibly stopping sandbox \"a73e8ff97e814cbad424afeac37dd53e85ebd33779b3ef6d77187328cbc5a1fa\""
Dec 13 13:29:21.464415 containerd[1498]: time="2024-12-13T13:29:21.464370632Z" level=info msg="TearDown network for sandbox \"a73e8ff97e814cbad424afeac37dd53e85ebd33779b3ef6d77187328cbc5a1fa\" successfully"
Dec 13 13:29:21.468445 containerd[1498]: time="2024-12-13T13:29:21.468414350Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a73e8ff97e814cbad424afeac37dd53e85ebd33779b3ef6d77187328cbc5a1fa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:21.468501 containerd[1498]: time="2024-12-13T13:29:21.468461482Z" level=info msg="RemovePodSandbox \"a73e8ff97e814cbad424afeac37dd53e85ebd33779b3ef6d77187328cbc5a1fa\" returns successfully"
Dec 13 13:29:23.247075 systemd[1]: Started sshd@17-10.0.0.95:22-10.0.0.1:40134.service - OpenSSH per-connection server daemon (10.0.0.1:40134).
Dec 13 13:29:23.283677 sshd[6260]: Accepted publickey for core from 10.0.0.1 port 40134 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:29:23.285444 sshd-session[6260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:29:23.289382 systemd-logind[1484]: New session 18 of user core.
Dec 13 13:29:23.297964 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 13:29:23.406811 sshd[6262]: Connection closed by 10.0.0.1 port 40134
Dec 13 13:29:23.407178 sshd-session[6260]: pam_unix(sshd:session): session closed for user core
Dec 13 13:29:23.418018 systemd[1]: sshd@17-10.0.0.95:22-10.0.0.1:40134.service: Deactivated successfully.
Dec 13 13:29:23.419809 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 13:29:23.421294 systemd-logind[1484]: Session 18 logged out. Waiting for processes to exit.
Dec 13 13:29:23.428142 systemd[1]: Started sshd@18-10.0.0.95:22-10.0.0.1:40142.service - OpenSSH per-connection server daemon (10.0.0.1:40142).
Dec 13 13:29:23.429187 systemd-logind[1484]: Removed session 18.
Dec 13 13:29:23.461026 sshd[6274]: Accepted publickey for core from 10.0.0.1 port 40142 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:29:23.463844 sshd-session[6274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:29:23.467673 systemd-logind[1484]: New session 19 of user core.
Dec 13 13:29:23.481944 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 13:29:23.738781 sshd[6276]: Connection closed by 10.0.0.1 port 40142
Dec 13 13:29:23.739247 sshd-session[6274]: pam_unix(sshd:session): session closed for user core
Dec 13 13:29:23.753497 systemd[1]: sshd@18-10.0.0.95:22-10.0.0.1:40142.service: Deactivated successfully.
Dec 13 13:29:23.755107 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 13:29:23.756366 systemd-logind[1484]: Session 19 logged out. Waiting for processes to exit.
Dec 13 13:29:23.764052 systemd[1]: Started sshd@19-10.0.0.95:22-10.0.0.1:40152.service - OpenSSH per-connection server daemon (10.0.0.1:40152).
Dec 13 13:29:23.764873 systemd-logind[1484]: Removed session 19.
Dec 13 13:29:23.799008 sshd[6287]: Accepted publickey for core from 10.0.0.1 port 40152 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:29:23.800359 sshd-session[6287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:29:23.804047 systemd-logind[1484]: New session 20 of user core.
Dec 13 13:29:23.813941 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 13:29:25.423522 sshd[6289]: Connection closed by 10.0.0.1 port 40152
Dec 13 13:29:25.425165 sshd-session[6287]: pam_unix(sshd:session): session closed for user core
Dec 13 13:29:25.444543 systemd[1]: Started sshd@20-10.0.0.95:22-10.0.0.1:40160.service - OpenSSH per-connection server daemon (10.0.0.1:40160).
Dec 13 13:29:25.469117 systemd[1]: sshd@19-10.0.0.95:22-10.0.0.1:40152.service: Deactivated successfully.
Dec 13 13:29:25.471463 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 13:29:25.472805 systemd-logind[1484]: Session 20 logged out. Waiting for processes to exit.
Dec 13 13:29:25.473742 systemd-logind[1484]: Removed session 20.
Dec 13 13:29:25.595195 sshd[6315]: Accepted publickey for core from 10.0.0.1 port 40160 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:29:25.596722 sshd-session[6315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:29:25.600680 systemd-logind[1484]: New session 21 of user core.
Dec 13 13:29:25.614993 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 13:29:25.959222 sshd[6319]: Connection closed by 10.0.0.1 port 40160
Dec 13 13:29:25.964629 sshd-session[6315]: pam_unix(sshd:session): session closed for user core
Dec 13 13:29:25.981973 systemd[1]: sshd@20-10.0.0.95:22-10.0.0.1:40160.service: Deactivated successfully.
Dec 13 13:29:25.986854 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 13:29:25.991036 systemd-logind[1484]: Session 21 logged out. Waiting for processes to exit.
Dec 13 13:29:25.999628 systemd[1]: Started sshd@21-10.0.0.95:22-10.0.0.1:40164.service - OpenSSH per-connection server daemon (10.0.0.1:40164).
Dec 13 13:29:26.000880 systemd-logind[1484]: Removed session 21.
Dec 13 13:29:26.038806 sshd[6330]: Accepted publickey for core from 10.0.0.1 port 40164 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:29:26.040404 sshd-session[6330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:29:26.044327 systemd-logind[1484]: New session 22 of user core.
Dec 13 13:29:26.049965 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 13:29:26.157658 sshd[6332]: Connection closed by 10.0.0.1 port 40164
Dec 13 13:29:26.158045 sshd-session[6330]: pam_unix(sshd:session): session closed for user core
Dec 13 13:29:26.161821 systemd[1]: sshd@21-10.0.0.95:22-10.0.0.1:40164.service: Deactivated successfully.
Dec 13 13:29:26.164057 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 13:29:26.164843 systemd-logind[1484]: Session 22 logged out. Waiting for processes to exit.
Dec 13 13:29:26.165900 systemd-logind[1484]: Removed session 22.
Dec 13 13:29:31.170478 systemd[1]: Started sshd@22-10.0.0.95:22-10.0.0.1:59974.service - OpenSSH per-connection server daemon (10.0.0.1:59974).
Dec 13 13:29:31.208120 sshd[6350]: Accepted publickey for core from 10.0.0.1 port 59974 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:29:31.209512 sshd-session[6350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:29:31.213381 systemd-logind[1484]: New session 23 of user core.
Dec 13 13:29:31.218969 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 13:29:31.338622 sshd[6352]: Connection closed by 10.0.0.1 port 59974
Dec 13 13:29:31.339066 sshd-session[6350]: pam_unix(sshd:session): session closed for user core
Dec 13 13:29:31.342955 systemd[1]: sshd@22-10.0.0.95:22-10.0.0.1:59974.service: Deactivated successfully.
Dec 13 13:29:31.344711 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 13:29:31.345334 systemd-logind[1484]: Session 23 logged out. Waiting for processes to exit.
Dec 13 13:29:31.346164 systemd-logind[1484]: Removed session 23.
Dec 13 13:29:33.573209 kubelet[2617]: E1213 13:29:33.571579 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:36.215129 kubelet[2617]: E1213 13:29:36.215094 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:36.353217 systemd[1]: Started sshd@23-10.0.0.95:22-10.0.0.1:40012.service - OpenSSH per-connection server daemon (10.0.0.1:40012).
Dec 13 13:29:36.409267 sshd[6405]: Accepted publickey for core from 10.0.0.1 port 40012 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:29:36.411175 sshd-session[6405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:29:36.415242 systemd-logind[1484]: New session 24 of user core.
Dec 13 13:29:36.424973 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 13 13:29:36.557484 sshd[6407]: Connection closed by 10.0.0.1 port 40012
Dec 13 13:29:36.557792 sshd-session[6405]: pam_unix(sshd:session): session closed for user core
Dec 13 13:29:36.562238 systemd[1]: sshd@23-10.0.0.95:22-10.0.0.1:40012.service: Deactivated successfully.
Dec 13 13:29:36.564286 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 13:29:36.565117 systemd-logind[1484]: Session 24 logged out. Waiting for processes to exit.
Dec 13 13:29:36.566063 systemd-logind[1484]: Removed session 24.
Dec 13 13:29:36.912313 kubelet[2617]: I1213 13:29:36.912156 2617 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 13:29:41.215751 kubelet[2617]: E1213 13:29:41.215717 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:41.589092 systemd[1]: Started sshd@24-10.0.0.95:22-10.0.0.1:40020.service - OpenSSH per-connection server daemon (10.0.0.1:40020).
Dec 13 13:29:41.622908 sshd[6421]: Accepted publickey for core from 10.0.0.1 port 40020 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:29:41.624628 sshd-session[6421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:29:41.629113 systemd-logind[1484]: New session 25 of user core.
Dec 13 13:29:41.641034 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 13 13:29:41.759124 sshd[6423]: Connection closed by 10.0.0.1 port 40020
Dec 13 13:29:41.759489 sshd-session[6421]: pam_unix(sshd:session): session closed for user core
Dec 13 13:29:41.763365 systemd[1]: sshd@24-10.0.0.95:22-10.0.0.1:40020.service: Deactivated successfully.
Dec 13 13:29:41.765650 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 13:29:41.766365 systemd-logind[1484]: Session 25 logged out. Waiting for processes to exit.
Dec 13 13:29:41.767237 systemd-logind[1484]: Removed session 25.
Dec 13 13:29:46.771312 systemd[1]: Started sshd@25-10.0.0.95:22-10.0.0.1:37862.service - OpenSSH per-connection server daemon (10.0.0.1:37862).
Dec 13 13:29:46.808994 sshd[6443]: Accepted publickey for core from 10.0.0.1 port 37862 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:29:46.810515 sshd-session[6443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:29:46.814435 systemd-logind[1484]: New session 26 of user core.
Dec 13 13:29:46.820966 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 13 13:29:46.943478 sshd[6445]: Connection closed by 10.0.0.1 port 37862
Dec 13 13:29:46.943919 sshd-session[6443]: pam_unix(sshd:session): session closed for user core
Dec 13 13:29:46.947616 systemd[1]: sshd@25-10.0.0.95:22-10.0.0.1:37862.service: Deactivated successfully.
Dec 13 13:29:46.949925 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 13:29:46.950698 systemd-logind[1484]: Session 26 logged out. Waiting for processes to exit.
Dec 13 13:29:46.952027 systemd-logind[1484]: Removed session 26.