Feb 13 15:50:49.866665 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 14:06:02 -00 2025 Feb 13 15:50:49.866691 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05 Feb 13 15:50:49.866706 kernel: BIOS-provided physical RAM map: Feb 13 15:50:49.866714 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 13 15:50:49.866723 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 13 15:50:49.866731 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 13 15:50:49.866741 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Feb 13 15:50:49.866750 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Feb 13 15:50:49.866759 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Feb 13 15:50:49.866770 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Feb 13 15:50:49.866779 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 13 15:50:49.866787 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 13 15:50:49.866796 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Feb 13 15:50:49.866804 kernel: NX (Execute Disable) protection: active Feb 13 15:50:49.866815 kernel: APIC: Static calls initialized Feb 13 15:50:49.866839 kernel: SMBIOS 2.8 present. 
Feb 13 15:50:49.866849 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Feb 13 15:50:49.866858 kernel: Hypervisor detected: KVM Feb 13 15:50:49.866867 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 13 15:50:49.866877 kernel: kvm-clock: using sched offset of 2284468103 cycles Feb 13 15:50:49.866886 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 13 15:50:49.866896 kernel: tsc: Detected 2794.748 MHz processor Feb 13 15:50:49.866906 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 15:50:49.866916 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 15:50:49.866926 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Feb 13 15:50:49.866939 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Feb 13 15:50:49.866950 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 15:50:49.866959 kernel: Using GB pages for direct mapping Feb 13 15:50:49.866969 kernel: ACPI: Early table checksum verification disabled Feb 13 15:50:49.866980 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Feb 13 15:50:49.866990 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:50:49.866999 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:50:49.867009 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:50:49.867019 kernel: ACPI: FACS 0x000000009CFE0000 000040 Feb 13 15:50:49.867032 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:50:49.867042 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:50:49.867064 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:50:49.867074 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:50:49.867084 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Feb 13 15:50:49.867094 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Feb 13 15:50:49.867108 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Feb 13 15:50:49.867121 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Feb 13 15:50:49.867130 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Feb 13 15:50:49.867140 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Feb 13 15:50:49.867150 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Feb 13 15:50:49.867160 kernel: No NUMA configuration found Feb 13 15:50:49.867170 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Feb 13 15:50:49.867179 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Feb 13 15:50:49.867192 kernel: Zone ranges: Feb 13 15:50:49.867202 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 15:50:49.867213 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Feb 13 15:50:49.867234 kernel: Normal empty Feb 13 15:50:49.867244 kernel: Movable zone start for each node Feb 13 15:50:49.867254 kernel: Early memory node ranges Feb 13 15:50:49.867265 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 13 15:50:49.867281 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Feb 13 15:50:49.867296 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Feb 13 15:50:49.867313 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 15:50:49.867323 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 13 15:50:49.867334 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Feb 13 15:50:49.867344 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 13 15:50:49.867354 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 13 15:50:49.867365 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 13 15:50:49.867375 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 13 15:50:49.867386 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 13 15:50:49.867396 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 15:50:49.867410 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 13 15:50:49.867420 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 13 15:50:49.867431 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 15:50:49.867441 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 13 15:50:49.867451 kernel: TSC deadline timer available Feb 13 15:50:49.867462 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Feb 13 15:50:49.867472 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Feb 13 15:50:49.867482 kernel: kvm-guest: KVM setup pv remote TLB flush Feb 13 15:50:49.867493 kernel: kvm-guest: setup PV sched yield Feb 13 15:50:49.867503 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Feb 13 15:50:49.867517 kernel: Booting paravirtualized kernel on KVM Feb 13 15:50:49.867527 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 15:50:49.867538 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Feb 13 15:50:49.867548 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Feb 13 15:50:49.867559 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Feb 13 15:50:49.867569 kernel: pcpu-alloc: [0] 0 1 2 3 Feb 13 15:50:49.867579 kernel: kvm-guest: PV spinlocks enabled Feb 13 15:50:49.867590 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 15:50:49.867602 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05 Feb 13 15:50:49.867616 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 15:50:49.867626 kernel: random: crng init done Feb 13 15:50:49.867636 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 15:50:49.867647 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 15:50:49.867657 kernel: Fallback order for Node 0: 0 Feb 13 15:50:49.867668 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Feb 13 15:50:49.867695 kernel: Policy zone: DMA32 Feb 13 15:50:49.867720 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 15:50:49.867736 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 138948K reserved, 0K cma-reserved) Feb 13 15:50:49.867745 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 13 15:50:49.867755 kernel: ftrace: allocating 37890 entries in 149 pages Feb 13 15:50:49.867765 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 15:50:49.867775 kernel: Dynamic Preempt: voluntary Feb 13 15:50:49.867784 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 15:50:49.867798 kernel: rcu: RCU event tracing is enabled. Feb 13 15:50:49.867809 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 13 15:50:49.867819 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 15:50:49.867831 kernel: Rude variant of Tasks RCU enabled. Feb 13 15:50:49.867840 kernel: Tracing variant of Tasks RCU enabled. Feb 13 15:50:49.867850 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 15:50:49.867859 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 13 15:50:49.867868 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Feb 13 15:50:49.867878 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 15:50:49.867887 kernel: Console: colour VGA+ 80x25 Feb 13 15:50:49.867896 kernel: printk: console [ttyS0] enabled Feb 13 15:50:49.867905 kernel: ACPI: Core revision 20230628 Feb 13 15:50:49.867917 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Feb 13 15:50:49.867927 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 15:50:49.867937 kernel: x2apic enabled Feb 13 15:50:49.867948 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 15:50:49.867958 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Feb 13 15:50:49.867968 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Feb 13 15:50:49.867979 kernel: kvm-guest: setup PV IPIs Feb 13 15:50:49.868001 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 13 15:50:49.868011 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Feb 13 15:50:49.868022 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Feb 13 15:50:49.868033 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Feb 13 15:50:49.868057 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Feb 13 15:50:49.868071 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Feb 13 15:50:49.868082 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 15:50:49.868093 kernel: Spectre V2 : Mitigation: Retpolines Feb 13 15:50:49.868103 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 15:50:49.868114 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 15:50:49.868128 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Feb 13 15:50:49.868138 kernel: RETBleed: Mitigation: untrained return thunk Feb 13 15:50:49.868149 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 15:50:49.868160 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 15:50:49.868170 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Feb 13 15:50:49.868182 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Feb 13 15:50:49.868192 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Feb 13 15:50:49.868203 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 15:50:49.868217 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 15:50:49.868236 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 15:50:49.868246 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 15:50:49.868257 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Feb 13 15:50:49.868268 kernel: Freeing SMP alternatives memory: 32K Feb 13 15:50:49.868278 kernel: pid_max: default: 32768 minimum: 301 Feb 13 15:50:49.868289 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 15:50:49.868299 kernel: landlock: Up and running. Feb 13 15:50:49.868310 kernel: SELinux: Initializing. Feb 13 15:50:49.868324 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:50:49.868334 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:50:49.868345 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Feb 13 15:50:49.868356 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 15:50:49.868367 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 15:50:49.868378 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 15:50:49.868388 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Feb 13 15:50:49.868399 kernel: ... version: 0 Feb 13 15:50:49.868409 kernel: ... bit width: 48 Feb 13 15:50:49.868422 kernel: ... generic registers: 6 Feb 13 15:50:49.868433 kernel: ... value mask: 0000ffffffffffff Feb 13 15:50:49.868443 kernel: ... max period: 00007fffffffffff Feb 13 15:50:49.868454 kernel: ... fixed-purpose events: 0 Feb 13 15:50:49.868464 kernel: ... 
event mask: 000000000000003f Feb 13 15:50:49.868475 kernel: signal: max sigframe size: 1776 Feb 13 15:50:49.868485 kernel: rcu: Hierarchical SRCU implementation. Feb 13 15:50:49.868496 kernel: rcu: Max phase no-delay instances is 400. Feb 13 15:50:49.868507 kernel: smp: Bringing up secondary CPUs ... Feb 13 15:50:49.868521 kernel: smpboot: x86: Booting SMP configuration: Feb 13 15:50:49.868531 kernel: .... node #0, CPUs: #1 #2 #3 Feb 13 15:50:49.868541 kernel: smp: Brought up 1 node, 4 CPUs Feb 13 15:50:49.868552 kernel: smpboot: Max logical packages: 1 Feb 13 15:50:49.868562 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Feb 13 15:50:49.868573 kernel: devtmpfs: initialized Feb 13 15:50:49.868584 kernel: x86/mm: Memory block size: 128MB Feb 13 15:50:49.868594 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 15:50:49.868605 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 13 15:50:49.868618 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 15:50:49.868629 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 15:50:49.868640 kernel: audit: initializing netlink subsys (disabled) Feb 13 15:50:49.868650 kernel: audit: type=2000 audit(1739461848.574:1): state=initialized audit_enabled=0 res=1 Feb 13 15:50:49.868661 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 15:50:49.868671 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 15:50:49.868682 kernel: cpuidle: using governor menu Feb 13 15:50:49.868692 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 15:50:49.868703 kernel: dca service started, version 1.12.1 Feb 13 15:50:49.868716 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Feb 13 15:50:49.868727 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Feb 13 15:50:49.868737 kernel: PCI: Using configuration type 1 for base access Feb 13 15:50:49.868748 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 15:50:49.868759 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 15:50:49.868770 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 15:50:49.868780 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 15:50:49.868791 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 15:50:49.868801 kernel: ACPI: Added _OSI(Module Device) Feb 13 15:50:49.868815 kernel: ACPI: Added _OSI(Processor Device) Feb 13 15:50:49.868825 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 15:50:49.868836 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 15:50:49.868846 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 15:50:49.868857 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 15:50:49.868867 kernel: ACPI: Interpreter enabled Feb 13 15:50:49.868877 kernel: ACPI: PM: (supports S0 S3 S5) Feb 13 15:50:49.868888 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 15:50:49.868899 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 15:50:49.868912 kernel: PCI: Using E820 reservations for host bridge windows Feb 13 15:50:49.868923 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Feb 13 15:50:49.868933 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 15:50:49.869168 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 15:50:49.869349 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Feb 13 15:50:49.869505 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Feb 13 15:50:49.869521 kernel: PCI host bridge to bus 0000:00 Feb 13 15:50:49.869690 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 15:50:49.869837 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 15:50:49.870038 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 15:50:49.870210 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Feb 13 15:50:49.870364 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Feb 13 15:50:49.870504 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Feb 13 15:50:49.870643 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 15:50:49.870819 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Feb 13 15:50:49.870990 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Feb 13 15:50:49.871174 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Feb 13 15:50:49.871348 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Feb 13 15:50:49.871507 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Feb 13 15:50:49.871661 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 15:50:49.871829 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 15:50:49.871985 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Feb 13 15:50:49.872169 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Feb 13 15:50:49.872330 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Feb 13 15:50:49.872490 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Feb 13 15:50:49.872642 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Feb 13 15:50:49.872794 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Feb 13 
15:50:49.872947 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Feb 13 15:50:49.873201 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Feb 13 15:50:49.873366 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Feb 13 15:50:49.873525 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Feb 13 15:50:49.873680 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Feb 13 15:50:49.873835 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Feb 13 15:50:49.874000 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Feb 13 15:50:49.874181 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Feb 13 15:50:49.874367 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Feb 13 15:50:49.874524 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Feb 13 15:50:49.874669 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Feb 13 15:50:49.874821 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Feb 13 15:50:49.874966 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Feb 13 15:50:49.874980 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 13 15:50:49.874996 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 13 15:50:49.875007 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 15:50:49.875018 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 13 15:50:49.875028 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Feb 13 15:50:49.875039 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Feb 13 15:50:49.875064 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Feb 13 15:50:49.875075 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Feb 13 15:50:49.875085 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Feb 13 15:50:49.875096 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Feb 13 15:50:49.875110 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Feb 13 15:50:49.875121 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Feb 13 15:50:49.875131 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Feb 13 15:50:49.875142 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Feb 13 15:50:49.875152 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Feb 13 15:50:49.875163 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Feb 13 15:50:49.875174 kernel: iommu: Default domain type: Translated Feb 13 15:50:49.875184 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 15:50:49.875195 kernel: PCI: Using ACPI for IRQ routing Feb 13 15:50:49.875209 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 15:50:49.875219 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 13 15:50:49.875242 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Feb 13 15:50:49.875396 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Feb 13 15:50:49.875545 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Feb 13 15:50:49.875695 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 15:50:49.875709 kernel: vgaarb: loaded Feb 13 15:50:49.875720 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Feb 13 15:50:49.875735 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Feb 13 15:50:49.875745 kernel: clocksource: Switched to clocksource kvm-clock Feb 13 15:50:49.875756 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 
15:50:49.875766 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 15:50:49.875777 kernel: pnp: PnP ACPI init Feb 13 15:50:49.875935 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Feb 13 15:50:49.875951 kernel: pnp: PnP ACPI: found 6 devices Feb 13 15:50:49.875962 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 15:50:49.875976 kernel: NET: Registered PF_INET protocol family Feb 13 15:50:49.875987 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 15:50:49.875997 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 15:50:49.876008 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 15:50:49.876019 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 15:50:49.876029 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 15:50:49.876040 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 15:50:49.876086 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:50:49.876098 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:50:49.876115 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 15:50:49.876126 kernel: NET: Registered PF_XDP protocol family Feb 13 15:50:49.876278 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 15:50:49.876412 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 15:50:49.876543 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 15:50:49.876677 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Feb 13 15:50:49.876816 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Feb 13 15:50:49.876952 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Feb 13 15:50:49.876970 kernel: PCI: CLS 0 bytes, default 64 Feb 13 15:50:49.876982 kernel: Initialise system trusted keyrings Feb 13 15:50:49.876992 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 15:50:49.877002 kernel: Key type asymmetric registered Feb 13 15:50:49.877013 kernel: Asymmetric key parser 'x509' registered Feb 13 15:50:49.877022 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 15:50:49.877033 kernel: io scheduler mq-deadline registered Feb 13 15:50:49.877058 kernel: io scheduler kyber registered Feb 13 15:50:49.877069 kernel: io scheduler bfq registered Feb 13 15:50:49.877079 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 15:50:49.877093 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Feb 13 15:50:49.877104 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Feb 13 15:50:49.877114 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Feb 13 15:50:49.877125 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 15:50:49.877136 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 15:50:49.877147 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 15:50:49.877157 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 15:50:49.877168 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 15:50:49.877338 kernel: rtc_cmos 00:04: RTC can wake from S4 Feb 13 15:50:49.877359 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 13 15:50:49.877503 kernel: 
rtc_cmos 00:04: registered as rtc0 Feb 13 15:50:49.877649 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T15:50:49 UTC (1739461849) Feb 13 15:50:49.877793 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Feb 13 15:50:49.877808 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Feb 13 15:50:49.877819 kernel: NET: Registered PF_INET6 protocol family Feb 13 15:50:49.877829 kernel: Segment Routing with IPv6 Feb 13 15:50:49.877844 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 15:50:49.877855 kernel: NET: Registered PF_PACKET protocol family Feb 13 15:50:49.877865 kernel: Key type dns_resolver registered Feb 13 15:50:49.877876 kernel: IPI shorthand broadcast: enabled Feb 13 15:50:49.877887 kernel: sched_clock: Marking stable (546002502, 150675362)->(970493973, -273816109) Feb 13 15:50:49.877897 kernel: registered taskstats version 1 Feb 13 15:50:49.877908 kernel: Loading compiled-in X.509 certificates Feb 13 15:50:49.877919 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 3d19ae6dcd850c11d55bf09bd44e00c45ed399eb' Feb 13 15:50:49.877930 kernel: Key type .fscrypt registered Feb 13 15:50:49.877943 kernel: Key type fscrypt-provisioning registered Feb 13 15:50:49.877954 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 15:50:49.877965 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:50:49.877976 kernel: ima: No architecture policies found Feb 13 15:50:49.877986 kernel: clk: Disabling unused clocks Feb 13 15:50:49.877997 kernel: Freeing unused kernel image (initmem) memory: 43320K Feb 13 15:50:49.878008 kernel: Write protecting the kernel read-only data: 38912k Feb 13 15:50:49.878018 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Feb 13 15:50:49.878029 kernel: Run /init as init process Feb 13 15:50:49.878109 kernel: with arguments: Feb 13 15:50:49.878121 kernel: /init Feb 13 15:50:49.878131 kernel: with environment: Feb 13 15:50:49.878141 kernel: HOME=/ Feb 13 15:50:49.878152 kernel: TERM=linux Feb 13 15:50:49.878162 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:50:49.878176 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:50:49.878189 systemd[1]: Detected virtualization kvm. Feb 13 15:50:49.878205 systemd[1]: Detected architecture x86-64. Feb 13 15:50:49.878216 systemd[1]: Running in initrd. Feb 13 15:50:49.878235 systemd[1]: No hostname configured, using default hostname. Feb 13 15:50:49.878247 systemd[1]: Hostname set to . Feb 13 15:50:49.878258 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:50:49.878270 systemd[1]: Queued start job for default target initrd.target. Feb 13 15:50:49.878281 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:50:49.878293 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:50:49.878309 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 15:50:49.878335 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Feb 13 15:50:49.878349 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 15:50:49.878362 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 15:50:49.878375 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 15:50:49.878391 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 15:50:49.878403 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:50:49.878414 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:50:49.878426 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:50:49.878438 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:50:49.878449 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:50:49.878461 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:50:49.878473 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:50:49.878487 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:50:49.878499 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 15:50:49.878510 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 15:50:49.878522 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:50:49.878534 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:50:49.878546 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:50:49.878558 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:50:49.878569 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 15:50:49.878581 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:50:49.878596 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 15:50:49.878608 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 15:50:49.878620 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:50:49.878631 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:50:49.878643 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:50:49.878655 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 15:50:49.878666 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:50:49.878678 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 15:50:49.878694 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:50:49.878730 systemd-journald[192]: Collecting audit messages is disabled. Feb 13 15:50:49.878764 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:50:49.878777 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:50:49.878791 systemd-journald[192]: Journal started Feb 13 15:50:49.878819 systemd-journald[192]: Runtime Journal (/run/log/journal/46aefe3275a94f1897090188cbdd63bb) is 6.0M, max 48.3M, 42.3M free. 
Feb 13 15:50:49.865716 systemd-modules-load[195]: Inserted module 'overlay' Feb 13 15:50:49.912615 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:50:49.912633 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 15:50:49.912645 kernel: Bridge firewalling registered Feb 13 15:50:49.894495 systemd-modules-load[195]: Inserted module 'br_netfilter' Feb 13 15:50:49.912955 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:50:49.913686 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:50:49.925258 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:50:49.926404 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:50:49.928063 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:50:49.931679 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:50:49.943100 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:50:49.943761 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:50:49.946606 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:50:49.957360 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:50:49.959644 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 15:50:49.974382 dracut-cmdline[228]: dracut-dracut-053 Feb 13 15:50:49.977028 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05 Feb 13 15:50:49.982001 systemd-resolved[220]: Positive Trust Anchors: Feb 13 15:50:49.982010 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:50:49.982062 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:50:49.984714 systemd-resolved[220]: Defaulting to hostname 'linux'. Feb 13 15:50:49.985828 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:50:49.992520 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:50:50.052069 kernel: SCSI subsystem initialized Feb 13 15:50:50.062065 kernel: Loading iSCSI transport class v2.0-870. 
Feb 13 15:50:50.072069 kernel: iscsi: registered transport (tcp) Feb 13 15:50:50.092069 kernel: iscsi: registered transport (qla4xxx) Feb 13 15:50:50.092093 kernel: QLogic iSCSI HBA Driver Feb 13 15:50:50.133210 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 15:50:50.149148 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 15:50:50.173071 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 15:50:50.173113 kernel: device-mapper: uevent: version 1.0.3 Feb 13 15:50:50.173130 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 15:50:50.213071 kernel: raid6: avx2x4 gen() 30023 MB/s Feb 13 15:50:50.230061 kernel: raid6: avx2x2 gen() 30270 MB/s Feb 13 15:50:50.247129 kernel: raid6: avx2x1 gen() 25981 MB/s Feb 13 15:50:50.247149 kernel: raid6: using algorithm avx2x2 gen() 30270 MB/s Feb 13 15:50:50.265193 kernel: raid6: .... xor() 19596 MB/s, rmw enabled Feb 13 15:50:50.265230 kernel: raid6: using avx2x2 recovery algorithm Feb 13 15:50:50.286065 kernel: xor: automatically using best checksumming function avx Feb 13 15:50:50.428074 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 15:50:50.439406 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:50:50.456180 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:50:50.468189 systemd-udevd[411]: Using default interface naming scheme 'v255'. Feb 13 15:50:50.472826 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:50:50.489231 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 15:50:50.500828 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Feb 13 15:50:50.529294 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:50:50.539199 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:50:50.602889 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:50:50.610501 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 15:50:50.626448 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:50:50.630467 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:50:50.633729 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:50:50.636830 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:50:50.641078 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Feb 13 15:50:50.671544 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 15:50:50.671688 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 15:50:50.671700 kernel: libata version 3.00 loaded. Feb 13 15:50:50.671710 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 15:50:50.671721 kernel: GPT:9289727 != 19775487 Feb 13 15:50:50.671731 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 15:50:50.671744 kernel: GPT:9289727 != 19775487 Feb 13 15:50:50.671754 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 15:50:50.671764 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:50:50.648898 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... 
Feb 13 15:50:50.659681 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:50:50.659790 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:50:50.661630 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:50:50.665972 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:50:50.666143 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:50:50.668003 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:50:50.682013 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 15:50:50.682029 kernel: AES CTR mode by8 optimization enabled Feb 13 15:50:50.682296 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:50:50.685319 kernel: ahci 0000:00:1f.2: version 3.0 Feb 13 15:50:50.711017 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Feb 13 15:50:50.711037 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Feb 13 15:50:50.712156 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Feb 13 15:50:50.712314 kernel: scsi host0: ahci Feb 13 15:50:50.712473 kernel: scsi host1: ahci Feb 13 15:50:50.712617 kernel: scsi host2: ahci Feb 13 15:50:50.712764 kernel: scsi host3: ahci Feb 13 15:50:50.712906 kernel: BTRFS: device fsid 0e178e67-0100-48b1-87c9-422b9a68652a devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (465) Feb 13 15:50:50.712917 kernel: scsi host4: ahci Feb 13 15:50:50.713073 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (470) Feb 13 15:50:50.713085 kernel: scsi host5: ahci Feb 13 15:50:50.713242 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Feb 13 15:50:50.713254 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Feb 13 15:50:50.713264 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Feb 13 15:50:50.713274 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Feb 13 15:50:50.713284 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Feb 13 15:50:50.713294 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Feb 13 15:50:50.684973 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:50:50.716630 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 15:50:50.748332 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:50:50.762151 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 15:50:50.767180 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 15:50:50.768447 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 15:50:50.775522 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:50:50.789145 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:50:50.801472 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:50:50.811306 disk-uuid[554]: Primary Header is updated. Feb 13 15:50:50.811306 disk-uuid[554]: Secondary Entries is updated. 
Feb 13 15:50:50.811306 disk-uuid[554]: Secondary Header is updated. Feb 13 15:50:50.816058 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:50:50.820066 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:50:50.826221 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:50:51.017068 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 13 15:50:51.017120 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 13 15:50:51.025067 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 15:50:51.025090 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 15:50:51.025100 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Feb 13 15:50:51.026368 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 13 15:50:51.026381 kernel: ata3.00: applying bridge limits Feb 13 15:50:51.027383 kernel: ata3.00: configured for UDMA/100 Feb 13 15:50:51.028122 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 15:50:51.029125 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 15:50:51.078591 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 13 15:50:51.091595 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 15:50:51.091612 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Feb 13 15:50:51.832078 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:50:51.832580 disk-uuid[557]: The operation has completed successfully. Feb 13 15:50:51.866055 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:50:51.866209 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:50:51.887198 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:50:51.892260 sh[592]: Success Feb 13 15:50:51.904090 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 13 15:50:51.936808 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:50:51.961399 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:50:51.964348 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 15:50:51.976933 kernel: BTRFS info (device dm-0): first mount of filesystem 0e178e67-0100-48b1-87c9-422b9a68652a Feb 13 15:50:51.976960 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:50:51.976972 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:50:51.979707 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:50:51.979723 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:50:51.984317 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:50:51.984896 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 15:50:51.992189 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:50:51.994194 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Feb 13 15:50:52.002914 kernel: BTRFS info (device vda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475 Feb 13 15:50:52.002953 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:50:52.002965 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:50:52.006122 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:50:52.014748 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:50:52.016512 kernel: BTRFS info (device vda6): last unmount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475 Feb 13 15:50:52.095064 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:50:52.117180 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:50:52.139031 systemd-networkd[770]: lo: Link UP Feb 13 15:50:52.139083 systemd-networkd[770]: lo: Gained carrier Feb 13 15:50:52.140629 systemd-networkd[770]: Enumeration completed Feb 13 15:50:52.140707 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:50:52.141014 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:50:52.141018 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:50:52.141999 systemd-networkd[770]: eth0: Link UP Feb 13 15:50:52.142002 systemd-networkd[770]: eth0: Gained carrier Feb 13 15:50:52.142009 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:50:52.143000 systemd[1]: Reached target network.target - Network. Feb 13 15:50:52.164091 systemd-networkd[770]: eth0: DHCPv4 address 10.0.0.80/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:50:52.411846 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 15:50:52.425186 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 15:50:52.477134 ignition[775]: Ignition 2.20.0 Feb 13 15:50:52.477144 ignition[775]: Stage: fetch-offline Feb 13 15:50:52.477192 ignition[775]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:50:52.477203 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:50:52.477305 ignition[775]: parsed url from cmdline: "" Feb 13 15:50:52.477309 ignition[775]: no config URL provided Feb 13 15:50:52.477315 ignition[775]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:50:52.477326 ignition[775]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:50:52.477355 ignition[775]: op(1): [started] loading QEMU firmware config module Feb 13 15:50:52.477360 ignition[775]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 15:50:52.485340 ignition[775]: op(1): [finished] loading QEMU firmware config module Feb 13 15:50:52.524128 ignition[775]: parsing config with SHA512: 214a33e6f3535ccd39626bbaec5599f140c28c012d742a7b14b9b1610d5f9536107bd6392d3c26311901aa74494c27e87f109e0a6e680ea1714d286ec35e8222 Feb 13 15:50:52.529970 unknown[775]: fetched base config from "system" Feb 13 15:50:52.529984 unknown[775]: fetched user config from "qemu" Feb 13 15:50:52.530438 ignition[775]: fetch-offline: fetch-offline passed Feb 13 15:50:52.530528 ignition[775]: Ignition finished successfully Feb 13 15:50:52.532338 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Feb 13 15:50:52.533495 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 15:50:52.543214 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 15:50:52.554338 ignition[785]: Ignition 2.20.0 Feb 13 15:50:52.554347 ignition[785]: Stage: kargs Feb 13 15:50:52.554502 ignition[785]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:50:52.554513 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:50:52.555279 ignition[785]: kargs: kargs passed Feb 13 15:50:52.555318 ignition[785]: Ignition finished successfully Feb 13 15:50:52.561459 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 15:50:52.575240 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 15:50:52.585684 ignition[793]: Ignition 2.20.0 Feb 13 15:50:52.585695 ignition[793]: Stage: disks Feb 13 15:50:52.585861 ignition[793]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:50:52.585873 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:50:52.588583 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 15:50:52.586653 ignition[793]: disks: disks passed Feb 13 15:50:52.590502 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 15:50:52.586699 ignition[793]: Ignition finished successfully Feb 13 15:50:52.592389 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:50:52.594201 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:50:52.596186 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:50:52.597888 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:50:52.610208 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 15:50:52.652199 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 15:50:52.819988 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 15:50:52.830196 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 15:50:52.926068 kernel: EXT4-fs (vda9): mounted filesystem e45e00fd-a630-4f0f-91bb-bc879e42a47e r/w with ordered data mode. Quota mode: none. Feb 13 15:50:52.926131 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 15:50:52.927520 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 15:50:52.940128 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:50:52.941917 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 15:50:52.943281 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 15:50:52.969279 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (811) Feb 13 15:50:52.969299 kernel: BTRFS info (device vda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475 Feb 13 15:50:52.943331 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Feb 13 15:50:52.976755 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:50:52.976774 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:50:52.976785 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:50:52.943358 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:50:52.949361 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 15:50:52.973077 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 15:50:52.978271 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 15:50:53.007014 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 15:50:53.010740 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory Feb 13 15:50:53.014233 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 15:50:53.017284 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 15:50:53.092546 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 15:50:53.109119 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 15:50:53.123877 kernel: BTRFS info (device vda6): last unmount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475 Feb 13 15:50:53.121195 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 15:50:53.124578 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 15:50:53.155362 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 15:50:53.224781 ignition[928]: INFO : Ignition 2.20.0 Feb 13 15:50:53.224781 ignition[928]: INFO : Stage: mount Feb 13 15:50:53.226644 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:50:53.226644 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:50:53.226644 ignition[928]: INFO : mount: mount passed Feb 13 15:50:53.226644 ignition[928]: INFO : Ignition finished successfully Feb 13 15:50:53.231921 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 15:50:53.244109 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 15:50:53.250806 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:50:53.261062 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (937) Feb 13 15:50:53.263071 kernel: BTRFS info (device vda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475 Feb 13 15:50:53.263098 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:50:53.263112 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:50:53.266063 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:50:53.267537 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 15:50:53.309399 ignition[954]: INFO : Ignition 2.20.0 Feb 13 15:50:53.309399 ignition[954]: INFO : Stage: files Feb 13 15:50:53.311277 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:50:53.311277 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:50:53.311277 ignition[954]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:50:53.314995 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:50:53.314995 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:50:53.314995 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:50:53.314995 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:50:53.314995 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:50:53.314431 unknown[954]: wrote ssh authorized keys file for user: core Feb 13 15:50:53.323004 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 15:50:53.323004 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 15:50:53.351195 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 15:50:53.495909 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 15:50:53.498487 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:50:53.498487 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:50:53.498487 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:50:53.498487 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:50:53.498487 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:50:53.498487 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:50:53.498487 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:50:53.498487 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:50:53.498487 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:50:53.498487 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:50:53.498487 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 15:50:53.498487 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 15:50:53.498487 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 15:50:53.498487 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Feb 13 15:50:53.834285 systemd-networkd[770]: eth0: Gained IPv6LL Feb 13 15:50:53.987722 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 15:50:54.385053 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 15:50:54.385053 ignition[954]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 15:50:54.389230 ignition[954]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:50:54.389230 ignition[954]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:50:54.389230 ignition[954]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 15:50:54.389230 ignition[954]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Feb 13 15:50:54.389230 ignition[954]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 15:50:54.389230 ignition[954]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 15:50:54.389230 ignition[954]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Feb 13 15:50:54.389230 ignition[954]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 15:50:54.409488 ignition[954]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 15:50:54.471987 ignition[954]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 15:50:54.471987 ignition[954]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 15:50:54.471987 ignition[954]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Feb 13 15:50:54.471987 ignition[954]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 15:50:54.471987 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:50:54.471987 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:50:54.471987 ignition[954]: INFO : files: files passed Feb 13 15:50:54.471987 ignition[954]: INFO : Ignition finished successfully Feb 13 15:50:54.417708 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:50:54.482181 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 15:50:54.484489 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Feb 13 15:50:54.486200 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:50:54.486301 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 15:50:54.493990 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 15:50:54.496635 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:50:54.498265 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:50:54.499765 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:50:54.499510 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:50:54.501207 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:50:54.513185 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:50:54.535806 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:50:54.535928 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:50:54.538187 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:50:54.540211 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:50:54.542205 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:50:54.559192 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:50:54.572426 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:50:54.621159 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:50:54.631569 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:50:54.632064 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:50:54.665587 ignition[1010]: INFO : Ignition 2.20.0 Feb 13 15:50:54.665587 ignition[1010]: INFO : Stage: umount Feb 13 15:50:54.665587 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:50:54.665587 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:50:54.665587 ignition[1010]: INFO : umount: umount passed Feb 13 15:50:54.665587 ignition[1010]: INFO : Ignition finished successfully Feb 13 15:50:54.632403 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:50:54.632774 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:50:54.632889 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:50:54.633545 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:50:54.633867 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:50:54.634367 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:50:54.634686 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:50:54.635010 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:50:54.635354 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:50:54.635672 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Feb 13 15:50:54.636011 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:50:54.636343 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:50:54.636658 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:50:54.636958 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:50:54.637079 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:50:54.637642 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:50:54.637974 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:50:54.638272 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:50:54.638395 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:50:54.638772 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:50:54.638878 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:50:54.639589 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:50:54.639695 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:50:54.640170 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:50:54.640404 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:50:54.644103 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:50:54.644378 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:50:54.644693 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:50:54.645017 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:50:54.645126 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:50:54.645550 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:50:54.645633 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:50:54.646058 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:50:54.646172 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:50:54.646536 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:50:54.646636 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:50:54.647744 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:50:54.648898 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:50:54.649247 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:50:54.649345 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:50:54.649643 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:50:54.649735 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:50:54.652729 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:50:54.652833 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:50:54.665640 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:50:54.665749 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:50:54.666424 systemd[1]: Stopped target network.target - Network. Feb 13 15:50:54.666491 systemd[1]: ignition-disks.service: Deactivated successfully. 
Feb 13 15:50:54.666536 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:50:54.666834 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:50:54.666875 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:50:54.667409 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:50:54.667450 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:50:54.667727 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:50:54.667769 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:50:54.668184 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:50:54.668501 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:50:54.676001 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:50:54.676551 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:50:54.676702 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:50:54.678186 systemd-networkd[770]: eth0: DHCPv6 lease lost Feb 13 15:50:54.679327 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:50:54.679387 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:50:54.681547 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:50:54.681695 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:50:54.684735 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:50:54.684797 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:50:54.696192 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:50:54.697366 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:50:54.697431 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:50:54.699702 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:50:54.699751 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:50:54.701657 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:50:54.701704 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:50:54.702926 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:50:54.732766 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:50:54.732909 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:50:54.736918 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:50:54.737119 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:50:54.739201 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:50:54.739249 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:50:54.740996 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:50:54.741035 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:50:54.743023 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:50:54.743123 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:50:54.745398 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Feb 13 15:50:54.745447 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:50:54.747034 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:50:54.747103 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:50:54.764200 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:50:54.765600 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:50:54.765660 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:50:54.767777 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 15:50:54.767827 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:50:54.770015 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:50:54.770080 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:50:54.772183 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:50:54.772230 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:50:54.774782 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:50:54.774889 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:50:55.181254 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:50:55.181386 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:50:55.183378 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:50:55.185008 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:50:55.185081 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:50:55.194242 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:50:55.201402 systemd[1]: Switching root. Feb 13 15:50:55.233227 systemd-journald[192]: Journal stopped Feb 13 15:50:56.315007 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Feb 13 15:50:56.315117 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:50:56.315136 kernel: SELinux: policy capability open_perms=1 Feb 13 15:50:56.315148 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:50:56.315161 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:50:56.315172 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:50:56.315184 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:50:56.315195 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:50:56.315210 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:50:56.315222 kernel: audit: type=1403 audit(1739461855.609:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:50:56.315234 systemd[1]: Successfully loaded SELinux policy in 37.825ms. Feb 13 15:50:56.315256 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.213ms. Feb 13 15:50:56.315269 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:50:56.315284 systemd[1]: Detected virtualization kvm. 
Feb 13 15:50:56.315296 systemd[1]: Detected architecture x86-64. Feb 13 15:50:56.315308 systemd[1]: Detected first boot. Feb 13 15:50:56.315320 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:50:56.315334 zram_generator::config[1055]: No configuration found. Feb 13 15:50:56.315348 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:50:56.315360 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:50:56.315372 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:50:56.315386 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:50:56.315399 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:50:56.315412 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:50:56.315424 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:50:56.315438 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:50:56.315450 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:50:56.315463 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:50:56.315475 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:50:56.315487 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:50:56.315499 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:50:56.315511 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:50:56.315523 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:50:56.315540 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:50:56.315556 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 15:50:56.315568 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:50:56.315580 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 15:50:56.315593 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:50:56.315605 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:50:56.315617 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:50:56.315629 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:50:56.315647 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:50:56.315659 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:50:56.315671 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:50:56.315683 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:50:56.315695 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:50:56.315707 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:50:56.315720 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:50:56.315732 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:50:56.315744 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
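"Detected first boot" and "Initializing machine ID from VM UUID" above mean /etc/machine-id was still empty, so systemd derived it from the UUID the hypervisor exposes to the guest. On a KVM/QEMU guest that UUID is visible through DMI; the sketch below merely reads it for inspection (reading usually requires root, and how systemd normalizes it into /etc/machine-id is not reproduced here).

```python
from pathlib import Path

def vm_uuid() -> str:
    """Read the DMI product UUID a KVM/QEMU guest exposes to the OS."""
    return Path("/sys/class/dmi/id/product_uuid").read_text().strip()

if __name__ == "__main__":
    print(vm_uuid())
```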
Feb 13 15:50:56.315758 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:50:56.315770 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:50:56.315782 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:50:56.315794 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:50:56.315807 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:50:56.315820 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:50:56.315832 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:50:56.315844 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:50:56.315856 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:50:56.315870 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:50:56.315883 systemd[1]: Reached target machines.target - Containers. Feb 13 15:50:56.315895 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:50:56.315907 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:50:56.315919 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:50:56.315932 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:50:56.315944 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:50:56.315956 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:50:56.315970 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:50:56.315982 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:50:56.315994 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:50:56.316007 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:50:56.316019 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:50:56.316031 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:50:56.316065 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:50:56.316078 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:50:56.316090 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:50:56.316106 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:50:56.316118 kernel: fuse: init (API version 7.39) Feb 13 15:50:56.316131 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:50:56.316143 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:50:56.316155 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:50:56.316167 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:50:56.316179 systemd[1]: Stopped verity-setup.service. 
Feb 13 15:50:56.316191 kernel: loop: module loaded Feb 13 15:50:56.316203 kernel: ACPI: bus type drm_connector registered Feb 13 15:50:56.316217 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:50:56.316229 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:50:56.316241 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:50:56.316270 systemd-journald[1125]: Collecting audit messages is disabled. Feb 13 15:50:56.316295 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:50:56.316308 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:50:56.316320 systemd-journald[1125]: Journal started Feb 13 15:50:56.316342 systemd-journald[1125]: Runtime Journal (/run/log/journal/46aefe3275a94f1897090188cbdd63bb) is 6.0M, max 48.3M, 42.3M free. Feb 13 15:50:56.095211 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:50:56.111895 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 15:50:56.112350 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:50:56.320065 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:50:56.321490 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:50:56.322765 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:50:56.324082 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:50:56.325572 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:50:56.327194 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:50:56.327401 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:50:56.328934 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:50:56.329173 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:50:56.330637 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:50:56.330813 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:50:56.332276 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:50:56.332451 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:50:56.334001 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:50:56.334339 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:50:56.335677 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:50:56.335848 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:50:56.337218 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:50:56.338580 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:50:56.340077 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:50:56.354844 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:50:56.364162 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:50:56.366396 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Feb 13 15:50:56.367500 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:50:56.367528 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:50:56.370036 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 15:50:56.372359 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:50:56.375283 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:50:56.376189 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:50:56.380208 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:50:56.383157 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:50:56.384395 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:50:56.385432 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:50:56.386592 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:50:56.389235 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:50:56.393686 systemd-journald[1125]: Time spent on flushing to /var/log/journal/46aefe3275a94f1897090188cbdd63bb is 21.483ms for 950 entries. Feb 13 15:50:56.393686 systemd-journald[1125]: System Journal (/var/log/journal/46aefe3275a94f1897090188cbdd63bb) is 8.0M, max 195.6M, 187.6M free. Feb 13 15:50:56.425329 systemd-journald[1125]: Received client request to flush runtime journal. Feb 13 15:50:56.392204 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:50:56.396211 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:50:56.399174 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:50:56.400520 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:50:56.402005 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:50:56.410500 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:50:56.412605 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:50:56.423218 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 15:50:56.430595 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:50:56.432119 kernel: loop0: detected capacity change from 0 to 141000 Feb 13 15:50:56.433001 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:50:56.444519 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:50:56.446494 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:50:56.450810 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Feb 13 15:50:56.450826 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Feb 13 15:50:56.457002 udevadm[1182]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 15:50:56.458675 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:50:56.460023 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:50:56.468196 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:50:56.473902 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:50:56.474587 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 15:50:56.490082 kernel: loop1: detected capacity change from 0 to 138184 Feb 13 15:50:56.495911 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:50:56.507263 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:50:56.527753 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Feb 13 15:50:56.528147 kernel: loop2: detected capacity change from 0 to 211296 Feb 13 15:50:56.528204 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Feb 13 15:50:56.534364 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:50:56.560098 kernel: loop3: detected capacity change from 0 to 141000 Feb 13 15:50:56.571084 kernel: loop4: detected capacity change from 0 to 138184 Feb 13 15:50:56.582071 kernel: loop5: detected capacity change from 0 to 211296 Feb 13 15:50:56.589874 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 15:50:56.591360 (sd-merge)[1196]: Merged extensions into '/usr'. Feb 13 15:50:56.595432 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:50:56.595524 systemd[1]: Reloading... Feb 13 15:50:56.653969 zram_generator::config[1221]: No configuration found. Feb 13 15:50:56.712483 ldconfig[1164]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:50:56.775408 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:50:56.824246 systemd[1]: Reloading finished in 228 ms. Feb 13 15:50:56.858258 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:50:56.859786 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:50:56.873357 systemd[1]: Starting ensure-sysext.service... Feb 13 15:50:56.875777 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:50:56.883741 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:50:56.883830 systemd[1]: Reloading... Feb 13 15:50:56.901274 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:50:56.901574 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:50:56.902568 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:50:56.902875 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Feb 13 15:50:56.902956 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. 
Feb 13 15:50:56.907212 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:50:56.907225 systemd-tmpfiles[1260]: Skipping /boot Feb 13 15:50:56.919404 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:50:56.919418 systemd-tmpfiles[1260]: Skipping /boot Feb 13 15:50:56.944089 zram_generator::config[1287]: No configuration found. Feb 13 15:50:57.054912 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:50:57.104805 systemd[1]: Reloading finished in 220 ms. Feb 13 15:50:57.124487 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:50:57.135879 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:50:57.138381 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:50:57.140717 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:50:57.145474 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:50:57.148445 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:50:57.150381 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:50:57.157003 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:50:57.157593 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:50:57.169698 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:50:57.172905 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:50:57.177882 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:50:57.179140 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:50:57.181390 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:50:57.184680 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:50:57.185746 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:50:57.187432 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:50:57.190216 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:50:57.190776 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:50:57.193445 augenrules[1355]: No rules Feb 13 15:50:57.194551 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:50:57.194813 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:50:57.196401 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:50:57.196637 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:50:57.198900 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:50:57.199103 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
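The sd-merge messages above show systemd-sysext folding the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extension images into /usr, followed by two daemon reloads; this is how the kubernetes-v1.29.2 image written earlier by Ignition ends up contributing binaries to the running system. The sketch below imitates only the discovery step, listing candidate images in a subset of the usual sysext search directories (chosen as an assumption); the actual overlay merge onto /usr is performed by systemd-sysext itself.

```python
from pathlib import Path

# Commonly documented systemd-sysext search locations (not exhaustive).
SYSEXT_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def find_extensions():
    """Return extension images (*.raw) and directories that would be considered."""
    found = []
    for d in SYSEXT_DIRS:
        base = Path(d)
        if not base.is_dir():
            continue
        for entry in sorted(base.iterdir()):
            if entry.is_dir() or entry.suffix == ".raw":
                found.append(entry)
    return found

if __name__ == "__main__":
    for ext in find_extensions():
        print(ext)
```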
Feb 13 15:50:57.207824 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:50:57.208070 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:50:57.211564 systemd-udevd[1351]: Using default interface naming scheme 'v255'. Feb 13 15:50:57.214424 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:50:57.216382 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:50:57.221210 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:50:57.225042 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:50:57.227414 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:50:57.232393 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:50:57.244870 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:50:57.245945 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:50:57.247811 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:50:57.252325 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:50:57.254401 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:50:57.259367 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:50:57.260522 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:50:57.260662 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:50:57.260744 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:50:57.261651 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:50:57.263685 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:50:57.263917 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:50:57.266146 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:50:57.266343 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:50:57.267893 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:50:57.268147 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:50:57.274681 systemd[1]: Finished ensure-sysext.service. Feb 13 15:50:57.278921 augenrules[1375]: /sbin/augenrules: No change Feb 13 15:50:57.280461 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:50:57.280662 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:50:57.289083 augenrules[1417]: No rules Feb 13 15:50:57.289836 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:50:57.290342 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Feb 13 15:50:57.301335 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:50:57.302478 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:50:57.302533 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:50:57.304582 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 15:50:57.307874 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 15:50:57.324795 systemd-resolved[1328]: Positive Trust Anchors: Feb 13 15:50:57.324814 systemd-resolved[1328]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:50:57.324845 systemd-resolved[1328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:50:57.328126 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1406) Feb 13 15:50:57.333959 systemd-resolved[1328]: Defaulting to hostname 'linux'. Feb 13 15:50:57.335975 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:50:57.337455 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:50:57.381378 systemd-networkd[1423]: lo: Link UP Feb 13 15:50:57.381393 systemd-networkd[1423]: lo: Gained carrier Feb 13 15:50:57.381618 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:50:57.385683 systemd-networkd[1423]: Enumeration completed Feb 13 15:50:57.389254 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:50:57.390854 systemd-networkd[1423]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:50:57.390869 systemd-networkd[1423]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:50:57.390878 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:50:57.391882 systemd-networkd[1423]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:50:57.391926 systemd-networkd[1423]: eth0: Link UP Feb 13 15:50:57.391930 systemd-networkd[1423]: eth0: Gained carrier Feb 13 15:50:57.391943 systemd-networkd[1423]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:50:57.392856 systemd[1]: Reached target network.target - Network. Feb 13 15:50:57.396208 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:50:57.401871 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 15:50:57.402663 systemd[1]: Reached target time-set.target - System Time Set. 
Feb 13 15:50:57.408061 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 13 15:50:57.410165 systemd-networkd[1423]: eth0: DHCPv4 address 10.0.0.80/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:50:57.411416 systemd-timesyncd[1426]: Network configuration changed, trying to establish connection. Feb 13 15:50:57.412308 systemd-timesyncd[1426]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 15:50:57.412357 systemd-timesyncd[1426]: Initial clock synchronization to Thu 2025-02-13 15:50:57.336688 UTC. Feb 13 15:50:57.414928 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:50:57.431221 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Feb 13 15:50:57.433769 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Feb 13 15:50:57.433999 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Feb 13 15:50:57.434229 kernel: ACPI: button: Power Button [PWRF] Feb 13 15:50:57.434249 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 13 15:50:57.468182 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 15:50:57.470686 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:50:57.537077 kernel: kvm_amd: TSC scaling supported Feb 13 15:50:57.537184 kernel: kvm_amd: Nested Virtualization enabled Feb 13 15:50:57.537250 kernel: kvm_amd: Nested Paging enabled Feb 13 15:50:57.537295 kernel: kvm_amd: LBR virtualization supported Feb 13 15:50:57.537318 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Feb 13 15:50:57.537340 kernel: kvm_amd: Virtual GIF supported Feb 13 15:50:57.555068 kernel: EDAC MC: Ver: 3.0.0 Feb 13 15:50:57.567289 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:50:57.591267 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:50:57.606236 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:50:57.614661 lvm[1452]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:50:57.643730 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:50:57.645438 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:50:57.646698 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:50:57.647998 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:50:57.649396 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:50:57.650984 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:50:57.652327 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:50:57.653747 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:50:57.655161 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:50:57.655191 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:50:57.656207 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:50:57.658144 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
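systemd-timesyncd above reaches the DHCP-provided gateway 10.0.0.1 on port 123 and records its initial clock synchronization. timesyncd speaks full NTP; purely as an illustration of the wire exchange, here is a bare-bones SNTP round trip in Python. The server address is the one from this log and is only meaningful inside that VM's network.

```python
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def sntp_query(server: str, port: int = 123, timeout: float = 2.0) -> float:
    """Send one SNTP request and return the server's time as a Unix timestamp."""
    packet = bytearray(48)
    packet[0] = (4 << 3) | 3  # LI=0, version=4, mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(packet, (server, port))
        data, _ = s.recvfrom(48)
    secs, frac = struct.unpack("!II", data[40:48])  # Transmit Timestamp field
    return secs - NTP_EPOCH_OFFSET + frac / 2**32

if __name__ == "__main__":
    print(time.ctime(sntp_query("10.0.0.1")))
```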
Feb 13 15:50:57.661001 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:50:57.676647 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:50:57.679203 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:50:57.680941 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:50:57.682249 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:50:57.683338 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:50:57.684413 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:50:57.684441 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:50:57.685431 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:50:57.687634 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:50:57.692138 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:50:57.694322 lvm[1456]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:50:57.695267 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:50:57.696363 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:50:57.700491 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:50:57.700664 jq[1459]: false Feb 13 15:50:57.703357 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:50:57.712291 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:50:57.715833 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:50:57.721595 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:50:57.723495 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:50:57.723925 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:50:57.725614 extend-filesystems[1460]: Found loop3 Feb 13 15:50:57.725614 extend-filesystems[1460]: Found loop4 Feb 13 15:50:57.725614 extend-filesystems[1460]: Found loop5 Feb 13 15:50:57.725614 extend-filesystems[1460]: Found sr0 Feb 13 15:50:57.725614 extend-filesystems[1460]: Found vda Feb 13 15:50:57.725614 extend-filesystems[1460]: Found vda1 Feb 13 15:50:57.725614 extend-filesystems[1460]: Found vda2 Feb 13 15:50:57.725614 extend-filesystems[1460]: Found vda3 Feb 13 15:50:57.725614 extend-filesystems[1460]: Found usr Feb 13 15:50:57.725614 extend-filesystems[1460]: Found vda4 Feb 13 15:50:57.725614 extend-filesystems[1460]: Found vda6 Feb 13 15:50:57.725614 extend-filesystems[1460]: Found vda7 Feb 13 15:50:57.725614 extend-filesystems[1460]: Found vda9 Feb 13 15:50:57.725614 extend-filesystems[1460]: Checking size of /dev/vda9 Feb 13 15:50:57.747976 extend-filesystems[1460]: Resized partition /dev/vda9 Feb 13 15:50:57.726405 systemd[1]: Starting update-engine.service - Update Engine... 
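The extend-filesystems entries above are an inventory pass: the service lists every block device and partition it can see (the loop devices, sr0, vda and its partitions) before concluding that /dev/vda9 needs to be resized. A comparable inventory can be taken from sysfs, as in this rough sketch (illustration only; the real service also inspects filesystem types and sizes).

```python
from pathlib import Path

def list_block_devices():
    """Yield (disk, [partitions]) pairs discovered under /sys/block."""
    for disk in sorted(Path("/sys/block").iterdir()):
        parts = sorted(p.name for p in disk.iterdir()
                       if p.is_dir() and p.name.startswith(disk.name))
        yield disk.name, parts

if __name__ == "__main__":
    for name, parts in list_block_devices():
        print(name, ",".join(parts))
```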
Feb 13 15:50:57.726821 dbus-daemon[1458]: [system] SELinux support is enabled Feb 13 15:50:57.758314 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 15:50:57.731199 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:50:57.758504 extend-filesystems[1482]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:50:57.764743 update_engine[1472]: I20250213 15:50:57.754877 1472 main.cc:92] Flatcar Update Engine starting Feb 13 15:50:57.733957 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:50:57.765195 jq[1475]: true Feb 13 15:50:57.741483 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:50:57.755523 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:50:57.755748 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:50:57.756121 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:50:57.756320 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:50:57.760260 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:50:57.760489 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:50:57.766838 update_engine[1472]: I20250213 15:50:57.766680 1472 update_check_scheduler.cc:74] Next update check in 11m46s Feb 13 15:50:57.774075 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1410) Feb 13 15:50:57.782553 (ntainerd)[1485]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:50:57.784067 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 15:50:57.796275 tar[1483]: linux-amd64/helm Feb 13 15:50:57.815149 jq[1484]: true Feb 13 15:50:57.815281 extend-filesystems[1482]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 15:50:57.815281 extend-filesystems[1482]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:50:57.815281 extend-filesystems[1482]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 15:50:57.806139 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:50:57.827403 extend-filesystems[1460]: Resized filesystem in /dev/vda9 Feb 13 15:50:57.808473 systemd-logind[1471]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 15:50:57.808494 systemd-logind[1471]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 15:50:57.809729 systemd-logind[1471]: New seat seat0. Feb 13 15:50:57.810911 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:50:57.810935 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:50:57.812827 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:50:57.812843 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:50:57.823196 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:50:57.826242 systemd[1]: Started systemd-logind.service - User Login Management. 
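For scale, the on-line resize logged above grows the root filesystem from 553472 to 1864699 blocks; at the 4k block size resize2fs reports, that is roughly 2.1 GiB before and 7.1 GiB after, i.e. the filesystem is expanded to fill the space available on /dev/vda9. The arithmetic:

```python
BLOCK = 4096  # 4k blocks, as reported by resize2fs above

for label, blocks in (("before", 553472), ("after", 1864699)):
    print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
# before: 2.11 GiB
# after: 7.11 GiB
```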
Feb 13 15:50:57.828168 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:50:57.828400 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:50:57.847334 bash[1513]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:50:57.853738 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:50:57.855724 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 15:50:57.861866 locksmithd[1500]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:50:57.973471 containerd[1485]: time="2025-02-13T15:50:57.973373140Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:50:58.001446 containerd[1485]: time="2025-02-13T15:50:58.001393835Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:50:58.003554 containerd[1485]: time="2025-02-13T15:50:58.003201172Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:50:58.003554 containerd[1485]: time="2025-02-13T15:50:58.003233327Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:50:58.003554 containerd[1485]: time="2025-02-13T15:50:58.003249692Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:50:58.003554 containerd[1485]: time="2025-02-13T15:50:58.003407517Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:50:58.003554 containerd[1485]: time="2025-02-13T15:50:58.003422442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:50:58.003554 containerd[1485]: time="2025-02-13T15:50:58.003503732Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:50:58.003554 containerd[1485]: time="2025-02-13T15:50:58.003515462Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:50:58.003782 containerd[1485]: time="2025-02-13T15:50:58.003692341Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:50:58.003782 containerd[1485]: time="2025-02-13T15:50:58.003706076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:50:58.003782 containerd[1485]: time="2025-02-13T15:50:58.003718720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:50:58.003782 containerd[1485]: time="2025-02-13T15:50:58.003728029Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Feb 13 15:50:58.003857 containerd[1485]: time="2025-02-13T15:50:58.003820503Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:50:58.004103 containerd[1485]: time="2025-02-13T15:50:58.004082154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:50:58.004228 containerd[1485]: time="2025-02-13T15:50:58.004209482Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:50:58.004228 containerd[1485]: time="2025-02-13T15:50:58.004225490Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:50:58.004349 containerd[1485]: time="2025-02-13T15:50:58.004330944Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:50:58.004477 containerd[1485]: time="2025-02-13T15:50:58.004448218Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:50:58.009824 containerd[1485]: time="2025-02-13T15:50:58.009797873Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:50:58.009889 containerd[1485]: time="2025-02-13T15:50:58.009840865Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:50:58.009889 containerd[1485]: time="2025-02-13T15:50:58.009863701Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:50:58.009889 containerd[1485]: time="2025-02-13T15:50:58.009878954Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:50:58.009953 containerd[1485]: time="2025-02-13T15:50:58.009891220Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:50:58.010104 containerd[1485]: time="2025-02-13T15:50:58.010087035Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:50:58.010305 containerd[1485]: time="2025-02-13T15:50:58.010287782Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:50:58.010415 containerd[1485]: time="2025-02-13T15:50:58.010399072Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:50:58.010446 containerd[1485]: time="2025-02-13T15:50:58.010418404Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:50:58.010446 containerd[1485]: time="2025-02-13T15:50:58.010435524Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:50:58.010483 containerd[1485]: time="2025-02-13T15:50:58.010450191Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:50:58.010483 containerd[1485]: time="2025-02-13T15:50:58.010465296Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Feb 13 15:50:58.010483 containerd[1485]: time="2025-02-13T15:50:58.010477503Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:50:58.010535 containerd[1485]: time="2025-02-13T15:50:58.010491288Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:50:58.010535 containerd[1485]: time="2025-02-13T15:50:58.010505847Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:50:58.010535 containerd[1485]: time="2025-02-13T15:50:58.010519442Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:50:58.010535 containerd[1485]: time="2025-02-13T15:50:58.010532026Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:50:58.010613 containerd[1485]: time="2025-02-13T15:50:58.010542795Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:50:58.010613 containerd[1485]: time="2025-02-13T15:50:58.010562166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:50:58.010613 containerd[1485]: time="2025-02-13T15:50:58.010575177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:50:58.010613 containerd[1485]: time="2025-02-13T15:50:58.010590460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:50:58.010613 containerd[1485]: time="2025-02-13T15:50:58.010602528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:50:58.010613 containerd[1485]: time="2025-02-13T15:50:58.010613752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:50:58.010726 containerd[1485]: time="2025-02-13T15:50:58.010626406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:50:58.010726 containerd[1485]: time="2025-02-13T15:50:58.010637005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:50:58.010726 containerd[1485]: time="2025-02-13T15:50:58.010648497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:50:58.010726 containerd[1485]: time="2025-02-13T15:50:58.010660684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:50:58.010726 containerd[1485]: time="2025-02-13T15:50:58.010674012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:50:58.010726 containerd[1485]: time="2025-02-13T15:50:58.010685286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:50:58.010726 containerd[1485]: time="2025-02-13T15:50:58.010696104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:50:58.010726 containerd[1485]: time="2025-02-13T15:50:58.010707347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Feb 13 15:50:58.010726 containerd[1485]: time="2025-02-13T15:50:58.010722889Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:50:58.010886 containerd[1485]: time="2025-02-13T15:50:58.010740783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:50:58.010886 containerd[1485]: time="2025-02-13T15:50:58.010753297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:50:58.010886 containerd[1485]: time="2025-02-13T15:50:58.010763707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:50:58.011580 containerd[1485]: time="2025-02-13T15:50:58.011547253Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:50:58.011636 containerd[1485]: time="2025-02-13T15:50:58.011586057Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:50:58.011636 containerd[1485]: time="2025-02-13T15:50:58.011601042Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:50:58.011636 containerd[1485]: time="2025-02-13T15:50:58.011615819Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:50:58.011636 containerd[1485]: time="2025-02-13T15:50:58.011628016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:50:58.011725 containerd[1485]: time="2025-02-13T15:50:58.011653760Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:50:58.011725 containerd[1485]: time="2025-02-13T15:50:58.011666443Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:50:58.011725 containerd[1485]: time="2025-02-13T15:50:58.011678650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:50:58.012029 containerd[1485]: time="2025-02-13T15:50:58.011973726Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:50:58.012029 containerd[1485]: time="2025-02-13T15:50:58.012025173Z" level=info msg="Connect containerd service" Feb 13 15:50:58.012200 containerd[1485]: time="2025-02-13T15:50:58.012089026Z" level=info msg="using legacy CRI server" Feb 13 15:50:58.012200 containerd[1485]: time="2025-02-13T15:50:58.012096846Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:50:58.012238 containerd[1485]: time="2025-02-13T15:50:58.012209387Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:50:58.012824 containerd[1485]: time="2025-02-13T15:50:58.012794359Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:50:58.013077 
containerd[1485]: time="2025-02-13T15:50:58.012978979Z" level=info msg="Start subscribing containerd event" Feb 13 15:50:58.013212 containerd[1485]: time="2025-02-13T15:50:58.013164810Z" level=info msg="Start recovering state" Feb 13 15:50:58.013212 containerd[1485]: time="2025-02-13T15:50:58.013183805Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:50:58.013260 containerd[1485]: time="2025-02-13T15:50:58.013232940Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:50:58.013387 containerd[1485]: time="2025-02-13T15:50:58.013326753Z" level=info msg="Start event monitor" Feb 13 15:50:58.013458 containerd[1485]: time="2025-02-13T15:50:58.013420854Z" level=info msg="Start snapshots syncer" Feb 13 15:50:58.013560 containerd[1485]: time="2025-02-13T15:50:58.013505716Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:50:58.013560 containerd[1485]: time="2025-02-13T15:50:58.013517447Z" level=info msg="Start streaming server" Feb 13 15:50:58.015811 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:50:58.017790 containerd[1485]: time="2025-02-13T15:50:58.017769097Z" level=info msg="containerd successfully booted in 0.046775s" Feb 13 15:50:58.032449 sshd_keygen[1479]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:50:58.055897 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:50:58.063305 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:50:58.071202 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:50:58.071532 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:50:58.078240 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:50:58.088849 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:50:58.091794 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:50:58.094319 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 15:50:58.095618 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:50:58.175965 tar[1483]: linux-amd64/LICENSE Feb 13 15:50:58.176099 tar[1483]: linux-amd64/README.md Feb 13 15:50:58.191432 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:50:58.321459 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:50:58.323832 systemd[1]: Started sshd@0-10.0.0.80:22-10.0.0.1:39166.service - OpenSSH per-connection server daemon (10.0.0.1:39166). Feb 13 15:50:58.377019 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 39166 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:50:58.378725 sshd-session[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:50:58.387878 systemd-logind[1471]: New session 1 of user core. Feb 13 15:50:58.389180 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:50:58.397242 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:50:58.408939 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:50:58.412809 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:50:58.420630 (systemd)[1555]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:50:58.534124 systemd[1555]: Queued start job for default target default.target. 
Feb 13 15:50:58.545237 systemd[1555]: Created slice app.slice - User Application Slice. Feb 13 15:50:58.545260 systemd[1555]: Reached target paths.target - Paths. Feb 13 15:50:58.545273 systemd[1555]: Reached target timers.target - Timers. Feb 13 15:50:58.546679 systemd[1555]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:50:58.558012 systemd[1555]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:50:58.558142 systemd[1555]: Reached target sockets.target - Sockets. Feb 13 15:50:58.558160 systemd[1555]: Reached target basic.target - Basic System. Feb 13 15:50:58.558194 systemd[1555]: Reached target default.target - Main User Target. Feb 13 15:50:58.558225 systemd[1555]: Startup finished in 131ms. Feb 13 15:50:58.558808 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:50:58.561446 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:50:58.571127 systemd-networkd[1423]: eth0: Gained IPv6LL Feb 13 15:50:58.573953 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:50:58.575728 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:50:58.600255 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:50:58.602549 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:50:58.604618 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:50:58.622549 systemd[1]: Started sshd@1-10.0.0.80:22-10.0.0.1:58880.service - OpenSSH per-connection server daemon (10.0.0.1:58880). Feb 13 15:50:58.626577 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:50:58.627158 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 15:50:58.631673 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:50:58.639180 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:50:58.663426 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 58880 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:50:58.664967 sshd-session[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:50:58.668690 systemd-logind[1471]: New session 2 of user core. Feb 13 15:50:58.680191 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:50:58.733444 sshd[1585]: Connection closed by 10.0.0.1 port 58880 Feb 13 15:50:58.733863 sshd-session[1577]: pam_unix(sshd:session): session closed for user core Feb 13 15:50:58.746635 systemd[1]: sshd@1-10.0.0.80:22-10.0.0.1:58880.service: Deactivated successfully. Feb 13 15:50:58.748300 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:50:58.749871 systemd-logind[1471]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:50:58.757326 systemd[1]: Started sshd@2-10.0.0.80:22-10.0.0.1:58896.service - OpenSSH per-connection server daemon (10.0.0.1:58896). Feb 13 15:50:58.759720 systemd-logind[1471]: Removed session 2. Feb 13 15:50:58.794942 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 58896 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:50:58.796363 sshd-session[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:50:58.800531 systemd-logind[1471]: New session 3 of user core. Feb 13 15:50:58.814204 systemd[1]: Started session-3.scope - Session 3 of User core. 
Feb 13 15:50:58.869329 sshd[1592]: Connection closed by 10.0.0.1 port 58896 Feb 13 15:50:58.869663 sshd-session[1590]: pam_unix(sshd:session): session closed for user core Feb 13 15:50:58.873345 systemd[1]: sshd@2-10.0.0.80:22-10.0.0.1:58896.service: Deactivated successfully. Feb 13 15:50:58.875329 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:50:58.875935 systemd-logind[1471]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:50:58.876828 systemd-logind[1471]: Removed session 3. Feb 13 15:50:59.206672 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:50:59.208396 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:50:59.209741 systemd[1]: Startup finished in 678ms (kernel) + 5.918s (initrd) + 3.637s (userspace) = 10.233s. Feb 13 15:50:59.218864 agetty[1546]: failed to open credentials directory Feb 13 15:50:59.222191 (kubelet)[1601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:50:59.230887 agetty[1545]: failed to open credentials directory Feb 13 15:50:59.746368 kubelet[1601]: E0213 15:50:59.746278 1601 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:50:59.750858 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:50:59.751116 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:50:59.751432 systemd[1]: kubelet.service: Consumed 1.034s CPU time. Feb 13 15:51:08.830529 systemd[1]: Started sshd@3-10.0.0.80:22-10.0.0.1:42148.service - OpenSSH per-connection server daemon (10.0.0.1:42148). Feb 13 15:51:08.870858 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 42148 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:51:08.872184 sshd-session[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:51:08.875437 systemd-logind[1471]: New session 4 of user core. Feb 13 15:51:08.891149 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:51:08.943799 sshd[1618]: Connection closed by 10.0.0.1 port 42148 Feb 13 15:51:08.944162 sshd-session[1616]: pam_unix(sshd:session): session closed for user core Feb 13 15:51:08.954829 systemd[1]: sshd@3-10.0.0.80:22-10.0.0.1:42148.service: Deactivated successfully. Feb 13 15:51:08.956626 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:51:08.957945 systemd-logind[1471]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:51:08.975363 systemd[1]: Started sshd@4-10.0.0.80:22-10.0.0.1:42160.service - OpenSSH per-connection server daemon (10.0.0.1:42160). Feb 13 15:51:08.976483 systemd-logind[1471]: Removed session 4. Feb 13 15:51:09.012508 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 42160 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:51:09.013998 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:51:09.017792 systemd-logind[1471]: New session 5 of user core. Feb 13 15:51:09.026157 systemd[1]: Started session-5.scope - Session 5 of User core. 
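
The kubelet exit above (status=1, "failed to load Kubelet config file /var/lib/kubelet/config.yaml") is the normal state of a node that has not been through kubeadm init or join yet: the unit is configured to read that file, kubeadm is what typically writes it, and until it exists systemd just keeps scheduling restarts (the restart counter appears later in this log). A trivial pre-flight check along those lines, with the path taken straight from the error message:

    import os
    import sys

    KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"  # path named in the error above

    # kubeadm init/join normally generates this file; until it exists the kubelet
    # exits with status 1 exactly as logged, and systemd retries on a timer.
    if not os.path.isfile(KUBELET_CONFIG):
        sys.exit(f"{KUBELET_CONFIG} is missing - run kubeadm init/join first")
    print("kubelet config present")
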
Feb 13 15:51:09.074452 sshd[1625]: Connection closed by 10.0.0.1 port 42160 Feb 13 15:51:09.074861 sshd-session[1623]: pam_unix(sshd:session): session closed for user core Feb 13 15:51:09.087735 systemd[1]: sshd@4-10.0.0.80:22-10.0.0.1:42160.service: Deactivated successfully. Feb 13 15:51:09.089390 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:51:09.090886 systemd-logind[1471]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:51:09.092235 systemd[1]: Started sshd@5-10.0.0.80:22-10.0.0.1:42170.service - OpenSSH per-connection server daemon (10.0.0.1:42170). Feb 13 15:51:09.092933 systemd-logind[1471]: Removed session 5. Feb 13 15:51:09.146440 sshd[1630]: Accepted publickey for core from 10.0.0.1 port 42170 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:51:09.148302 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:51:09.152636 systemd-logind[1471]: New session 6 of user core. Feb 13 15:51:09.168287 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:51:09.225795 sshd[1632]: Connection closed by 10.0.0.1 port 42170 Feb 13 15:51:09.226349 sshd-session[1630]: pam_unix(sshd:session): session closed for user core Feb 13 15:51:09.239815 systemd[1]: sshd@5-10.0.0.80:22-10.0.0.1:42170.service: Deactivated successfully. Feb 13 15:51:09.241607 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:51:09.242945 systemd-logind[1471]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:51:09.244145 systemd[1]: Started sshd@6-10.0.0.80:22-10.0.0.1:42174.service - OpenSSH per-connection server daemon (10.0.0.1:42174). Feb 13 15:51:09.244947 systemd-logind[1471]: Removed session 6. Feb 13 15:51:09.285741 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 42174 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:51:09.287429 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:51:09.292293 systemd-logind[1471]: New session 7 of user core. Feb 13 15:51:09.306254 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:51:09.364577 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:51:09.364919 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:51:09.382421 sudo[1640]: pam_unix(sudo:session): session closed for user root Feb 13 15:51:09.384035 sshd[1639]: Connection closed by 10.0.0.1 port 42174 Feb 13 15:51:09.384522 sshd-session[1637]: pam_unix(sshd:session): session closed for user core Feb 13 15:51:09.392734 systemd[1]: sshd@6-10.0.0.80:22-10.0.0.1:42174.service: Deactivated successfully. Feb 13 15:51:09.395191 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:51:09.396809 systemd-logind[1471]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:51:09.404355 systemd[1]: Started sshd@7-10.0.0.80:22-10.0.0.1:42190.service - OpenSSH per-connection server daemon (10.0.0.1:42190). Feb 13 15:51:09.405300 systemd-logind[1471]: Removed session 7. Feb 13 15:51:09.443548 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 42190 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:51:09.444967 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:51:09.449474 systemd-logind[1471]: New session 8 of user core. 
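
The SSH sessions above (4 through 8) all follow the same shape: publickey accept, pam_unix "session opened" for core, a short-lived session scope, then "session closed". When auditing a longer log, pairing those two pam_unix events by sshd-session PID is enough to spot sessions that never closed; a small sketch over a saved copy of this log (the file name boot.log is made up):

    import re

    # Pair pam_unix open/close events by the sshd-session PID, e.g. the
    # sshd-session[1623] entries above. "boot.log" is a hypothetical saved copy.
    EVENT = re.compile(r"sshd-session\[(\d+)\]: pam_unix\(sshd:session\): session (opened|closed)")

    still_open = set()
    with open("boot.log") as log:
        for line in log:
            for pid, state in EVENT.findall(line):
                (still_open.add if state == "opened" else still_open.discard)(pid)

    print("sessions never closed:", sorted(still_open, key=int) or "none")
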
Feb 13 15:51:09.459197 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:51:09.513698 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:51:09.514031 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:51:09.517933 sudo[1649]: pam_unix(sudo:session): session closed for user root Feb 13 15:51:09.525766 sudo[1648]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:51:09.526131 sudo[1648]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:51:09.551456 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:51:09.580599 augenrules[1671]: No rules Feb 13 15:51:09.582462 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:51:09.582740 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:51:09.584227 sudo[1648]: pam_unix(sudo:session): session closed for user root Feb 13 15:51:09.585699 sshd[1647]: Connection closed by 10.0.0.1 port 42190 Feb 13 15:51:09.586036 sshd-session[1645]: pam_unix(sshd:session): session closed for user core Feb 13 15:51:09.606936 systemd[1]: sshd@7-10.0.0.80:22-10.0.0.1:42190.service: Deactivated successfully. Feb 13 15:51:09.608543 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:51:09.609879 systemd-logind[1471]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:51:09.611105 systemd[1]: Started sshd@8-10.0.0.80:22-10.0.0.1:42206.service - OpenSSH per-connection server daemon (10.0.0.1:42206). Feb 13 15:51:09.611757 systemd-logind[1471]: Removed session 8. Feb 13 15:51:09.652657 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 42206 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:51:09.654504 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:51:09.658688 systemd-logind[1471]: New session 9 of user core. Feb 13 15:51:09.676244 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:51:09.731738 sudo[1682]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:51:09.732276 sudo[1682]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:51:09.984777 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:51:09.995234 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:51:09.995417 (dockerd)[1702]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:51:09.996303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:51:10.138355 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
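
The audit-rules restart in the stretch above behaves as expected: the service rebuilds the active rule set from the fragments under /etc/audit/rules.d via augenrules, and since the two fragment files were just deleted by the sudo command, augenrules reports "No rules". A tiny sketch of checking which fragments remain (illustrative only):

    import glob

    # augenrules merges /etc/audit/rules.d/*.rules into the active audit rules;
    # after 80-selinux.rules and 99-default.rules are removed, nothing is left.
    fragments = sorted(glob.glob("/etc/audit/rules.d/*.rules"))
    print(fragments if fragments else "No rules fragments left")
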
Feb 13 15:51:10.143748 (kubelet)[1716]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:51:10.190419 kubelet[1716]: E0213 15:51:10.190358 1716 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:51:10.197921 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:51:10.198180 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:51:10.256999 dockerd[1702]: time="2025-02-13T15:51:10.256859244Z" level=info msg="Starting up" Feb 13 15:51:10.578295 dockerd[1702]: time="2025-02-13T15:51:10.578173099Z" level=info msg="Loading containers: start." Feb 13 15:51:10.761066 kernel: Initializing XFRM netlink socket Feb 13 15:51:10.839755 systemd-networkd[1423]: docker0: Link UP Feb 13 15:51:10.878906 dockerd[1702]: time="2025-02-13T15:51:10.878863443Z" level=info msg="Loading containers: done." Feb 13 15:51:10.937820 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1904725694-merged.mount: Deactivated successfully. Feb 13 15:51:10.941719 dockerd[1702]: time="2025-02-13T15:51:10.941666571Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:51:10.941793 dockerd[1702]: time="2025-02-13T15:51:10.941776939Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 15:51:10.941938 dockerd[1702]: time="2025-02-13T15:51:10.941914485Z" level=info msg="Daemon has completed initialization" Feb 13 15:51:10.977437 dockerd[1702]: time="2025-02-13T15:51:10.977362510Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:51:10.977557 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:51:11.972743 containerd[1485]: time="2025-02-13T15:51:11.972694447Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\"" Feb 13 15:51:12.612905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4074170785.mount: Deactivated successfully. 
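
Once dockerd logs "API listen on /run/docker.sock", the Engine API is answering on that unix socket, and its /_ping endpoint is the cheapest liveness probe. A stdlib-only sketch of that probe, with the socket path taken from the log:

    import socket

    # GET /_ping against the unix socket dockerd reports listening on;
    # a healthy daemon answers HTTP 200 with body "OK".
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect("/run/docker.sock")
        s.sendall(b"GET /_ping HTTP/1.1\r\nHost: docker\r\nConnection: close\r\n\r\n")
        reply = s.recv(4096).decode(errors="replace")

    status = reply.splitlines()[0] if reply else "no reply"
    print("daemon is up" if status.startswith("HTTP/1.1 200") else status)
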
Feb 13 15:51:13.806220 containerd[1485]: time="2025-02-13T15:51:13.806163500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:51:13.806912 containerd[1485]: time="2025-02-13T15:51:13.806880922Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.14: active requests=0, bytes read=35142283" Feb 13 15:51:13.808092 containerd[1485]: time="2025-02-13T15:51:13.808064706Z" level=info msg="ImageCreate event name:\"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:51:13.814448 containerd[1485]: time="2025-02-13T15:51:13.814386454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:51:13.815545 containerd[1485]: time="2025-02-13T15:51:13.815512701Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.14\" with image id \"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.14\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\", size \"35139083\" in 1.842775652s" Feb 13 15:51:13.815592 containerd[1485]: time="2025-02-13T15:51:13.815548813Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\" returns image reference \"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\"" Feb 13 15:51:13.844872 containerd[1485]: time="2025-02-13T15:51:13.844836493Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\"" Feb 13 15:51:15.718127 containerd[1485]: time="2025-02-13T15:51:15.718071797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:51:15.718814 containerd[1485]: time="2025-02-13T15:51:15.718760768Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.14: active requests=0, bytes read=32213164" Feb 13 15:51:15.719978 containerd[1485]: time="2025-02-13T15:51:15.719949028Z" level=info msg="ImageCreate event name:\"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:51:15.722642 containerd[1485]: time="2025-02-13T15:51:15.722594252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:51:15.723735 containerd[1485]: time="2025-02-13T15:51:15.723704472Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.14\" with image id \"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.14\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\", size \"33659710\" in 1.878821335s" Feb 13 15:51:15.723786 containerd[1485]: time="2025-02-13T15:51:15.723733408Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\" returns image reference \"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\"" Feb 13 
15:51:15.752538 containerd[1485]: time="2025-02-13T15:51:15.752494351Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\"" Feb 13 15:51:17.253666 containerd[1485]: time="2025-02-13T15:51:17.253591505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:51:17.254393 containerd[1485]: time="2025-02-13T15:51:17.254344856Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.14: active requests=0, bytes read=17334056" Feb 13 15:51:17.255574 containerd[1485]: time="2025-02-13T15:51:17.255533769Z" level=info msg="ImageCreate event name:\"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:51:17.258084 containerd[1485]: time="2025-02-13T15:51:17.258032981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:51:17.259129 containerd[1485]: time="2025-02-13T15:51:17.259095722Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.14\" with image id \"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.14\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\", size \"18780620\" in 1.506555067s" Feb 13 15:51:17.259168 containerd[1485]: time="2025-02-13T15:51:17.259131172Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\" returns image reference \"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\"" Feb 13 15:51:17.287845 containerd[1485]: time="2025-02-13T15:51:17.287799854Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\"" Feb 13 15:51:19.099865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount808020532.mount: Deactivated successfully. 
Feb 13 15:51:19.453228 containerd[1485]: time="2025-02-13T15:51:19.453070283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:51:19.453730 containerd[1485]: time="2025-02-13T15:51:19.453667672Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=28620592" Feb 13 15:51:19.454872 containerd[1485]: time="2025-02-13T15:51:19.454841514Z" level=info msg="ImageCreate event name:\"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:51:19.456670 containerd[1485]: time="2025-02-13T15:51:19.456641052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:51:19.457289 containerd[1485]: time="2025-02-13T15:51:19.457257756Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"28619611\" in 2.169418395s" Feb 13 15:51:19.457289 containerd[1485]: time="2025-02-13T15:51:19.457285732Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\"" Feb 13 15:51:19.479399 containerd[1485]: time="2025-02-13T15:51:19.479343805Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:51:20.448469 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:51:20.458218 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:51:20.609124 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:51:20.614403 (kubelet)[2028]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:51:21.209599 kubelet[2028]: E0213 15:51:21.209535 2028 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:51:21.214368 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:51:21.214578 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:51:21.889733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount680292.mount: Deactivated successfully. 
Feb 13 15:51:24.029970 containerd[1485]: time="2025-02-13T15:51:24.029899306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:51:24.030599 containerd[1485]: time="2025-02-13T15:51:24.030536735Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 15:51:24.031650 containerd[1485]: time="2025-02-13T15:51:24.031622684Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:51:24.034408 containerd[1485]: time="2025-02-13T15:51:24.034382152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:51:24.035437 containerd[1485]: time="2025-02-13T15:51:24.035410799Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 4.556033437s" Feb 13 15:51:24.035437 containerd[1485]: time="2025-02-13T15:51:24.035434006Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 15:51:24.063999 containerd[1485]: time="2025-02-13T15:51:24.063944559Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 15:51:24.574924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2317486328.mount: Deactivated successfully. 
Feb 13 15:51:24.580929 containerd[1485]: time="2025-02-13T15:51:24.580886823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:51:24.581676 containerd[1485]: time="2025-02-13T15:51:24.581645182Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Feb 13 15:51:24.582713 containerd[1485]: time="2025-02-13T15:51:24.582692059Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:51:24.584816 containerd[1485]: time="2025-02-13T15:51:24.584790370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:51:24.585458 containerd[1485]: time="2025-02-13T15:51:24.585427388Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 521.445129ms" Feb 13 15:51:24.585458 containerd[1485]: time="2025-02-13T15:51:24.585451276Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 15:51:24.646400 containerd[1485]: time="2025-02-13T15:51:24.646354502Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Feb 13 15:51:25.350233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4147759990.mount: Deactivated successfully. Feb 13 15:51:27.942815 containerd[1485]: time="2025-02-13T15:51:27.942748813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:51:27.943621 containerd[1485]: time="2025-02-13T15:51:27.943586280Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Feb 13 15:51:27.944957 containerd[1485]: time="2025-02-13T15:51:27.944906617Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:51:27.947734 containerd[1485]: time="2025-02-13T15:51:27.947702011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:51:27.948879 containerd[1485]: time="2025-02-13T15:51:27.948825858Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.302430341s" Feb 13 15:51:27.948879 containerd[1485]: time="2025-02-13T15:51:27.948874110Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Feb 13 15:51:30.871485 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
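
Each "Pulled image ... in Ns" record above pairs the image size containerd reports with the wall-clock pull time, so an approximate registry throughput can be read straight off the log. A sketch that extracts it from a saved copy (boot.log is a made-up file name; the quotes inside containerd's msg fields are backslash-escaped in the raw log, hence the replace):

    import re

    # e.g.  Pulled image "registry.k8s.io/etcd:3.5.10-0" ... size "56649232" in 3.302430341s
    PULL = re.compile(r'Pulled image "([^"]+)".*?size "(\d+)" in ([\d.]+)(ms|s)')

    with open("boot.log") as log:
        for raw in log:
            line = raw.replace('\\"', '"')          # undo the escaped quotes
            for image, size, dur, unit in PULL.findall(line):
                seconds = float(dur) / 1000.0 if unit == "ms" else float(dur)
                print(f"{image}: {int(size) / seconds / 2**20:.1f} MiB/s (approx.)")
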
Feb 13 15:51:30.884263 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:51:30.903329 systemd[1]: Reloading requested from client PID 2224 ('systemctl') (unit session-9.scope)... Feb 13 15:51:30.903344 systemd[1]: Reloading... Feb 13 15:51:30.981078 zram_generator::config[2263]: No configuration found. Feb 13 15:51:31.152588 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:51:31.230262 systemd[1]: Reloading finished in 326 ms. Feb 13 15:51:31.281826 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:51:31.284820 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:51:31.286849 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:51:31.287104 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:51:31.288794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:51:31.427930 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:51:31.432497 (kubelet)[2313]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:51:31.469979 kubelet[2313]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:51:31.469979 kubelet[2313]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:51:31.469979 kubelet[2313]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:51:31.470425 kubelet[2313]: I0213 15:51:31.470029 2313 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:51:31.741131 kubelet[2313]: I0213 15:51:31.741014 2313 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:51:31.741131 kubelet[2313]: I0213 15:51:31.741054 2313 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:51:31.741340 kubelet[2313]: I0213 15:51:31.741312 2313 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:51:31.756747 kubelet[2313]: E0213 15:51:31.756717 2313 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.80:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.80:6443: connect: connection refused Feb 13 15:51:31.759698 kubelet[2313]: I0213 15:51:31.759676 2313 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:51:31.773787 kubelet[2313]: I0213 15:51:31.773746 2313 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:51:31.775683 kubelet[2313]: I0213 15:51:31.775652 2313 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:51:31.775822 kubelet[2313]: I0213 15:51:31.775801 2313 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:51:31.775918 kubelet[2313]: I0213 15:51:31.775824 2313 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:51:31.775918 kubelet[2313]: I0213 15:51:31.775833 2313 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:51:31.775970 kubelet[2313]: I0213 15:51:31.775939 2313 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:51:31.776058 kubelet[2313]: I0213 15:51:31.776023 2313 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:51:31.776058 kubelet[2313]: I0213 15:51:31.776037 2313 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:51:31.776127 kubelet[2313]: I0213 15:51:31.776080 2313 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:51:31.776127 kubelet[2313]: I0213 15:51:31.776094 2313 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:51:31.777101 kubelet[2313]: I0213 15:51:31.777075 2313 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:51:31.777546 kubelet[2313]: W0213 15:51:31.777436 2313 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Feb 13 15:51:31.777546 kubelet[2313]: E0213 15:51:31.777500 2313 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Feb 13 15:51:31.777546 kubelet[2313]: W0213 15:51:31.777504 2313 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get 
"https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Feb 13 15:51:31.777546 kubelet[2313]: E0213 15:51:31.777543 2313 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Feb 13 15:51:31.779305 kubelet[2313]: I0213 15:51:31.779287 2313 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:51:31.780026 kubelet[2313]: W0213 15:51:31.780009 2313 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 15:51:31.780689 kubelet[2313]: I0213 15:51:31.780549 2313 server.go:1256] "Started kubelet" Feb 13 15:51:31.780689 kubelet[2313]: I0213 15:51:31.780628 2313 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:51:31.780990 kubelet[2313]: I0213 15:51:31.780966 2313 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:51:31.781593 kubelet[2313]: I0213 15:51:31.781315 2313 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:51:31.781593 kubelet[2313]: I0213 15:51:31.781316 2313 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:51:31.781593 kubelet[2313]: I0213 15:51:31.781473 2313 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:51:31.783868 kubelet[2313]: I0213 15:51:31.783568 2313 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:51:31.783868 kubelet[2313]: E0213 15:51:31.783590 2313 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:51:31.783868 kubelet[2313]: I0213 15:51:31.783634 2313 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 15:51:31.783868 kubelet[2313]: I0213 15:51:31.783684 2313 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:51:31.784014 kubelet[2313]: W0213 15:51:31.783930 2313 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Feb 13 15:51:31.784014 kubelet[2313]: E0213 15:51:31.783963 2313 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Feb 13 15:51:31.785035 kubelet[2313]: E0213 15:51:31.785012 2313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="200ms" Feb 13 15:51:31.785584 kubelet[2313]: E0213 15:51:31.785564 2313 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:51:31.786322 kubelet[2313]: I0213 15:51:31.786304 2313 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:51:31.786322 kubelet[2313]: I0213 15:51:31.786318 2313 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:51:31.786409 kubelet[2313]: I0213 15:51:31.786382 2313 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:51:31.786666 kubelet[2313]: E0213 15:51:31.786638 2313 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.80:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.80:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823cf5fdfe85099 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:51:31.780530329 +0000 UTC m=+0.344124974,LastTimestamp:2025-02-13 15:51:31.780530329 +0000 UTC m=+0.344124974,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 15:51:31.798935 kubelet[2313]: I0213 15:51:31.798908 2313 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:51:31.800097 kubelet[2313]: I0213 15:51:31.800012 2313 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:51:31.800097 kubelet[2313]: I0213 15:51:31.800034 2313 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:51:31.800212 kubelet[2313]: I0213 15:51:31.800200 2313 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:51:31.800686 kubelet[2313]: E0213 15:51:31.800659 2313 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:51:31.801586 kubelet[2313]: W0213 15:51:31.801484 2313 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Feb 13 15:51:31.801586 kubelet[2313]: E0213 15:51:31.801533 2313 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Feb 13 15:51:31.801652 kubelet[2313]: I0213 15:51:31.801619 2313 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:51:31.801652 kubelet[2313]: I0213 15:51:31.801631 2313 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:51:31.801699 kubelet[2313]: I0213 15:51:31.801655 2313 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:51:31.884743 kubelet[2313]: I0213 15:51:31.884703 2313 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:51:31.885114 kubelet[2313]: E0213 15:51:31.885090 2313 kubelet_node_status.go:96] "Unable to register node with 
API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Feb 13 15:51:31.901190 kubelet[2313]: E0213 15:51:31.901166 2313 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:51:31.985660 kubelet[2313]: E0213 15:51:31.985628 2313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="400ms" Feb 13 15:51:32.086954 kubelet[2313]: I0213 15:51:32.086867 2313 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:51:32.087154 kubelet[2313]: E0213 15:51:32.087134 2313 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Feb 13 15:51:32.102283 kubelet[2313]: E0213 15:51:32.102255 2313 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:51:32.387248 kubelet[2313]: E0213 15:51:32.387103 2313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="800ms" Feb 13 15:51:32.488736 kubelet[2313]: I0213 15:51:32.488691 2313 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:51:32.489142 kubelet[2313]: E0213 15:51:32.489014 2313 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Feb 13 15:51:32.503177 kubelet[2313]: E0213 15:51:32.503145 2313 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:51:32.869379 kubelet[2313]: I0213 15:51:32.869256 2313 policy_none.go:49] "None policy: Start" Feb 13 15:51:32.870010 kubelet[2313]: I0213 15:51:32.869992 2313 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:51:32.870100 kubelet[2313]: I0213 15:51:32.870020 2313 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:51:32.917679 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:51:32.934384 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:51:32.937341 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 15:51:32.941386 kubelet[2313]: W0213 15:51:32.941318 2313 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Feb 13 15:51:32.941386 kubelet[2313]: E0213 15:51:32.941384 2313 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Feb 13 15:51:32.948001 kubelet[2313]: I0213 15:51:32.947969 2313 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:51:32.948348 kubelet[2313]: I0213 15:51:32.948258 2313 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:51:32.949120 kubelet[2313]: E0213 15:51:32.949062 2313 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 15:51:33.037234 kubelet[2313]: W0213 15:51:33.037158 2313 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Feb 13 15:51:33.037234 kubelet[2313]: E0213 15:51:33.037235 2313 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Feb 13 15:51:33.132261 kubelet[2313]: W0213 15:51:33.132121 2313 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Feb 13 15:51:33.132261 kubelet[2313]: E0213 15:51:33.132183 2313 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Feb 13 15:51:33.187930 kubelet[2313]: E0213 15:51:33.187888 2313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="1.6s" Feb 13 15:51:33.191555 kubelet[2313]: E0213 15:51:33.191516 2313 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.80:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.80:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823cf5fdfe85099 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:51:31.780530329 +0000 UTC m=+0.344124974,LastTimestamp:2025-02-13 15:51:31.780530329 +0000 UTC m=+0.344124974,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 15:51:33.291068 kubelet[2313]: I0213 15:51:33.291024 2313 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:51:33.291482 kubelet[2313]: E0213 15:51:33.291455 2313 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Feb 13 15:51:33.303641 kubelet[2313]: I0213 15:51:33.303606 2313 topology_manager.go:215] "Topology Admit Handler" podUID="89c52773ed476010b52954bb0414086a" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:51:33.304620 kubelet[2313]: I0213 15:51:33.304597 2313 topology_manager.go:215] "Topology Admit Handler" podUID="8dd79284f50d348595750c57a6b03620" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:51:33.305382 kubelet[2313]: I0213 15:51:33.305352 2313 topology_manager.go:215] "Topology Admit Handler" podUID="34a43d8200b04e3b81251db6a65bc0ce" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 15:51:33.310768 systemd[1]: Created slice kubepods-burstable-pod89c52773ed476010b52954bb0414086a.slice - libcontainer container kubepods-burstable-pod89c52773ed476010b52954bb0414086a.slice. Feb 13 15:51:33.330152 systemd[1]: Created slice kubepods-burstable-pod8dd79284f50d348595750c57a6b03620.slice - libcontainer container kubepods-burstable-pod8dd79284f50d348595750c57a6b03620.slice. Feb 13 15:51:33.340359 kubelet[2313]: W0213 15:51:33.340319 2313 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Feb 13 15:51:33.340359 kubelet[2313]: E0213 15:51:33.340361 2313 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Feb 13 15:51:33.342920 systemd[1]: Created slice kubepods-burstable-pod34a43d8200b04e3b81251db6a65bc0ce.slice - libcontainer container kubepods-burstable-pod34a43d8200b04e3b81251db6a65bc0ce.slice. 
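The three "Topology Admit Handler" entries are the kubelet admitting the control-plane static pods it read from /etc/kubernetes/manifests, and the matching "Created slice" lines show how the systemd cgroup driver (CgroupDriver "systemd" in the container-manager config dumped earlier) names each pod's cgroup: the QoS class selects the parent slice and dashes in the pod UID are escaped to underscores. A small sketch of that mapping, inferred purely from the names visible in this log rather than taken from kubelet source:

```go
// podslice.go - derive the systemd slice name used for a pod's cgroup,
// matching the "Created slice kubepods-..." entries in this log.
package main

import (
	"fmt"
	"strings"
)

// sliceName mirrors the observed pattern: QoS class picks the parent slice,
// dashes in the pod UID become underscores. Guaranteed pods sit directly
// under kubepods.slice (assumption; no guaranteed pod appears in this log).
func sliceName(qos, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	if qos == "guaranteed" {
		return fmt.Sprintf("kubepods-pod%s.slice", escaped)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
}

func main() {
	// UIDs taken from the log entries above and further below.
	fmt.Println(sliceName("burstable", "89c52773ed476010b52954bb0414086a"))
	fmt.Println(sliceName("besteffort", "1696cb37-ef5c-4c1e-bfba-d2dbe26553cf"))
}
```

Running it reproduces kubepods-burstable-pod89c52773ed476010b52954bb0414086a.slice above and the kubepods-besteffort-pod1696cb37_ef5c_4c1e_bfba_d2dbe26553cf.slice created later for kube-proxy.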
Feb 13 15:51:33.391785 kubelet[2313]: I0213 15:51:33.391664 2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89c52773ed476010b52954bb0414086a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"89c52773ed476010b52954bb0414086a\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:51:33.391785 kubelet[2313]: I0213 15:51:33.391701 2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:51:33.391785 kubelet[2313]: I0213 15:51:33.391722 2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34a43d8200b04e3b81251db6a65bc0ce-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"34a43d8200b04e3b81251db6a65bc0ce\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:51:33.391785 kubelet[2313]: I0213 15:51:33.391738 2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89c52773ed476010b52954bb0414086a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"89c52773ed476010b52954bb0414086a\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:51:33.391785 kubelet[2313]: I0213 15:51:33.391756 2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89c52773ed476010b52954bb0414086a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"89c52773ed476010b52954bb0414086a\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:51:33.391981 kubelet[2313]: I0213 15:51:33.391773 2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:51:33.391981 kubelet[2313]: I0213 15:51:33.391804 2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:51:33.391981 kubelet[2313]: I0213 15:51:33.391833 2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:51:33.391981 kubelet[2313]: I0213 15:51:33.391893 2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 13 15:51:33.627790 kubelet[2313]: E0213 15:51:33.627730 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:51:33.628689 containerd[1485]: time="2025-02-13T15:51:33.628652611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:89c52773ed476010b52954bb0414086a,Namespace:kube-system,Attempt:0,}" Feb 13 15:51:33.642097 kubelet[2313]: E0213 15:51:33.641984 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:51:33.642401 containerd[1485]: time="2025-02-13T15:51:33.642376901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8dd79284f50d348595750c57a6b03620,Namespace:kube-system,Attempt:0,}" Feb 13 15:51:33.644701 kubelet[2313]: E0213 15:51:33.644662 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:51:33.645117 containerd[1485]: time="2025-02-13T15:51:33.645075462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:34a43d8200b04e3b81251db6a65bc0ce,Namespace:kube-system,Attempt:0,}" Feb 13 15:51:33.804722 kubelet[2313]: E0213 15:51:33.804682 2313 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.80:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.80:6443: connect: connection refused Feb 13 15:51:34.584573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1845242331.mount: Deactivated successfully. 
Feb 13 15:51:34.631574 kubelet[2313]: W0213 15:51:34.631527 2313 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Feb 13 15:51:34.631574 kubelet[2313]: E0213 15:51:34.631573 2313 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Feb 13 15:51:34.655763 containerd[1485]: time="2025-02-13T15:51:34.655698989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:51:34.658653 containerd[1485]: time="2025-02-13T15:51:34.658613895Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 15:51:34.659614 containerd[1485]: time="2025-02-13T15:51:34.659579623Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:51:34.661473 containerd[1485]: time="2025-02-13T15:51:34.661435156Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:51:34.662266 containerd[1485]: time="2025-02-13T15:51:34.662218481Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:51:34.663450 containerd[1485]: time="2025-02-13T15:51:34.663420874Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:51:34.664380 containerd[1485]: time="2025-02-13T15:51:34.664346297Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:51:34.665360 containerd[1485]: time="2025-02-13T15:51:34.665328264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:51:34.666157 containerd[1485]: time="2025-02-13T15:51:34.666128962Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.037377154s" Feb 13 15:51:34.668668 containerd[1485]: time="2025-02-13T15:51:34.668632264Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.026191342s" Feb 13 15:51:34.673725 containerd[1485]: time="2025-02-13T15:51:34.673694563Z" 
level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.028511048s" Feb 13 15:51:34.789926 kubelet[2313]: E0213 15:51:34.789849 2313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="3.2s" Feb 13 15:51:34.794297 containerd[1485]: time="2025-02-13T15:51:34.793657407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:51:34.794297 containerd[1485]: time="2025-02-13T15:51:34.793828930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:51:34.794297 containerd[1485]: time="2025-02-13T15:51:34.793870718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:51:34.794297 containerd[1485]: time="2025-02-13T15:51:34.793990735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:51:34.794485 kubelet[2313]: W0213 15:51:34.794245 2313 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Feb 13 15:51:34.794485 kubelet[2313]: E0213 15:51:34.794284 2313 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Feb 13 15:51:34.796496 containerd[1485]: time="2025-02-13T15:51:34.795190412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:51:34.796496 containerd[1485]: time="2025-02-13T15:51:34.795230288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:51:34.796496 containerd[1485]: time="2025-02-13T15:51:34.795244174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:51:34.796496 containerd[1485]: time="2025-02-13T15:51:34.795450712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:51:34.796496 containerd[1485]: time="2025-02-13T15:51:34.792945517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:51:34.796496 containerd[1485]: time="2025-02-13T15:51:34.795081077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:51:34.796496 containerd[1485]: time="2025-02-13T15:51:34.795102497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:51:34.796496 containerd[1485]: time="2025-02-13T15:51:34.795208157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:51:34.816204 systemd[1]: Started cri-containerd-32a5c85180ec9b21990e89cb9eee8cc299a0c977f53f32c0fe0ee8d966ba4767.scope - libcontainer container 32a5c85180ec9b21990e89cb9eee8cc299a0c977f53f32c0fe0ee8d966ba4767. Feb 13 15:51:34.820145 systemd[1]: Started cri-containerd-48820988b2750fd104decb7bb391de5c1e2cfcce5ecfc733aa52940f9aa0fe98.scope - libcontainer container 48820988b2750fd104decb7bb391de5c1e2cfcce5ecfc733aa52940f9aa0fe98. Feb 13 15:51:34.822133 systemd[1]: Started cri-containerd-f4a8a4ad0a9a7f940b8fc15336ee404d8feffbc5f77275c284a25f3f2c821433.scope - libcontainer container f4a8a4ad0a9a7f940b8fc15336ee404d8feffbc5f77275c284a25f3f2c821433. Feb 13 15:51:34.860988 containerd[1485]: time="2025-02-13T15:51:34.859840027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:34a43d8200b04e3b81251db6a65bc0ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"48820988b2750fd104decb7bb391de5c1e2cfcce5ecfc733aa52940f9aa0fe98\"" Feb 13 15:51:34.861825 kubelet[2313]: E0213 15:51:34.861755 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:51:34.863420 containerd[1485]: time="2025-02-13T15:51:34.863370222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:89c52773ed476010b52954bb0414086a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4a8a4ad0a9a7f940b8fc15336ee404d8feffbc5f77275c284a25f3f2c821433\"" Feb 13 15:51:34.865922 containerd[1485]: time="2025-02-13T15:51:34.865890386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8dd79284f50d348595750c57a6b03620,Namespace:kube-system,Attempt:0,} returns sandbox id \"32a5c85180ec9b21990e89cb9eee8cc299a0c977f53f32c0fe0ee8d966ba4767\"" Feb 13 15:51:34.865983 kubelet[2313]: E0213 15:51:34.865917 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:51:34.866371 kubelet[2313]: E0213 15:51:34.866333 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:51:34.867704 containerd[1485]: time="2025-02-13T15:51:34.867651421Z" level=info msg="CreateContainer within sandbox \"48820988b2750fd104decb7bb391de5c1e2cfcce5ecfc733aa52940f9aa0fe98\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:51:34.867808 containerd[1485]: time="2025-02-13T15:51:34.867654887Z" level=info msg="CreateContainer within sandbox \"f4a8a4ad0a9a7f940b8fc15336ee404d8feffbc5f77275c284a25f3f2c821433\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:51:34.868370 containerd[1485]: time="2025-02-13T15:51:34.868342562Z" level=info msg="CreateContainer within sandbox \"32a5c85180ec9b21990e89cb9eee8cc299a0c977f53f32c0fe0ee8d966ba4767\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:51:34.888147 containerd[1485]: time="2025-02-13T15:51:34.888098600Z" level=info msg="CreateContainer within sandbox 
\"48820988b2750fd104decb7bb391de5c1e2cfcce5ecfc733aa52940f9aa0fe98\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"febb96db813340455e4016d959e9dfd72a7f02948055fd1f01487dd3adae1bc0\"" Feb 13 15:51:34.888685 containerd[1485]: time="2025-02-13T15:51:34.888653875Z" level=info msg="StartContainer for \"febb96db813340455e4016d959e9dfd72a7f02948055fd1f01487dd3adae1bc0\"" Feb 13 15:51:34.893084 kubelet[2313]: I0213 15:51:34.893063 2313 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:51:34.893518 containerd[1485]: time="2025-02-13T15:51:34.893438792Z" level=info msg="CreateContainer within sandbox \"f4a8a4ad0a9a7f940b8fc15336ee404d8feffbc5f77275c284a25f3f2c821433\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"90821e1b7fc8baf3095530a95afb114ab617fd5f35ec468601cc6ded51b185f1\"" Feb 13 15:51:34.893564 kubelet[2313]: E0213 15:51:34.893496 2313 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Feb 13 15:51:34.893928 containerd[1485]: time="2025-02-13T15:51:34.893904097Z" level=info msg="StartContainer for \"90821e1b7fc8baf3095530a95afb114ab617fd5f35ec468601cc6ded51b185f1\"" Feb 13 15:51:34.897571 containerd[1485]: time="2025-02-13T15:51:34.897491661Z" level=info msg="CreateContainer within sandbox \"32a5c85180ec9b21990e89cb9eee8cc299a0c977f53f32c0fe0ee8d966ba4767\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ff6eeee3cb11f41d6c536cdfe138f220f8c102128ad870afbf5469bfd183d80d\"" Feb 13 15:51:34.897832 containerd[1485]: time="2025-02-13T15:51:34.897789121Z" level=info msg="StartContainer for \"ff6eeee3cb11f41d6c536cdfe138f220f8c102128ad870afbf5469bfd183d80d\"" Feb 13 15:51:34.916637 systemd[1]: Started cri-containerd-febb96db813340455e4016d959e9dfd72a7f02948055fd1f01487dd3adae1bc0.scope - libcontainer container febb96db813340455e4016d959e9dfd72a7f02948055fd1f01487dd3adae1bc0. Feb 13 15:51:34.935213 systemd[1]: Started cri-containerd-90821e1b7fc8baf3095530a95afb114ab617fd5f35ec468601cc6ded51b185f1.scope - libcontainer container 90821e1b7fc8baf3095530a95afb114ab617fd5f35ec468601cc6ded51b185f1. Feb 13 15:51:34.938567 systemd[1]: Started cri-containerd-ff6eeee3cb11f41d6c536cdfe138f220f8c102128ad870afbf5469bfd183d80d.scope - libcontainer container ff6eeee3cb11f41d6c536cdfe138f220f8c102128ad870afbf5469bfd183d80d. 
Feb 13 15:51:34.979264 containerd[1485]: time="2025-02-13T15:51:34.978132737Z" level=info msg="StartContainer for \"90821e1b7fc8baf3095530a95afb114ab617fd5f35ec468601cc6ded51b185f1\" returns successfully" Feb 13 15:51:34.979264 containerd[1485]: time="2025-02-13T15:51:34.978143658Z" level=info msg="StartContainer for \"febb96db813340455e4016d959e9dfd72a7f02948055fd1f01487dd3adae1bc0\" returns successfully" Feb 13 15:51:34.990912 containerd[1485]: time="2025-02-13T15:51:34.990818406Z" level=info msg="StartContainer for \"ff6eeee3cb11f41d6c536cdfe138f220f8c102128ad870afbf5469bfd183d80d\" returns successfully" Feb 13 15:51:35.816076 kubelet[2313]: E0213 15:51:35.814302 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:51:35.822212 kubelet[2313]: E0213 15:51:35.822172 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:51:35.828643 kubelet[2313]: E0213 15:51:35.828611 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:51:36.215937 kubelet[2313]: E0213 15:51:36.215793 2313 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Feb 13 15:51:36.565267 kubelet[2313]: E0213 15:51:36.565234 2313 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Feb 13 15:51:36.823529 kubelet[2313]: E0213 15:51:36.823422 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:51:36.823529 kubelet[2313]: E0213 15:51:36.823440 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:51:37.015675 kubelet[2313]: E0213 15:51:37.015637 2313 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Feb 13 15:51:38.062490 kubelet[2313]: E0213 15:51:38.062455 2313 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Feb 13 15:51:38.063002 kubelet[2313]: E0213 15:51:38.062523 2313 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 15:51:38.095181 kubelet[2313]: I0213 15:51:38.095135 2313 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:51:38.185822 kubelet[2313]: I0213 15:51:38.185767 2313 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 15:51:38.192108 kubelet[2313]: E0213 15:51:38.192080 2313 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:51:38.292650 kubelet[2313]: E0213 15:51:38.292630 2313 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:51:38.393333 
kubelet[2313]: E0213 15:51:38.393184 2313 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:51:38.493760 kubelet[2313]: E0213 15:51:38.493716 2313 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:51:38.594372 kubelet[2313]: E0213 15:51:38.594315 2313 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:51:38.779875 kubelet[2313]: I0213 15:51:38.779816 2313 apiserver.go:52] "Watching apiserver" Feb 13 15:51:38.784774 kubelet[2313]: I0213 15:51:38.784695 2313 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:51:39.635177 systemd[1]: Reloading requested from client PID 2590 ('systemctl') (unit session-9.scope)... Feb 13 15:51:39.635191 systemd[1]: Reloading... Feb 13 15:51:39.703089 zram_generator::config[2632]: No configuration found. Feb 13 15:51:40.017942 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:51:40.110035 systemd[1]: Reloading finished in 474 ms. Feb 13 15:51:40.157539 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:51:40.180430 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:51:40.180731 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:51:40.193258 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:51:40.335599 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:51:40.340488 (kubelet)[2674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:51:40.383086 kubelet[2674]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:51:40.383086 kubelet[2674]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:51:40.383086 kubelet[2674]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:51:40.383514 kubelet[2674]: I0213 15:51:40.383150 2674 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:51:40.388463 kubelet[2674]: I0213 15:51:40.388435 2674 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:51:40.388463 kubelet[2674]: I0213 15:51:40.388457 2674 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:51:40.388612 kubelet[2674]: I0213 15:51:40.388596 2674 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:51:40.389897 kubelet[2674]: I0213 15:51:40.389871 2674 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
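Here the first kubelet (PID 2313) is stopped, systemd reloads its units, and a second kubelet (PID 2674) comes up with the same deprecated-flag warnings and immediately reuses the rotated client certificate at /var/lib/kubelet/pki/kubelet-client-current.pem ("Client rotation is on, will bootstrap in background"). A small sketch for inspecting that bundle's validity window, assuming the usual layout of certificate plus private key in one PEM file:

```go
// certinfo.go - print the validity window of the kubelet client certificate
// referenced in the restart above. The path comes straight from the
// "Loading cert/key pair" log entry.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		log.Fatal(err)
	}
	// The file bundles certificate and key; take the first CERTIFICATE block.
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("subject:    %s\nnot before: %s\nnot after:  %s\n",
			cert.Subject, cert.NotBefore, cert.NotAfter)
		return
	}
	log.Fatal("no CERTIFICATE block found")
}
```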
Feb 13 15:51:40.392202 kubelet[2674]: I0213 15:51:40.392180 2674 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:51:40.400061 kubelet[2674]: I0213 15:51:40.400018 2674 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:51:40.400320 kubelet[2674]: I0213 15:51:40.400297 2674 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:51:40.400462 kubelet[2674]: I0213 15:51:40.400440 2674 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:51:40.400539 kubelet[2674]: I0213 15:51:40.400467 2674 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:51:40.400539 kubelet[2674]: I0213 15:51:40.400477 2674 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:51:40.400539 kubelet[2674]: I0213 15:51:40.400513 2674 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:51:40.400615 kubelet[2674]: I0213 15:51:40.400601 2674 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:51:40.400615 kubelet[2674]: I0213 15:51:40.400615 2674 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:51:40.400665 kubelet[2674]: I0213 15:51:40.400638 2674 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:51:40.400665 kubelet[2674]: I0213 15:51:40.400655 2674 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:51:40.401479 kubelet[2674]: I0213 15:51:40.401196 2674 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:51:40.401479 kubelet[2674]: I0213 15:51:40.401354 2674 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:51:40.402487 kubelet[2674]: I0213 15:51:40.402460 2674 server.go:1256] "Started kubelet" Feb 13 15:51:40.402611 kubelet[2674]: I0213 15:51:40.402591 2674 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:51:40.402700 kubelet[2674]: I0213 15:51:40.402676 2674 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:51:40.402944 kubelet[2674]: I0213 15:51:40.402909 2674 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:51:40.404435 kubelet[2674]: I0213 15:51:40.404411 2674 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:51:40.407482 kubelet[2674]: I0213 15:51:40.407458 2674 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:51:40.411075 kubelet[2674]: E0213 15:51:40.411029 2674 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:51:40.411229 kubelet[2674]: I0213 15:51:40.411146 2674 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:51:40.411780 kubelet[2674]: I0213 15:51:40.411495 2674 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 15:51:40.411780 kubelet[2674]: I0213 15:51:40.411775 2674 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:51:40.414218 kubelet[2674]: I0213 15:51:40.414192 2674 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:51:40.414330 kubelet[2674]: I0213 15:51:40.414305 2674 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:51:40.415478 kubelet[2674]: I0213 15:51:40.415455 2674 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:51:40.418279 kubelet[2674]: E0213 15:51:40.418258 2674 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:51:40.420400 kubelet[2674]: I0213 15:51:40.420366 2674 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:51:40.421985 kubelet[2674]: I0213 15:51:40.421968 2674 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:51:40.422101 kubelet[2674]: I0213 15:51:40.422090 2674 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:51:40.422183 kubelet[2674]: I0213 15:51:40.422173 2674 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:51:40.422412 kubelet[2674]: E0213 15:51:40.422367 2674 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:51:40.445349 kubelet[2674]: I0213 15:51:40.445312 2674 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:51:40.445349 kubelet[2674]: I0213 15:51:40.445335 2674 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:51:40.445349 kubelet[2674]: I0213 15:51:40.445353 2674 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:51:40.445533 kubelet[2674]: I0213 15:51:40.445508 2674 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:51:40.445533 kubelet[2674]: I0213 15:51:40.445532 2674 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:51:40.445583 kubelet[2674]: I0213 15:51:40.445541 2674 policy_none.go:49] "None policy: Start" Feb 13 15:51:40.446087 kubelet[2674]: I0213 15:51:40.446040 2674 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:51:40.446123 kubelet[2674]: I0213 15:51:40.446090 2674 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:51:40.446266 kubelet[2674]: I0213 15:51:40.446243 2674 state_mem.go:75] "Updated machine memory state" Feb 13 15:51:40.450395 kubelet[2674]: I0213 15:51:40.450380 2674 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:51:40.450672 kubelet[2674]: I0213 15:51:40.450610 2674 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:51:40.518535 kubelet[2674]: I0213 15:51:40.518501 2674 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:51:40.523185 kubelet[2674]: I0213 15:51:40.523149 2674 topology_manager.go:215] "Topology Admit Handler" podUID="89c52773ed476010b52954bb0414086a" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:51:40.523259 kubelet[2674]: I0213 15:51:40.523235 2674 topology_manager.go:215] "Topology Admit Handler" podUID="8dd79284f50d348595750c57a6b03620" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:51:40.523302 kubelet[2674]: I0213 15:51:40.523284 2674 topology_manager.go:215] "Topology Admit Handler" podUID="34a43d8200b04e3b81251db6a65bc0ce" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 15:51:40.713397 kubelet[2674]: I0213 15:51:40.713026 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89c52773ed476010b52954bb0414086a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"89c52773ed476010b52954bb0414086a\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:51:40.713397 kubelet[2674]: I0213 15:51:40.713096 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:51:40.713397 kubelet[2674]: I0213 15:51:40.713116 2674 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:51:40.713397 kubelet[2674]: I0213 15:51:40.713137 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:51:40.713397 kubelet[2674]: I0213 15:51:40.713156 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34a43d8200b04e3b81251db6a65bc0ce-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"34a43d8200b04e3b81251db6a65bc0ce\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:51:40.713611 kubelet[2674]: I0213 15:51:40.713174 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89c52773ed476010b52954bb0414086a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"89c52773ed476010b52954bb0414086a\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:51:40.713611 kubelet[2674]: I0213 15:51:40.713193 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89c52773ed476010b52954bb0414086a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"89c52773ed476010b52954bb0414086a\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:51:40.713611 kubelet[2674]: I0213 15:51:40.713224 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:51:40.713611 kubelet[2674]: I0213 15:51:40.713243 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:51:41.020223 kubelet[2674]: E0213 15:51:41.020171 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:51:41.029866 kubelet[2674]: I0213 15:51:41.029267 2674 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 15:51:41.029866 kubelet[2674]: I0213 15:51:41.029384 2674 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 15:51:41.029866 kubelet[2674]: E0213 15:51:41.029284 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:51:41.029866 kubelet[2674]: E0213 
15:51:41.029737 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:51:41.401237 kubelet[2674]: I0213 15:51:41.401121 2674 apiserver.go:52] "Watching apiserver" Feb 13 15:51:41.412797 kubelet[2674]: I0213 15:51:41.412741 2674 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:51:41.432291 kubelet[2674]: E0213 15:51:41.432258 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:51:41.432995 kubelet[2674]: E0213 15:51:41.432617 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:51:41.447921 kubelet[2674]: E0213 15:51:41.447352 2674 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 15:51:41.447921 kubelet[2674]: E0213 15:51:41.447853 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:51:41.533683 kubelet[2674]: I0213 15:51:41.533642 2674 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.533596042 podStartE2EDuration="1.533596042s" podCreationTimestamp="2025-02-13 15:51:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:51:41.523376147 +0000 UTC m=+1.178751856" watchObservedRunningTime="2025-02-13 15:51:41.533596042 +0000 UTC m=+1.188971751" Feb 13 15:51:41.540916 kubelet[2674]: I0213 15:51:41.540784 2674 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.540749161 podStartE2EDuration="1.540749161s" podCreationTimestamp="2025-02-13 15:51:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:51:41.533788534 +0000 UTC m=+1.189164243" watchObservedRunningTime="2025-02-13 15:51:41.540749161 +0000 UTC m=+1.196124870" Feb 13 15:51:41.547935 kubelet[2674]: I0213 15:51:41.547905 2674 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.547874117 podStartE2EDuration="1.547874117s" podCreationTimestamp="2025-02-13 15:51:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:51:41.54127502 +0000 UTC m=+1.196650729" watchObservedRunningTime="2025-02-13 15:51:41.547874117 +0000 UTC m=+1.203249826" Feb 13 15:51:42.433005 kubelet[2674]: E0213 15:51:42.432962 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:51:42.979183 update_engine[1472]: I20250213 15:51:42.979097 1472 update_attempter.cc:509] Updating boot flags... 
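The pod_startup_latency_tracker entries report podStartSLOduration for the three control-plane pods; with both image-pull timestamps at the zero time here, the figure is simply observedRunningTime minus podCreationTimestamp. Recomputing it from the values in the kube-apiserver-localhost entry above:

```go
// startup_duration.go - recompute the podStartSLOduration reported in the
// pod_startup_latency_tracker entries from their own timestamps.
// The two timestamps are copied from the kube-apiserver-localhost entry.
package main

import (
	"fmt"
	"time"
)

// Layout matching the Go time.String() format used in the log.
const layout = "2006-01-02 15:04:05 -0700 MST"

func main() {
	created, err := time.Parse(layout, "2025-02-13 15:51:40 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-02-13 15:51:41.533596042 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 1.533596042s, the same figure logged as podStartSLOduration
	// (no image pulls happened, so nothing is subtracted here).
	fmt.Println(running.Sub(created))
}
```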
Feb 13 15:51:43.043398 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2729) Feb 13 15:51:43.120085 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2729) Feb 13 15:51:43.434186 kubelet[2674]: E0213 15:51:43.434160 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:51:45.878650 kubelet[2674]: E0213 15:51:45.877323 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:51:46.181039 sudo[1682]: pam_unix(sudo:session): session closed for user root Feb 13 15:51:46.187217 sshd[1681]: Connection closed by 10.0.0.1 port 42206 Feb 13 15:51:46.187790 sshd-session[1679]: pam_unix(sshd:session): session closed for user core Feb 13 15:51:46.193494 systemd[1]: sshd@8-10.0.0.80:22-10.0.0.1:42206.service: Deactivated successfully. Feb 13 15:51:46.197335 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:51:46.197567 systemd[1]: session-9.scope: Consumed 5.309s CPU time, 189.4M memory peak, 0B memory swap peak. Feb 13 15:51:46.202303 systemd-logind[1471]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:51:46.203951 systemd-logind[1471]: Removed session 9. Feb 13 15:51:46.332321 kubelet[2674]: E0213 15:51:46.331657 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:51:46.445180 kubelet[2674]: E0213 15:51:46.444667 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:51:46.449142 kubelet[2674]: E0213 15:51:46.449080 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:51:47.446175 kubelet[2674]: E0213 15:51:47.446142 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:51:50.599500 kubelet[2674]: E0213 15:51:50.599444 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:51:51.451220 kubelet[2674]: E0213 15:51:51.451085 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:51:54.615175 kubelet[2674]: I0213 15:51:54.615146 2674 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:51:54.615595 containerd[1485]: time="2025-02-13T15:51:54.615497232Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
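With the node object now registered, the kubelet learns its per-node pod CIDR and pushes it to the runtime: the kuberuntime_manager entry above ("Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24") corresponds to the CRI UpdateRuntimeConfig call, containerd notes it still has no CNI config to template, and the "Updating Pod CIDR" line that follows records the same value on the kubelet side. A quick sketch using only the CIDR value shown in the log:

```go
// podcidr.go - inspect the per-node pod CIDR the kubelet just pushed to the
// runtime; the CIDR is the one from this log, everything else is illustrative.
package main

import (
	"fmt"
	"net"
)

func main() {
	ip, ipnet, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	// A /24 gives this node 256 addresses for the CNI plugin to allocate from.
	fmt.Printf("network %s, base %s, %d addresses for pod IPs on this node\n",
		ipnet, ip, 1<<(bits-ones))
}
```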
Feb 13 15:51:54.615859 kubelet[2674]: I0213 15:51:54.615676 2674 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:51:55.232087 kubelet[2674]: I0213 15:51:55.231385 2674 topology_manager.go:215] "Topology Admit Handler" podUID="1696cb37-ef5c-4c1e-bfba-d2dbe26553cf" podNamespace="kube-system" podName="kube-proxy-klxf8" Feb 13 15:51:55.243269 systemd[1]: Created slice kubepods-besteffort-pod1696cb37_ef5c_4c1e_bfba_d2dbe26553cf.slice - libcontainer container kubepods-besteffort-pod1696cb37_ef5c_4c1e_bfba_d2dbe26553cf.slice. Feb 13 15:51:55.372479 kubelet[2674]: I0213 15:51:55.372426 2674 topology_manager.go:215] "Topology Admit Handler" podUID="2670704c-9a2e-497f-a0ca-fcd50361431b" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-kz2ws" Feb 13 15:51:55.378137 systemd[1]: Created slice kubepods-besteffort-pod2670704c_9a2e_497f_a0ca_fcd50361431b.slice - libcontainer container kubepods-besteffort-pod2670704c_9a2e_497f_a0ca_fcd50361431b.slice. Feb 13 15:51:55.421706 kubelet[2674]: I0213 15:51:55.421664 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1696cb37-ef5c-4c1e-bfba-d2dbe26553cf-xtables-lock\") pod \"kube-proxy-klxf8\" (UID: \"1696cb37-ef5c-4c1e-bfba-d2dbe26553cf\") " pod="kube-system/kube-proxy-klxf8" Feb 13 15:51:55.421706 kubelet[2674]: I0213 15:51:55.421708 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1696cb37-ef5c-4c1e-bfba-d2dbe26553cf-lib-modules\") pod \"kube-proxy-klxf8\" (UID: \"1696cb37-ef5c-4c1e-bfba-d2dbe26553cf\") " pod="kube-system/kube-proxy-klxf8" Feb 13 15:51:55.421706 kubelet[2674]: I0213 15:51:55.421738 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1696cb37-ef5c-4c1e-bfba-d2dbe26553cf-kube-proxy\") pod \"kube-proxy-klxf8\" (UID: \"1696cb37-ef5c-4c1e-bfba-d2dbe26553cf\") " pod="kube-system/kube-proxy-klxf8" Feb 13 15:51:55.421967 kubelet[2674]: I0213 15:51:55.421758 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhsf8\" (UniqueName: \"kubernetes.io/projected/1696cb37-ef5c-4c1e-bfba-d2dbe26553cf-kube-api-access-qhsf8\") pod \"kube-proxy-klxf8\" (UID: \"1696cb37-ef5c-4c1e-bfba-d2dbe26553cf\") " pod="kube-system/kube-proxy-klxf8" Feb 13 15:51:55.522446 kubelet[2674]: I0213 15:51:55.522135 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58zz2\" (UniqueName: \"kubernetes.io/projected/2670704c-9a2e-497f-a0ca-fcd50361431b-kube-api-access-58zz2\") pod \"tigera-operator-c7ccbd65-kz2ws\" (UID: \"2670704c-9a2e-497f-a0ca-fcd50361431b\") " pod="tigera-operator/tigera-operator-c7ccbd65-kz2ws" Feb 13 15:51:55.522446 kubelet[2674]: I0213 15:51:55.522212 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2670704c-9a2e-497f-a0ca-fcd50361431b-var-lib-calico\") pod \"tigera-operator-c7ccbd65-kz2ws\" (UID: \"2670704c-9a2e-497f-a0ca-fcd50361431b\") " pod="tigera-operator/tigera-operator-c7ccbd65-kz2ws" Feb 13 15:51:55.554848 kubelet[2674]: E0213 15:51:55.554821 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:51:55.555477 containerd[1485]: time="2025-02-13T15:51:55.555277825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-klxf8,Uid:1696cb37-ef5c-4c1e-bfba-d2dbe26553cf,Namespace:kube-system,Attempt:0,}" Feb 13 15:51:55.982141 containerd[1485]: time="2025-02-13T15:51:55.981987278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-kz2ws,Uid:2670704c-9a2e-497f-a0ca-fcd50361431b,Namespace:tigera-operator,Attempt:0,}" Feb 13 15:51:56.224588 containerd[1485]: time="2025-02-13T15:51:56.224494197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:51:56.224588 containerd[1485]: time="2025-02-13T15:51:56.224557616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:51:56.224588 containerd[1485]: time="2025-02-13T15:51:56.224571022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:51:56.224764 containerd[1485]: time="2025-02-13T15:51:56.224663094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:51:56.250220 systemd[1]: Started cri-containerd-e1ef0dabef1005fc3433010fa5afcc216eb0d1efb831ca5878fdb7a2b47c36e6.scope - libcontainer container e1ef0dabef1005fc3433010fa5afcc216eb0d1efb831ca5878fdb7a2b47c36e6. Feb 13 15:51:56.273018 containerd[1485]: time="2025-02-13T15:51:56.272949485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-klxf8,Uid:1696cb37-ef5c-4c1e-bfba-d2dbe26553cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1ef0dabef1005fc3433010fa5afcc216eb0d1efb831ca5878fdb7a2b47c36e6\"" Feb 13 15:51:56.275130 kubelet[2674]: E0213 15:51:56.274648 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:51:56.276938 containerd[1485]: time="2025-02-13T15:51:56.276904242Z" level=info msg="CreateContainer within sandbox \"e1ef0dabef1005fc3433010fa5afcc216eb0d1efb831ca5878fdb7a2b47c36e6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:51:56.739187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3702211992.mount: Deactivated successfully. Feb 13 15:51:56.809880 containerd[1485]: time="2025-02-13T15:51:56.809755496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:51:56.809880 containerd[1485]: time="2025-02-13T15:51:56.809832892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:51:56.809880 containerd[1485]: time="2025-02-13T15:51:56.809845596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:51:56.810126 containerd[1485]: time="2025-02-13T15:51:56.809931277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:51:56.832210 systemd[1]: Started cri-containerd-e462c756a37c62f3b668676207af373236468413814f81235e9e34c639dab6ec.scope - libcontainer container e462c756a37c62f3b668676207af373236468413814f81235e9e34c639dab6ec. Feb 13 15:51:56.869919 containerd[1485]: time="2025-02-13T15:51:56.869870319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-kz2ws,Uid:2670704c-9a2e-497f-a0ca-fcd50361431b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e462c756a37c62f3b668676207af373236468413814f81235e9e34c639dab6ec\"" Feb 13 15:51:56.871555 containerd[1485]: time="2025-02-13T15:51:56.871531228Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 15:51:56.884478 containerd[1485]: time="2025-02-13T15:51:56.884416784Z" level=info msg="CreateContainer within sandbox \"e1ef0dabef1005fc3433010fa5afcc216eb0d1efb831ca5878fdb7a2b47c36e6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c74d3e9297f0a1c3129a7f372d1889ff2d9b8846a0e9b1f4dc5a7fdc7529aa07\"" Feb 13 15:51:56.884987 containerd[1485]: time="2025-02-13T15:51:56.884958282Z" level=info msg="StartContainer for \"c74d3e9297f0a1c3129a7f372d1889ff2d9b8846a0e9b1f4dc5a7fdc7529aa07\"" Feb 13 15:51:56.919822 systemd[1]: Started cri-containerd-c74d3e9297f0a1c3129a7f372d1889ff2d9b8846a0e9b1f4dc5a7fdc7529aa07.scope - libcontainer container c74d3e9297f0a1c3129a7f372d1889ff2d9b8846a0e9b1f4dc5a7fdc7529aa07. Feb 13 15:51:56.989673 containerd[1485]: time="2025-02-13T15:51:56.989538411Z" level=info msg="StartContainer for \"c74d3e9297f0a1c3129a7f372d1889ff2d9b8846a0e9b1f4dc5a7fdc7529aa07\" returns successfully" Feb 13 15:51:57.464632 kubelet[2674]: E0213 15:51:57.464600 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:51:58.466109 kubelet[2674]: E0213 15:51:58.466079 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:01.181525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4176977478.mount: Deactivated successfully. 
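The containerd entries above trace the usual CRI lifecycle for the kube-proxy pod: `RunPodSandbox` creates the sandbox (the long hex IDs), `CreateContainer` places the kube-proxy container inside it, and `StartContainer` runs it; the `loading plugin "io.containerd.*"` lines are the runc shim being wired up per sandbox. A compressed sketch of that call sequence against the CRI v1 API, reusing the same client setup as the UpdateRuntimeConfig sketch above; the pod metadata is copied from the log, while the image reference is a placeholder I introduced:

```go
package crisketch

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// runKubeProxyPod walks the sequence the containerd log shows:
// RunPodSandbox -> CreateContainer -> StartContainer.
func runKubeProxyPod(ctx context.Context, rt runtimeapi.RuntimeServiceClient) error {
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-klxf8",
			Namespace: "kube-system",
			Uid:       "1696cb37-ef5c-4c1e-bfba-d2dbe26553cf",
			Attempt:   0,
		},
	}

	// Create the pod sandbox; containerd returns its sandbox id.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		return err
	}

	// Create the kube-proxy container inside that sandbox.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
			// Placeholder image reference; the real one is resolved by kubelet.
			Image: &runtimeapi.ImageSpec{Image: "example.invalid/kube-proxy:placeholder"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		return err
	}

	// Start it; on success containerd logs "StartContainer ... returns successfully".
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
	return err
}
```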
Feb 13 15:52:03.337810 containerd[1485]: time="2025-02-13T15:52:03.337756545Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:03.374955 containerd[1485]: time="2025-02-13T15:52:03.374883321Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 13 15:52:03.383761 containerd[1485]: time="2025-02-13T15:52:03.383718887Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:03.397298 containerd[1485]: time="2025-02-13T15:52:03.397253666Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:03.398052 containerd[1485]: time="2025-02-13T15:52:03.398014153Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 6.52645379s" Feb 13 15:52:03.398103 containerd[1485]: time="2025-02-13T15:52:03.398061272Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 13 15:52:03.399583 containerd[1485]: time="2025-02-13T15:52:03.399557651Z" level=info msg="CreateContainer within sandbox \"e462c756a37c62f3b668676207af373236468413814f81235e9e34c639dab6ec\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 15:52:03.793041 containerd[1485]: time="2025-02-13T15:52:03.792984646Z" level=info msg="CreateContainer within sandbox \"e462c756a37c62f3b668676207af373236468413814f81235e9e34c639dab6ec\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5c5f339b9196b6dc2f3bfe56e168d32dd87614f467eda45a373476367d9084b4\"" Feb 13 15:52:03.793556 containerd[1485]: time="2025-02-13T15:52:03.793499252Z" level=info msg="StartContainer for \"5c5f339b9196b6dc2f3bfe56e168d32dd87614f467eda45a373476367d9084b4\"" Feb 13 15:52:03.832371 systemd[1]: Started cri-containerd-5c5f339b9196b6dc2f3bfe56e168d32dd87614f467eda45a373476367d9084b4.scope - libcontainer container 5c5f339b9196b6dc2f3bfe56e168d32dd87614f467eda45a373476367d9084b4. 
Feb 13 15:52:03.860864 containerd[1485]: time="2025-02-13T15:52:03.860818588Z" level=info msg="StartContainer for \"5c5f339b9196b6dc2f3bfe56e168d32dd87614f467eda45a373476367d9084b4\" returns successfully" Feb 13 15:52:04.483458 kubelet[2674]: I0213 15:52:04.483421 2674 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-klxf8" podStartSLOduration=9.483376725 podStartE2EDuration="9.483376725s" podCreationTimestamp="2025-02-13 15:51:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:51:57.495139298 +0000 UTC m=+17.150515007" watchObservedRunningTime="2025-02-13 15:52:04.483376725 +0000 UTC m=+24.138752434" Feb 13 15:52:07.174178 kubelet[2674]: I0213 15:52:07.174125 2674 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-kz2ws" podStartSLOduration=5.646693194 podStartE2EDuration="12.174067853s" podCreationTimestamp="2025-02-13 15:51:55 +0000 UTC" firstStartedPulling="2025-02-13 15:51:56.870950799 +0000 UTC m=+16.526326508" lastFinishedPulling="2025-02-13 15:52:03.398325458 +0000 UTC m=+23.053701167" observedRunningTime="2025-02-13 15:52:04.483741188 +0000 UTC m=+24.139116897" watchObservedRunningTime="2025-02-13 15:52:07.174067853 +0000 UTC m=+26.829443562" Feb 13 15:52:07.174835 kubelet[2674]: I0213 15:52:07.174368 2674 topology_manager.go:215] "Topology Admit Handler" podUID="3d252919-2be3-4278-a68e-d1694f8533b4" podNamespace="calico-system" podName="calico-typha-55567fb459-xb5bj" Feb 13 15:52:07.187990 systemd[1]: Created slice kubepods-besteffort-pod3d252919_2be3_4278_a68e_d1694f8533b4.slice - libcontainer container kubepods-besteffort-pod3d252919_2be3_4278_a68e_d1694f8533b4.slice. Feb 13 15:52:07.221808 kubelet[2674]: I0213 15:52:07.221752 2674 topology_manager.go:215] "Topology Admit Handler" podUID="c2ab9614-fb1f-4ee9-aca7-e9160de27798" podNamespace="calico-system" podName="calico-node-zpbbd" Feb 13 15:52:07.232580 systemd[1]: Created slice kubepods-besteffort-podc2ab9614_fb1f_4ee9_aca7_e9160de27798.slice - libcontainer container kubepods-besteffort-podc2ab9614_fb1f_4ee9_aca7_e9160de27798.slice. 
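The two `pod_startup_latency_tracker.go:102` entries above make the accounting visible: `podStartE2EDuration` is watchObservedRunningTime minus podCreationTimestamp, while `podStartSLOduration` additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). That is why the static control-plane pods, with zero pull timestamps, report identical values, and tigera-operator reports 12.17s end-to-end but only 5.65s against the SLO. A small Go check of that arithmetic using the timestamps from the tigera-operator entry; the formula is my reading of these log values rather than a quote of the kubelet source:

```go
package main

import (
	"fmt"
	"time"
)

// mustParse parses timestamps in the format the kubelet log uses.
func mustParse(value string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", value)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the tigera-operator startup-latency entry above.
	created := mustParse("2025-02-13 15:51:55 +0000 UTC")
	firstPull := mustParse("2025-02-13 15:51:56.870950799 +0000 UTC")
	lastPull := mustParse("2025-02-13 15:52:03.398325458 +0000 UTC")
	observed := mustParse("2025-02-13 15:52:07.174067853 +0000 UTC")

	e2e := observed.Sub(created)
	slo := e2e - lastPull.Sub(firstPull) // end-to-end minus the image-pull window

	fmt.Println("podStartE2EDuration:", e2e) // 12.174067853s
	fmt.Println("podStartSLOduration:", slo) // 5.646693194s
}
```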
Feb 13 15:52:07.302929 kubelet[2674]: I0213 15:52:07.302871 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d252919-2be3-4278-a68e-d1694f8533b4-tigera-ca-bundle\") pod \"calico-typha-55567fb459-xb5bj\" (UID: \"3d252919-2be3-4278-a68e-d1694f8533b4\") " pod="calico-system/calico-typha-55567fb459-xb5bj" Feb 13 15:52:07.302929 kubelet[2674]: I0213 15:52:07.302929 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3d252919-2be3-4278-a68e-d1694f8533b4-typha-certs\") pod \"calico-typha-55567fb459-xb5bj\" (UID: \"3d252919-2be3-4278-a68e-d1694f8533b4\") " pod="calico-system/calico-typha-55567fb459-xb5bj" Feb 13 15:52:07.303115 kubelet[2674]: I0213 15:52:07.302949 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwzcl\" (UniqueName: \"kubernetes.io/projected/3d252919-2be3-4278-a68e-d1694f8533b4-kube-api-access-gwzcl\") pod \"calico-typha-55567fb459-xb5bj\" (UID: \"3d252919-2be3-4278-a68e-d1694f8533b4\") " pod="calico-system/calico-typha-55567fb459-xb5bj" Feb 13 15:52:07.329817 kubelet[2674]: I0213 15:52:07.329775 2674 topology_manager.go:215] "Topology Admit Handler" podUID="10d7d66d-1867-4427-ba49-4c93c2b786fc" podNamespace="calico-system" podName="csi-node-driver-g6vd2" Feb 13 15:52:07.330128 kubelet[2674]: E0213 15:52:07.330109 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g6vd2" podUID="10d7d66d-1867-4427-ba49-4c93c2b786fc" Feb 13 15:52:07.403787 kubelet[2674]: I0213 15:52:07.403749 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2ab9614-fb1f-4ee9-aca7-e9160de27798-xtables-lock\") pod \"calico-node-zpbbd\" (UID: \"c2ab9614-fb1f-4ee9-aca7-e9160de27798\") " pod="calico-system/calico-node-zpbbd" Feb 13 15:52:07.403787 kubelet[2674]: I0213 15:52:07.403790 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c2ab9614-fb1f-4ee9-aca7-e9160de27798-cni-log-dir\") pod \"calico-node-zpbbd\" (UID: \"c2ab9614-fb1f-4ee9-aca7-e9160de27798\") " pod="calico-system/calico-node-zpbbd" Feb 13 15:52:07.403927 kubelet[2674]: I0213 15:52:07.403810 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2ab9614-fb1f-4ee9-aca7-e9160de27798-tigera-ca-bundle\") pod \"calico-node-zpbbd\" (UID: \"c2ab9614-fb1f-4ee9-aca7-e9160de27798\") " pod="calico-system/calico-node-zpbbd" Feb 13 15:52:07.403927 kubelet[2674]: I0213 15:52:07.403890 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c2ab9614-fb1f-4ee9-aca7-e9160de27798-var-run-calico\") pod \"calico-node-zpbbd\" (UID: \"c2ab9614-fb1f-4ee9-aca7-e9160de27798\") " pod="calico-system/calico-node-zpbbd" Feb 13 15:52:07.404010 kubelet[2674]: I0213 15:52:07.403949 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c2ab9614-fb1f-4ee9-aca7-e9160de27798-cni-net-dir\") pod \"calico-node-zpbbd\" (UID: \"c2ab9614-fb1f-4ee9-aca7-e9160de27798\") " pod="calico-system/calico-node-zpbbd" Feb 13 15:52:07.404010 kubelet[2674]: I0213 15:52:07.403970 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrhmp\" (UniqueName: \"kubernetes.io/projected/c2ab9614-fb1f-4ee9-aca7-e9160de27798-kube-api-access-mrhmp\") pod \"calico-node-zpbbd\" (UID: \"c2ab9614-fb1f-4ee9-aca7-e9160de27798\") " pod="calico-system/calico-node-zpbbd" Feb 13 15:52:07.404074 kubelet[2674]: I0213 15:52:07.404015 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c2ab9614-fb1f-4ee9-aca7-e9160de27798-policysync\") pod \"calico-node-zpbbd\" (UID: \"c2ab9614-fb1f-4ee9-aca7-e9160de27798\") " pod="calico-system/calico-node-zpbbd" Feb 13 15:52:07.404167 kubelet[2674]: I0213 15:52:07.404097 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c2ab9614-fb1f-4ee9-aca7-e9160de27798-cni-bin-dir\") pod \"calico-node-zpbbd\" (UID: \"c2ab9614-fb1f-4ee9-aca7-e9160de27798\") " pod="calico-system/calico-node-zpbbd" Feb 13 15:52:07.404970 kubelet[2674]: I0213 15:52:07.404328 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c2ab9614-fb1f-4ee9-aca7-e9160de27798-flexvol-driver-host\") pod \"calico-node-zpbbd\" (UID: \"c2ab9614-fb1f-4ee9-aca7-e9160de27798\") " pod="calico-system/calico-node-zpbbd" Feb 13 15:52:07.404970 kubelet[2674]: I0213 15:52:07.404387 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c2ab9614-fb1f-4ee9-aca7-e9160de27798-var-lib-calico\") pod \"calico-node-zpbbd\" (UID: \"c2ab9614-fb1f-4ee9-aca7-e9160de27798\") " pod="calico-system/calico-node-zpbbd" Feb 13 15:52:07.404970 kubelet[2674]: I0213 15:52:07.404406 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2ab9614-fb1f-4ee9-aca7-e9160de27798-lib-modules\") pod \"calico-node-zpbbd\" (UID: \"c2ab9614-fb1f-4ee9-aca7-e9160de27798\") " pod="calico-system/calico-node-zpbbd" Feb 13 15:52:07.404970 kubelet[2674]: I0213 15:52:07.404424 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c2ab9614-fb1f-4ee9-aca7-e9160de27798-node-certs\") pod \"calico-node-zpbbd\" (UID: \"c2ab9614-fb1f-4ee9-aca7-e9160de27798\") " pod="calico-system/calico-node-zpbbd" Feb 13 15:52:07.492389 kubelet[2674]: E0213 15:52:07.492345 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:07.492878 containerd[1485]: time="2025-02-13T15:52:07.492814587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55567fb459-xb5bj,Uid:3d252919-2be3-4278-a68e-d1694f8533b4,Namespace:calico-system,Attempt:0,}" Feb 13 15:52:07.505103 kubelet[2674]: I0213 15:52:07.505064 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/10d7d66d-1867-4427-ba49-4c93c2b786fc-registration-dir\") pod \"csi-node-driver-g6vd2\" (UID: \"10d7d66d-1867-4427-ba49-4c93c2b786fc\") " pod="calico-system/csi-node-driver-g6vd2" Feb 13 15:52:07.505103 kubelet[2674]: I0213 15:52:07.505113 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/10d7d66d-1867-4427-ba49-4c93c2b786fc-kubelet-dir\") pod \"csi-node-driver-g6vd2\" (UID: \"10d7d66d-1867-4427-ba49-4c93c2b786fc\") " pod="calico-system/csi-node-driver-g6vd2" Feb 13 15:52:07.505456 kubelet[2674]: I0213 15:52:07.505411 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/10d7d66d-1867-4427-ba49-4c93c2b786fc-varrun\") pod \"csi-node-driver-g6vd2\" (UID: \"10d7d66d-1867-4427-ba49-4c93c2b786fc\") " pod="calico-system/csi-node-driver-g6vd2" Feb 13 15:52:07.505456 kubelet[2674]: I0213 15:52:07.505454 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/10d7d66d-1867-4427-ba49-4c93c2b786fc-socket-dir\") pod \"csi-node-driver-g6vd2\" (UID: \"10d7d66d-1867-4427-ba49-4c93c2b786fc\") " pod="calico-system/csi-node-driver-g6vd2" Feb 13 15:52:07.505620 kubelet[2674]: I0213 15:52:07.505552 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgdh5\" (UniqueName: \"kubernetes.io/projected/10d7d66d-1867-4427-ba49-4c93c2b786fc-kube-api-access-xgdh5\") pod \"csi-node-driver-g6vd2\" (UID: \"10d7d66d-1867-4427-ba49-4c93c2b786fc\") " pod="calico-system/csi-node-driver-g6vd2" Feb 13 15:52:07.506532 kubelet[2674]: E0213 15:52:07.506482 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:07.506532 kubelet[2674]: W0213 15:52:07.506496 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:07.506532 kubelet[2674]: E0213 15:52:07.506534 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:07.506834 kubelet[2674]: E0213 15:52:07.506753 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:07.506834 kubelet[2674]: W0213 15:52:07.506763 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:07.506834 kubelet[2674]: E0213 15:52:07.506777 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:52:07.511034 kubelet[2674]: E0213 15:52:07.510998 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:07.511034 kubelet[2674]: W0213 15:52:07.511026 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:07.511218 kubelet[2674]: E0213 15:52:07.511085 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:07.514884 kubelet[2674]: E0213 15:52:07.514863 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:07.515079 kubelet[2674]: W0213 15:52:07.514983 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:07.515079 kubelet[2674]: E0213 15:52:07.515014 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:07.531736 containerd[1485]: time="2025-02-13T15:52:07.531503587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:52:07.531736 containerd[1485]: time="2025-02-13T15:52:07.531564562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:52:07.531736 containerd[1485]: time="2025-02-13T15:52:07.531577357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:52:07.531736 containerd[1485]: time="2025-02-13T15:52:07.531685690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:52:07.535927 kubelet[2674]: E0213 15:52:07.535900 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:07.537092 containerd[1485]: time="2025-02-13T15:52:07.536913493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zpbbd,Uid:c2ab9614-fb1f-4ee9-aca7-e9160de27798,Namespace:calico-system,Attempt:0,}" Feb 13 15:52:07.556073 systemd[1]: Started cri-containerd-308027013152081e0856d34cdae2f48af09a7fa43a3bd5a8fdd689a51b096325.scope - libcontainer container 308027013152081e0856d34cdae2f48af09a7fa43a3bd5a8fdd689a51b096325. Feb 13 15:52:07.562500 containerd[1485]: time="2025-02-13T15:52:07.562202881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:52:07.562500 containerd[1485]: time="2025-02-13T15:52:07.562263476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:52:07.562500 containerd[1485]: time="2025-02-13T15:52:07.562277723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:52:07.562500 containerd[1485]: time="2025-02-13T15:52:07.562360558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:52:07.591329 systemd[1]: Started cri-containerd-8e9d30cf54d793b42662fb8d74ba7017e3e726a6910ba3b6374ec44ef090e8d9.scope - libcontainer container 8e9d30cf54d793b42662fb8d74ba7017e3e726a6910ba3b6374ec44ef090e8d9. Feb 13 15:52:07.606093 kubelet[2674]: E0213 15:52:07.606023 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:07.606093 kubelet[2674]: W0213 15:52:07.606066 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:07.606296 kubelet[2674]: E0213 15:52:07.606105 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:07.607069 containerd[1485]: time="2025-02-13T15:52:07.606954603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55567fb459-xb5bj,Uid:3d252919-2be3-4278-a68e-d1694f8533b4,Namespace:calico-system,Attempt:0,} returns sandbox id \"308027013152081e0856d34cdae2f48af09a7fa43a3bd5a8fdd689a51b096325\"" Feb 13 15:52:07.607376 kubelet[2674]: E0213 15:52:07.607002 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:07.607376 kubelet[2674]: W0213 15:52:07.607013 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:07.607376 kubelet[2674]: E0213 15:52:07.607072 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:07.607499 kubelet[2674]: E0213 15:52:07.607460 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:07.607499 kubelet[2674]: W0213 15:52:07.607471 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:07.607836 kubelet[2674]: E0213 15:52:07.607695 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:07.607836 kubelet[2674]: E0213 15:52:07.607727 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:07.607836 kubelet[2674]: W0213 15:52:07.607738 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:07.607836 kubelet[2674]: E0213 15:52:07.607752 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:52:07.608249 kubelet[2674]: E0213 15:52:07.608134 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:07.608249 kubelet[2674]: W0213 15:52:07.608147 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:07.608249 kubelet[2674]: E0213 15:52:07.608162 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:07.608621 kubelet[2674]: E0213 15:52:07.608498 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:07.608621 kubelet[2674]: W0213 15:52:07.608511 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:07.608621 kubelet[2674]: E0213 15:52:07.608531 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:07.609123 kubelet[2674]: E0213 15:52:07.608977 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:07.609123 kubelet[2674]: W0213 15:52:07.608989 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:07.609123 kubelet[2674]: E0213 15:52:07.609009 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:07.609512 kubelet[2674]: E0213 15:52:07.609488 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:07.609512 kubelet[2674]: W0213 15:52:07.609503 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:07.609595 kubelet[2674]: E0213 15:52:07.609522 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:07.610095 kubelet[2674]: E0213 15:52:07.609729 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:07.610095 kubelet[2674]: W0213 15:52:07.609742 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:07.610095 kubelet[2674]: E0213 15:52:07.609870 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:52:07.610095 kubelet[2674]: E0213 15:52:07.609888 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:07.610248 kubelet[2674]: E0213 15:52:07.610171 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:07.610248 kubelet[2674]: W0213 15:52:07.610182 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:07.610308 kubelet[2674]: E0213 15:52:07.610258 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:07.611275 kubelet[2674]: E0213 15:52:07.610521 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:07.611275 kubelet[2674]: W0213 15:52:07.610534 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:07.611275 kubelet[2674]: E0213 15:52:07.610648 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:07.611275 kubelet[2674]: E0213 15:52:07.610758 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:07.611275 kubelet[2674]: W0213 15:52:07.610766 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:07.611275 kubelet[2674]: E0213 15:52:07.610850 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:07.611275 kubelet[2674]: E0213 15:52:07.610962 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:07.611275 kubelet[2674]: W0213 15:52:07.610971 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:07.611275 kubelet[2674]: E0213 15:52:07.610984 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:52:07.611629 kubelet[2674]: E0213 15:52:07.611323 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:07.611629 kubelet[2674]: W0213 15:52:07.611336 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:07.611629 kubelet[2674]: E0213 15:52:07.611351 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:07.611859 kubelet[2674]: E0213 15:52:07.611824 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:07.611859 kubelet[2674]: W0213 15:52:07.611836 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:07.611859 kubelet[2674]: E0213 15:52:07.611851 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:07.612990 kubelet[2674]: E0213 15:52:07.612117 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:07.612990 kubelet[2674]: W0213 15:52:07.612127 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:07.612990 kubelet[2674]: E0213 15:52:07.612418 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:07.612990 kubelet[2674]: W0213 15:52:07.612426 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:07.612990 kubelet[2674]: E0213 15:52:07.612693 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:07.612990 kubelet[2674]: W0213 15:52:07.612701 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:07.612990 kubelet[2674]: E0213 15:52:07.612711 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:07.612990 kubelet[2674]: E0213 15:52:07.612842 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:07.612990 kubelet[2674]: E0213 15:52:07.612862 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:52:07.613458 containerd[1485]: time="2025-02-13T15:52:07.611853328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 15:52:07.613502 kubelet[2674]: E0213 15:52:07.613014 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:07.613502 kubelet[2674]: W0213 15:52:07.613024 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:07.613502 kubelet[2674]: E0213 15:52:07.613422 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:07.613502 kubelet[2674]: W0213 15:52:07.613443 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:07.613502 kubelet[2674]: E0213 15:52:07.613462 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:07.613788 kubelet[2674]: E0213 15:52:07.613540 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:07.613936 kubelet[2674]: E0213 15:52:07.613919 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:07.613936 kubelet[2674]: W0213 15:52:07.613931 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:07.614013 kubelet[2674]: E0213 15:52:07.613945 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:07.615774 kubelet[2674]: E0213 15:52:07.614834 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:07.615774 kubelet[2674]: W0213 15:52:07.615106 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:07.615774 kubelet[2674]: E0213 15:52:07.615161 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:07.615774 kubelet[2674]: E0213 15:52:07.615293 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:07.615774 kubelet[2674]: W0213 15:52:07.615301 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:07.615774 kubelet[2674]: E0213 15:52:07.615312 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:52:07.615774 kubelet[2674]: E0213 15:52:07.615481 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:07.615774 kubelet[2674]: W0213 15:52:07.615488 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:07.615774 kubelet[2674]: E0213 15:52:07.615497 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:07.615774 kubelet[2674]: E0213 15:52:07.615683 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:07.616140 kubelet[2674]: W0213 15:52:07.615689 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:07.616140 kubelet[2674]: E0213 15:52:07.615699 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:07.622619 containerd[1485]: time="2025-02-13T15:52:07.622539016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zpbbd,Uid:c2ab9614-fb1f-4ee9-aca7-e9160de27798,Namespace:calico-system,Attempt:0,} returns sandbox id \"8e9d30cf54d793b42662fb8d74ba7017e3e726a6910ba3b6374ec44ef090e8d9\"" Feb 13 15:52:07.623268 kubelet[2674]: E0213 15:52:07.623131 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:07.633708 kubelet[2674]: E0213 15:52:07.633675 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:07.633708 kubelet[2674]: W0213 15:52:07.633690 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:07.633708 kubelet[2674]: E0213 15:52:07.633708 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:09.423298 kubelet[2674]: E0213 15:52:09.423236 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g6vd2" podUID="10d7d66d-1867-4427-ba49-4c93c2b786fc" Feb 13 15:52:11.150258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2979559427.mount: Deactivated successfully. 
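The wall of `driver-call.go` / `plugins.go:730` errors above is kubelet's FlexVolume prober finding the `nodeagent~uds` plugin directory under `/opt/libexec/kubernetes/kubelet-plugins/volume/exec/` before the driver binary exists: calling a missing executable yields empty output, which then fails to parse as the JSON status a FlexVolume `init` call is expected to return, hence "unexpected end of JSON input". On this host the `uds` binary is presumably installed later by calico-node (note the `flexvol-driver-host` host-path volume a few entries up and the `pod2daemon-flexvol` image pulled further down), so the errors are transient. A rough Go sketch of the probe step as I understand the FlexVolume call convention; the expected JSON shape is an assumption on my part:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is the minimal JSON a FlexVolume driver is expected to print
// for "init", e.g. {"status":"Success","capabilities":{"attach":false}}.
// Treated here as an assumption about the call convention.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func probeFlexDriver(path string) (*driverStatus, error) {
	out, err := exec.Command(path, "init").CombinedOutput()
	if err != nil {
		// Missing binary: the command fails before producing any output,
		// matching the "driver call failed ... output: \"\"" warnings above.
		fmt.Printf("driver call failed: %s, args: [init], error: %v, output: %q\n",
			path, err, string(out))
	}
	var st driverStatus
	if jsonErr := json.Unmarshal(out, &st); jsonErr != nil {
		// Empty output -> "unexpected end of JSON input".
		return nil, fmt.Errorf("failed to unmarshal output for command: init: %w", jsonErr)
	}
	return &st, nil
}

func main() {
	if _, err := probeFlexDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"); err != nil {
		fmt.Println(err)
	}
}
```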
Feb 13 15:52:11.423436 kubelet[2674]: E0213 15:52:11.423275 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g6vd2" podUID="10d7d66d-1867-4427-ba49-4c93c2b786fc" Feb 13 15:52:11.579378 systemd[1]: Started sshd@9-10.0.0.80:22-10.0.0.1:34886.service - OpenSSH per-connection server daemon (10.0.0.1:34886). Feb 13 15:52:11.636150 sshd[3220]: Accepted publickey for core from 10.0.0.1 port 34886 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:52:11.637906 sshd-session[3220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:52:11.643251 systemd-logind[1471]: New session 10 of user core. Feb 13 15:52:11.650205 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:52:11.863971 sshd[3226]: Connection closed by 10.0.0.1 port 34886 Feb 13 15:52:11.865346 sshd-session[3220]: pam_unix(sshd:session): session closed for user core Feb 13 15:52:11.868675 systemd[1]: sshd@9-10.0.0.80:22-10.0.0.1:34886.service: Deactivated successfully. Feb 13 15:52:11.871101 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:52:11.872149 systemd-logind[1471]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:52:11.874426 systemd-logind[1471]: Removed session 10. Feb 13 15:52:12.229361 containerd[1485]: time="2025-02-13T15:52:12.229204911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:12.261669 containerd[1485]: time="2025-02-13T15:52:12.261578130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Feb 13 15:52:12.273220 containerd[1485]: time="2025-02-13T15:52:12.273158295Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:12.290676 containerd[1485]: time="2025-02-13T15:52:12.290605000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:12.291355 containerd[1485]: time="2025-02-13T15:52:12.291313049Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 4.67942237s" Feb 13 15:52:12.291395 containerd[1485]: time="2025-02-13T15:52:12.291359045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 15:52:12.295646 containerd[1485]: time="2025-02-13T15:52:12.295620714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 15:52:12.313456 containerd[1485]: time="2025-02-13T15:52:12.312592407Z" level=info msg="CreateContainer within sandbox \"308027013152081e0856d34cdae2f48af09a7fa43a3bd5a8fdd689a51b096325\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 15:52:12.345666 
containerd[1485]: time="2025-02-13T15:52:12.345597153Z" level=info msg="CreateContainer within sandbox \"308027013152081e0856d34cdae2f48af09a7fa43a3bd5a8fdd689a51b096325\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6aebc114d5316d6aa75300a09427529311859879eb96de52d63cee469c65cbb7\"" Feb 13 15:52:12.346241 containerd[1485]: time="2025-02-13T15:52:12.346196408Z" level=info msg="StartContainer for \"6aebc114d5316d6aa75300a09427529311859879eb96de52d63cee469c65cbb7\"" Feb 13 15:52:12.383384 systemd[1]: Started cri-containerd-6aebc114d5316d6aa75300a09427529311859879eb96de52d63cee469c65cbb7.scope - libcontainer container 6aebc114d5316d6aa75300a09427529311859879eb96de52d63cee469c65cbb7. Feb 13 15:52:12.680979 containerd[1485]: time="2025-02-13T15:52:12.680922178Z" level=info msg="StartContainer for \"6aebc114d5316d6aa75300a09427529311859879eb96de52d63cee469c65cbb7\" returns successfully" Feb 13 15:52:12.688167 kubelet[2674]: E0213 15:52:12.688135 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:12.741561 kubelet[2674]: E0213 15:52:12.741525 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.741561 kubelet[2674]: W0213 15:52:12.741554 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.741749 kubelet[2674]: E0213 15:52:12.741578 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:12.741817 kubelet[2674]: E0213 15:52:12.741802 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.741817 kubelet[2674]: W0213 15:52:12.741811 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.741865 kubelet[2674]: E0213 15:52:12.741827 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:12.742061 kubelet[2674]: E0213 15:52:12.742033 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.742061 kubelet[2674]: W0213 15:52:12.742056 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.742126 kubelet[2674]: E0213 15:52:12.742066 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:52:12.742259 kubelet[2674]: E0213 15:52:12.742248 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.742259 kubelet[2674]: W0213 15:52:12.742257 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.742318 kubelet[2674]: E0213 15:52:12.742266 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:12.742463 kubelet[2674]: E0213 15:52:12.742451 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.742463 kubelet[2674]: W0213 15:52:12.742460 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.742527 kubelet[2674]: E0213 15:52:12.742474 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:12.742653 kubelet[2674]: E0213 15:52:12.742643 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.742653 kubelet[2674]: W0213 15:52:12.742651 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.742699 kubelet[2674]: E0213 15:52:12.742660 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:12.742844 kubelet[2674]: E0213 15:52:12.742834 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.742868 kubelet[2674]: W0213 15:52:12.742843 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.742868 kubelet[2674]: E0213 15:52:12.742855 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:12.743057 kubelet[2674]: E0213 15:52:12.743033 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.743057 kubelet[2674]: W0213 15:52:12.743053 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.743104 kubelet[2674]: E0213 15:52:12.743063 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:52:12.743261 kubelet[2674]: E0213 15:52:12.743249 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.743261 kubelet[2674]: W0213 15:52:12.743257 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.743331 kubelet[2674]: E0213 15:52:12.743267 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:12.743458 kubelet[2674]: E0213 15:52:12.743448 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.743458 kubelet[2674]: W0213 15:52:12.743456 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.743503 kubelet[2674]: E0213 15:52:12.743466 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:12.743636 kubelet[2674]: E0213 15:52:12.743626 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.743636 kubelet[2674]: W0213 15:52:12.743635 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.743690 kubelet[2674]: E0213 15:52:12.743645 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:12.743821 kubelet[2674]: E0213 15:52:12.743808 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.743821 kubelet[2674]: W0213 15:52:12.743814 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.743867 kubelet[2674]: E0213 15:52:12.743825 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:12.744062 kubelet[2674]: E0213 15:52:12.744036 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.744062 kubelet[2674]: W0213 15:52:12.744056 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.744122 kubelet[2674]: E0213 15:52:12.744069 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:52:12.744255 kubelet[2674]: E0213 15:52:12.744245 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.744255 kubelet[2674]: W0213 15:52:12.744253 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.744320 kubelet[2674]: E0213 15:52:12.744263 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:12.744447 kubelet[2674]: E0213 15:52:12.744436 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.744447 kubelet[2674]: W0213 15:52:12.744444 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.744497 kubelet[2674]: E0213 15:52:12.744453 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:12.757988 kubelet[2674]: E0213 15:52:12.757947 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.757988 kubelet[2674]: W0213 15:52:12.757969 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.757988 kubelet[2674]: E0213 15:52:12.757989 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:12.758260 kubelet[2674]: E0213 15:52:12.758235 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.758260 kubelet[2674]: W0213 15:52:12.758250 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.758350 kubelet[2674]: E0213 15:52:12.758267 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:12.758591 kubelet[2674]: E0213 15:52:12.758575 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.758591 kubelet[2674]: W0213 15:52:12.758587 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.758685 kubelet[2674]: E0213 15:52:12.758605 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:52:12.758932 kubelet[2674]: E0213 15:52:12.758891 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.758932 kubelet[2674]: W0213 15:52:12.758923 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.759034 kubelet[2674]: E0213 15:52:12.758961 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:12.759316 kubelet[2674]: E0213 15:52:12.759286 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.759371 kubelet[2674]: W0213 15:52:12.759305 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.759371 kubelet[2674]: E0213 15:52:12.759342 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:12.759604 kubelet[2674]: E0213 15:52:12.759574 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.759604 kubelet[2674]: W0213 15:52:12.759587 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.759604 kubelet[2674]: E0213 15:52:12.759604 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:12.759893 kubelet[2674]: E0213 15:52:12.759872 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.759893 kubelet[2674]: W0213 15:52:12.759887 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.759985 kubelet[2674]: E0213 15:52:12.759926 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:12.760152 kubelet[2674]: E0213 15:52:12.760136 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.760152 kubelet[2674]: W0213 15:52:12.760149 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.760576 kubelet[2674]: E0213 15:52:12.760180 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:52:12.760576 kubelet[2674]: E0213 15:52:12.760411 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.760576 kubelet[2674]: W0213 15:52:12.760420 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.760576 kubelet[2674]: E0213 15:52:12.760460 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:12.760719 kubelet[2674]: E0213 15:52:12.760630 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.760719 kubelet[2674]: W0213 15:52:12.760640 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.760719 kubelet[2674]: E0213 15:52:12.760665 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:12.760966 kubelet[2674]: E0213 15:52:12.760941 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.760966 kubelet[2674]: W0213 15:52:12.760962 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.761069 kubelet[2674]: E0213 15:52:12.760979 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:12.761212 kubelet[2674]: E0213 15:52:12.761194 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.761212 kubelet[2674]: W0213 15:52:12.761206 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.761304 kubelet[2674]: E0213 15:52:12.761222 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:12.761446 kubelet[2674]: E0213 15:52:12.761417 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.761446 kubelet[2674]: W0213 15:52:12.761427 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.761446 kubelet[2674]: E0213 15:52:12.761441 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:52:12.761659 kubelet[2674]: E0213 15:52:12.761643 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.761659 kubelet[2674]: W0213 15:52:12.761654 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.761743 kubelet[2674]: E0213 15:52:12.761673 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:12.761907 kubelet[2674]: E0213 15:52:12.761889 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.761907 kubelet[2674]: W0213 15:52:12.761904 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.761974 kubelet[2674]: E0213 15:52:12.761927 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:12.762231 kubelet[2674]: E0213 15:52:12.762215 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.762272 kubelet[2674]: W0213 15:52:12.762241 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.762272 kubelet[2674]: E0213 15:52:12.762259 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:12.762654 kubelet[2674]: E0213 15:52:12.762632 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.762654 kubelet[2674]: W0213 15:52:12.762644 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.762747 kubelet[2674]: E0213 15:52:12.762666 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:12.762887 kubelet[2674]: E0213 15:52:12.762870 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:12.762887 kubelet[2674]: W0213 15:52:12.762882 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:12.762971 kubelet[2674]: E0213 15:52:12.762894 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:52:13.423529 kubelet[2674]: E0213 15:52:13.423492 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g6vd2" podUID="10d7d66d-1867-4427-ba49-4c93c2b786fc" Feb 13 15:52:13.692309 kubelet[2674]: I0213 15:52:13.692177 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:52:13.692924 kubelet[2674]: E0213 15:52:13.692909 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:13.751216 kubelet[2674]: E0213 15:52:13.751182 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.751216 kubelet[2674]: W0213 15:52:13.751201 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.751216 kubelet[2674]: E0213 15:52:13.751219 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:13.751438 kubelet[2674]: E0213 15:52:13.751427 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.751438 kubelet[2674]: W0213 15:52:13.751434 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.751493 kubelet[2674]: E0213 15:52:13.751445 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:13.751655 kubelet[2674]: E0213 15:52:13.751639 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.751655 kubelet[2674]: W0213 15:52:13.751649 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.751741 kubelet[2674]: E0213 15:52:13.751661 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:13.751887 kubelet[2674]: E0213 15:52:13.751866 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.751887 kubelet[2674]: W0213 15:52:13.751877 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.751887 kubelet[2674]: E0213 15:52:13.751887 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:52:13.752113 kubelet[2674]: E0213 15:52:13.752101 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.752113 kubelet[2674]: W0213 15:52:13.752110 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.752194 kubelet[2674]: E0213 15:52:13.752120 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:13.752334 kubelet[2674]: E0213 15:52:13.752313 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.752334 kubelet[2674]: W0213 15:52:13.752323 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.752334 kubelet[2674]: E0213 15:52:13.752334 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:13.752525 kubelet[2674]: E0213 15:52:13.752512 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.752525 kubelet[2674]: W0213 15:52:13.752522 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.752598 kubelet[2674]: E0213 15:52:13.752533 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:13.752740 kubelet[2674]: E0213 15:52:13.752721 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.752740 kubelet[2674]: W0213 15:52:13.752733 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.752839 kubelet[2674]: E0213 15:52:13.752747 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:13.752959 kubelet[2674]: E0213 15:52:13.752944 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.752959 kubelet[2674]: W0213 15:52:13.752955 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.753067 kubelet[2674]: E0213 15:52:13.752967 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:52:13.753186 kubelet[2674]: E0213 15:52:13.753172 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.753186 kubelet[2674]: W0213 15:52:13.753182 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.753265 kubelet[2674]: E0213 15:52:13.753195 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:13.753415 kubelet[2674]: E0213 15:52:13.753401 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.753415 kubelet[2674]: W0213 15:52:13.753411 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.753589 kubelet[2674]: E0213 15:52:13.753424 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:13.753641 kubelet[2674]: E0213 15:52:13.753606 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.753641 kubelet[2674]: W0213 15:52:13.753614 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.753641 kubelet[2674]: E0213 15:52:13.753627 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:13.753837 kubelet[2674]: E0213 15:52:13.753824 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.753837 kubelet[2674]: W0213 15:52:13.753834 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.753919 kubelet[2674]: E0213 15:52:13.753848 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:13.754052 kubelet[2674]: E0213 15:52:13.754028 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.754095 kubelet[2674]: W0213 15:52:13.754038 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.754095 kubelet[2674]: E0213 15:52:13.754070 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:52:13.754257 kubelet[2674]: E0213 15:52:13.754244 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.754257 kubelet[2674]: W0213 15:52:13.754256 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.754356 kubelet[2674]: E0213 15:52:13.754269 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:13.764549 kubelet[2674]: E0213 15:52:13.764525 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.764549 kubelet[2674]: W0213 15:52:13.764540 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.764635 kubelet[2674]: E0213 15:52:13.764554 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:13.764777 kubelet[2674]: E0213 15:52:13.764765 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.764777 kubelet[2674]: W0213 15:52:13.764773 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.764870 kubelet[2674]: E0213 15:52:13.764786 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:13.765001 kubelet[2674]: E0213 15:52:13.764985 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.765001 kubelet[2674]: W0213 15:52:13.764998 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.765110 kubelet[2674]: E0213 15:52:13.765017 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:13.765257 kubelet[2674]: E0213 15:52:13.765244 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.765257 kubelet[2674]: W0213 15:52:13.765254 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.765346 kubelet[2674]: E0213 15:52:13.765269 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:52:13.765482 kubelet[2674]: E0213 15:52:13.765467 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.765524 kubelet[2674]: W0213 15:52:13.765483 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.765524 kubelet[2674]: E0213 15:52:13.765502 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:13.765691 kubelet[2674]: E0213 15:52:13.765678 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.765691 kubelet[2674]: W0213 15:52:13.765688 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.765775 kubelet[2674]: E0213 15:52:13.765704 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:13.765891 kubelet[2674]: E0213 15:52:13.765878 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.765891 kubelet[2674]: W0213 15:52:13.765888 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.765965 kubelet[2674]: E0213 15:52:13.765904 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:13.766136 kubelet[2674]: E0213 15:52:13.766122 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.766136 kubelet[2674]: W0213 15:52:13.766133 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.766327 kubelet[2674]: E0213 15:52:13.766151 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:13.766398 kubelet[2674]: E0213 15:52:13.766384 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.766398 kubelet[2674]: W0213 15:52:13.766395 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.766462 kubelet[2674]: E0213 15:52:13.766408 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:52:13.766602 kubelet[2674]: E0213 15:52:13.766582 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.766602 kubelet[2674]: W0213 15:52:13.766592 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.766678 kubelet[2674]: E0213 15:52:13.766616 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:13.766772 kubelet[2674]: E0213 15:52:13.766760 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.766772 kubelet[2674]: W0213 15:52:13.766769 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.766841 kubelet[2674]: E0213 15:52:13.766799 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:13.766952 kubelet[2674]: E0213 15:52:13.766940 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.766952 kubelet[2674]: W0213 15:52:13.766948 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.767023 kubelet[2674]: E0213 15:52:13.766960 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:13.767146 kubelet[2674]: E0213 15:52:13.767134 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.767146 kubelet[2674]: W0213 15:52:13.767144 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.767210 kubelet[2674]: E0213 15:52:13.767158 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:13.767403 kubelet[2674]: E0213 15:52:13.767389 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.767403 kubelet[2674]: W0213 15:52:13.767397 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.767473 kubelet[2674]: E0213 15:52:13.767412 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:52:13.767589 kubelet[2674]: E0213 15:52:13.767576 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.767589 kubelet[2674]: W0213 15:52:13.767584 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.767657 kubelet[2674]: E0213 15:52:13.767596 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:13.767792 kubelet[2674]: E0213 15:52:13.767778 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.767792 kubelet[2674]: W0213 15:52:13.767789 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.767871 kubelet[2674]: E0213 15:52:13.767807 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:13.767991 kubelet[2674]: E0213 15:52:13.767977 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.767991 kubelet[2674]: W0213 15:52:13.767988 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.768082 kubelet[2674]: E0213 15:52:13.768000 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:52:13.768687 kubelet[2674]: E0213 15:52:13.768664 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:52:13.768687 kubelet[2674]: W0213 15:52:13.768675 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:52:13.768687 kubelet[2674]: E0213 15:52:13.768688 2674 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:52:14.992104 containerd[1485]: time="2025-02-13T15:52:14.992021135Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:14.993442 containerd[1485]: time="2025-02-13T15:52:14.993407287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Feb 13 15:52:14.994764 containerd[1485]: time="2025-02-13T15:52:14.994729107Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:14.997247 containerd[1485]: time="2025-02-13T15:52:14.997198201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:14.997797 containerd[1485]: time="2025-02-13T15:52:14.997769123Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.702117781s" Feb 13 15:52:14.997869 containerd[1485]: time="2025-02-13T15:52:14.997800592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 15:52:15.004623 containerd[1485]: time="2025-02-13T15:52:15.004513701Z" level=info msg="CreateContainer within sandbox \"8e9d30cf54d793b42662fb8d74ba7017e3e726a6910ba3b6374ec44ef090e8d9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 15:52:15.023492 containerd[1485]: time="2025-02-13T15:52:15.023448487Z" level=info msg="CreateContainer within sandbox \"8e9d30cf54d793b42662fb8d74ba7017e3e726a6910ba3b6374ec44ef090e8d9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"db5f380502d2e5a77fd80fe358ab7e1100e6e1d30b8ecb91d1c3ce21a3dea8ca\"" Feb 13 15:52:15.023941 containerd[1485]: time="2025-02-13T15:52:15.023920883Z" level=info msg="StartContainer for \"db5f380502d2e5a77fd80fe358ab7e1100e6e1d30b8ecb91d1c3ce21a3dea8ca\"" Feb 13 15:52:15.057270 systemd[1]: Started cri-containerd-db5f380502d2e5a77fd80fe358ab7e1100e6e1d30b8ecb91d1c3ce21a3dea8ca.scope - libcontainer container db5f380502d2e5a77fd80fe358ab7e1100e6e1d30b8ecb91d1c3ce21a3dea8ca. Feb 13 15:52:15.089415 containerd[1485]: time="2025-02-13T15:52:15.089274998Z" level=info msg="StartContainer for \"db5f380502d2e5a77fd80fe358ab7e1100e6e1d30b8ecb91d1c3ce21a3dea8ca\" returns successfully" Feb 13 15:52:15.101474 systemd[1]: cri-containerd-db5f380502d2e5a77fd80fe358ab7e1100e6e1d30b8ecb91d1c3ce21a3dea8ca.scope: Deactivated successfully. Feb 13 15:52:15.123515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db5f380502d2e5a77fd80fe358ab7e1100e6e1d30b8ecb91d1c3ce21a3dea8ca-rootfs.mount: Deactivated successfully. 
Feb 13 15:52:15.422964 kubelet[2674]: E0213 15:52:15.422825 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g6vd2" podUID="10d7d66d-1867-4427-ba49-4c93c2b786fc" Feb 13 15:52:15.693590 kubelet[2674]: E0213 15:52:15.693475 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:15.990965 kubelet[2674]: I0213 15:52:15.990923 2674 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-55567fb459-xb5bj" podStartSLOduration=4.307089515 podStartE2EDuration="8.990880555s" podCreationTimestamp="2025-02-13 15:52:07 +0000 UTC" firstStartedPulling="2025-02-13 15:52:07.611587248 +0000 UTC m=+27.266962948" lastFinishedPulling="2025-02-13 15:52:12.295378279 +0000 UTC m=+31.950753988" observedRunningTime="2025-02-13 15:52:12.710487287 +0000 UTC m=+32.365862986" watchObservedRunningTime="2025-02-13 15:52:15.990880555 +0000 UTC m=+35.646256264" Feb 13 15:52:16.099996 containerd[1485]: time="2025-02-13T15:52:16.099906799Z" level=info msg="shim disconnected" id=db5f380502d2e5a77fd80fe358ab7e1100e6e1d30b8ecb91d1c3ce21a3dea8ca namespace=k8s.io Feb 13 15:52:16.099996 containerd[1485]: time="2025-02-13T15:52:16.099967934Z" level=warning msg="cleaning up after shim disconnected" id=db5f380502d2e5a77fd80fe358ab7e1100e6e1d30b8ecb91d1c3ce21a3dea8ca namespace=k8s.io Feb 13 15:52:16.099996 containerd[1485]: time="2025-02-13T15:52:16.099979916Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:52:16.696412 kubelet[2674]: E0213 15:52:16.696383 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:16.698086 containerd[1485]: time="2025-02-13T15:52:16.697821355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 15:52:16.876558 systemd[1]: Started sshd@10-10.0.0.80:22-10.0.0.1:39210.service - OpenSSH per-connection server daemon (10.0.0.1:39210). Feb 13 15:52:16.958728 sshd[3444]: Accepted publickey for core from 10.0.0.1 port 39210 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:52:16.960332 sshd-session[3444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:52:16.964685 systemd-logind[1471]: New session 11 of user core. Feb 13 15:52:16.972189 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:52:17.083480 sshd[3446]: Connection closed by 10.0.0.1 port 39210 Feb 13 15:52:17.083845 sshd-session[3444]: pam_unix(sshd:session): session closed for user core Feb 13 15:52:17.087698 systemd[1]: sshd@10-10.0.0.80:22-10.0.0.1:39210.service: Deactivated successfully. Feb 13 15:52:17.089693 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:52:17.090403 systemd-logind[1471]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:52:17.091370 systemd-logind[1471]: Removed session 11. 
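The startup-latency record for calico-typha-55567fb459-xb5bj is internally consistent if podStartSLOduration is read as the end-to-end startup time with the image-pull window subtracted, which is what these figures imply:

    podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp
                        = 15:52:15.990880555 - 15:52:07 = 8.990880555 s
    podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling)
                        = 8.990880555 - (31.950753988 - 27.266962948) = 4.307089515 s

In other words, of the roughly nine seconds from pod creation to the watch observing it Running, about 4.68 s went to pulling the typha image and about 4.31 s counts against the startup SLO.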
Feb 13 15:52:17.423563 kubelet[2674]: E0213 15:52:17.423513 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g6vd2" podUID="10d7d66d-1867-4427-ba49-4c93c2b786fc" Feb 13 15:52:19.422598 kubelet[2674]: E0213 15:52:19.422542 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g6vd2" podUID="10d7d66d-1867-4427-ba49-4c93c2b786fc" Feb 13 15:52:20.169137 kubelet[2674]: I0213 15:52:20.169075 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:52:20.170008 kubelet[2674]: E0213 15:52:20.169966 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:20.702384 kubelet[2674]: E0213 15:52:20.702338 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:21.423309 kubelet[2674]: E0213 15:52:21.423274 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g6vd2" podUID="10d7d66d-1867-4427-ba49-4c93c2b786fc" Feb 13 15:52:22.094814 systemd[1]: Started sshd@11-10.0.0.80:22-10.0.0.1:39222.service - OpenSSH per-connection server daemon (10.0.0.1:39222). Feb 13 15:52:22.177154 sshd[3465]: Accepted publickey for core from 10.0.0.1 port 39222 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:52:22.178571 sshd-session[3465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:52:22.182462 systemd-logind[1471]: New session 12 of user core. Feb 13 15:52:22.191168 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:52:22.426320 sshd[3467]: Connection closed by 10.0.0.1 port 39222 Feb 13 15:52:22.426574 sshd-session[3465]: pam_unix(sshd:session): session closed for user core Feb 13 15:52:22.430023 systemd[1]: sshd@11-10.0.0.80:22-10.0.0.1:39222.service: Deactivated successfully. Feb 13 15:52:22.432167 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:52:22.432866 systemd-logind[1471]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:52:22.433873 systemd-logind[1471]: Removed session 12. 
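The recurring dns.go:153 warning is kubelet enforcing its nameserver cap: it passes at most three nameservers into a pod's resolv.conf, so with more than three configured on the node only 1.1.1.1, 1.0.0.1 and 8.8.8.8 end up on the applied line and the rest are dropped. A standalone way to see exactly what is being trimmed (a sketch, not kubelet code; it assumes the relevant file is /etc/resolv.conf rather than whatever --resolv-conf points kubelet at, and the limit of 3 mirrors kubelet's validation constant):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // kubelet applies at most this many

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Println("open failed:", err)
            return
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("applied: %v, dropped: %v\n", servers[:maxNameservers], servers[maxNameservers:])
        } else {
            fmt.Printf("all %d nameservers fit within the limit\n", len(servers))
        }
    }

Reducing the node's resolver list to three entries, or pointing kubelet at a resolv.conf that has at most three, makes the warning go away.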
Feb 13 15:52:22.754152 containerd[1485]: time="2025-02-13T15:52:22.754103635Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:22.756853 containerd[1485]: time="2025-02-13T15:52:22.756799434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 15:52:22.758450 containerd[1485]: time="2025-02-13T15:52:22.758401560Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:22.760916 containerd[1485]: time="2025-02-13T15:52:22.760888116Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:22.761567 containerd[1485]: time="2025-02-13T15:52:22.761528088Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.063665846s" Feb 13 15:52:22.761567 containerd[1485]: time="2025-02-13T15:52:22.761555068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 15:52:22.763515 containerd[1485]: time="2025-02-13T15:52:22.763482896Z" level=info msg="CreateContainer within sandbox \"8e9d30cf54d793b42662fb8d74ba7017e3e726a6910ba3b6374ec44ef090e8d9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 15:52:22.777248 containerd[1485]: time="2025-02-13T15:52:22.777208414Z" level=info msg="CreateContainer within sandbox \"8e9d30cf54d793b42662fb8d74ba7017e3e726a6910ba3b6374ec44ef090e8d9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"24c18bd1843df69a4e5696aaf326841a7315cfade6ead76474daa6858e45d0c9\"" Feb 13 15:52:22.777645 containerd[1485]: time="2025-02-13T15:52:22.777614966Z" level=info msg="StartContainer for \"24c18bd1843df69a4e5696aaf326841a7315cfade6ead76474daa6858e45d0c9\"" Feb 13 15:52:22.816214 systemd[1]: Started cri-containerd-24c18bd1843df69a4e5696aaf326841a7315cfade6ead76474daa6858e45d0c9.scope - libcontainer container 24c18bd1843df69a4e5696aaf326841a7315cfade6ead76474daa6858e45d0c9. 
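For scale, the cni pull works out to 96,154,154 bytes read over the 6.063665846 s reported for the pull, roughly 16 MB/s from ghcr.io; the 97,647,238 quoted as the image size is the resolved size of the image and need not equal the bytes actually transferred during this particular pull.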
Feb 13 15:52:22.847958 containerd[1485]: time="2025-02-13T15:52:22.847918769Z" level=info msg="StartContainer for \"24c18bd1843df69a4e5696aaf326841a7315cfade6ead76474daa6858e45d0c9\" returns successfully" Feb 13 15:52:23.423219 kubelet[2674]: E0213 15:52:23.423093 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g6vd2" podUID="10d7d66d-1867-4427-ba49-4c93c2b786fc" Feb 13 15:52:23.709092 kubelet[2674]: E0213 15:52:23.708956 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:24.163633 containerd[1485]: time="2025-02-13T15:52:24.163580394Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config" Feb 13 15:52:24.165697 kubelet[2674]: I0213 15:52:24.165663 2674 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:52:24.166666 systemd[1]: cri-containerd-24c18bd1843df69a4e5696aaf326841a7315cfade6ead76474daa6858e45d0c9.scope: Deactivated successfully. Feb 13 15:52:24.185181 kubelet[2674]: I0213 15:52:24.185123 2674 topology_manager.go:215] "Topology Admit Handler" podUID="19512d1a-36c6-49de-8177-c4d469d03fc5" podNamespace="kube-system" podName="coredns-76f75df574-45d4j" Feb 13 15:52:24.189999 kubelet[2674]: I0213 15:52:24.189968 2674 topology_manager.go:215] "Topology Admit Handler" podUID="38b0921d-4d85-4317-86e9-1adbb9d6859a" podNamespace="calico-apiserver" podName="calico-apiserver-7db6857c7b-q5kq7" Feb 13 15:52:24.192804 kubelet[2674]: I0213 15:52:24.191216 2674 topology_manager.go:215] "Topology Admit Handler" podUID="08a0764f-6eaa-4b6b-8f68-f508a36d326a" podNamespace="kube-system" podName="coredns-76f75df574-mlzzh" Feb 13 15:52:24.192804 kubelet[2674]: I0213 15:52:24.191414 2674 topology_manager.go:215] "Topology Admit Handler" podUID="6af1b9f5-51e6-4450-99d8-629fc2031232" podNamespace="calico-system" podName="calico-kube-controllers-68d59db744-jwpsr" Feb 13 15:52:24.192804 kubelet[2674]: I0213 15:52:24.191511 2674 topology_manager.go:215] "Topology Admit Handler" podUID="c19fe500-1919-460e-8572-964852191fc0" podNamespace="calico-apiserver" podName="calico-apiserver-7db6857c7b-lpnmw" Feb 13 15:52:24.200830 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24c18bd1843df69a4e5696aaf326841a7315cfade6ead76474daa6858e45d0c9-rootfs.mount: Deactivated successfully. Feb 13 15:52:24.203580 systemd[1]: Created slice kubepods-burstable-pod19512d1a_36c6_49de_8177_c4d469d03fc5.slice - libcontainer container kubepods-burstable-pod19512d1a_36c6_49de_8177_c4d469d03fc5.slice. Feb 13 15:52:24.211309 systemd[1]: Created slice kubepods-besteffort-pod38b0921d_4d85_4317_86e9_1adbb9d6859a.slice - libcontainer container kubepods-besteffort-pod38b0921d_4d85_4317_86e9_1adbb9d6859a.slice. Feb 13 15:52:24.217391 systemd[1]: Created slice kubepods-burstable-pod08a0764f_6eaa_4b6b_8f68_f508a36d326a.slice - libcontainer container kubepods-burstable-pod08a0764f_6eaa_4b6b_8f68_f508a36d326a.slice. 
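The failed-to-reload error just above, paired with the long string of "cni plugin not initialized" messages for csi-node-driver-g6vd2, indicates that /etc/cni/net.d/10-calico.conflist exists but is not yet complete JSON at the moment the file-watch fires, most likely because the install-cni container started at 15:52:22 has not finished writing it. A quick standalone validity check (a sketch that only tests whether the file parses as a CNI plugin list; it does not interpret Calico's own fields):

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    func main() {
        // Path taken from the reload error above.
        const path = "/etc/cni/net.d/10-calico.conflist"
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Println("read failed:", err)
            return
        }
        var conf struct {
            Name    string           `json:"name"`
            Plugins []map[string]any `json:"plugins"`
        }
        if err := json.Unmarshal(data, &conf); err != nil {
            fmt.Printf("%s is not complete JSON yet (%v); the runtime keeps reporting the CNI plugin as uninitialized\n", path, err)
            return
        }
        fmt.Printf("%s parses: list %q with %d plugin entries\n", path, conf.Name, len(conf.Plugins))
    }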
Feb 13 15:52:24.225642 systemd[1]: Created slice kubepods-besteffort-pod6af1b9f5_51e6_4450_99d8_629fc2031232.slice - libcontainer container kubepods-besteffort-pod6af1b9f5_51e6_4450_99d8_629fc2031232.slice. Feb 13 15:52:24.230553 kubelet[2674]: I0213 15:52:24.229992 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6af1b9f5-51e6-4450-99d8-629fc2031232-tigera-ca-bundle\") pod \"calico-kube-controllers-68d59db744-jwpsr\" (UID: \"6af1b9f5-51e6-4450-99d8-629fc2031232\") " pod="calico-system/calico-kube-controllers-68d59db744-jwpsr" Feb 13 15:52:24.230553 kubelet[2674]: I0213 15:52:24.230020 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp8ds\" (UniqueName: \"kubernetes.io/projected/38b0921d-4d85-4317-86e9-1adbb9d6859a-kube-api-access-sp8ds\") pod \"calico-apiserver-7db6857c7b-q5kq7\" (UID: \"38b0921d-4d85-4317-86e9-1adbb9d6859a\") " pod="calico-apiserver/calico-apiserver-7db6857c7b-q5kq7" Feb 13 15:52:24.230553 kubelet[2674]: I0213 15:52:24.230092 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqnxd\" (UniqueName: \"kubernetes.io/projected/6af1b9f5-51e6-4450-99d8-629fc2031232-kube-api-access-kqnxd\") pod \"calico-kube-controllers-68d59db744-jwpsr\" (UID: \"6af1b9f5-51e6-4450-99d8-629fc2031232\") " pod="calico-system/calico-kube-controllers-68d59db744-jwpsr" Feb 13 15:52:24.230553 kubelet[2674]: I0213 15:52:24.230111 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c19fe500-1919-460e-8572-964852191fc0-calico-apiserver-certs\") pod \"calico-apiserver-7db6857c7b-lpnmw\" (UID: \"c19fe500-1919-460e-8572-964852191fc0\") " pod="calico-apiserver/calico-apiserver-7db6857c7b-lpnmw" Feb 13 15:52:24.230553 kubelet[2674]: I0213 15:52:24.230130 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7464g\" (UniqueName: \"kubernetes.io/projected/c19fe500-1919-460e-8572-964852191fc0-kube-api-access-7464g\") pod \"calico-apiserver-7db6857c7b-lpnmw\" (UID: \"c19fe500-1919-460e-8572-964852191fc0\") " pod="calico-apiserver/calico-apiserver-7db6857c7b-lpnmw" Feb 13 15:52:24.230844 kubelet[2674]: I0213 15:52:24.230148 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgzp2\" (UniqueName: \"kubernetes.io/projected/19512d1a-36c6-49de-8177-c4d469d03fc5-kube-api-access-hgzp2\") pod \"coredns-76f75df574-45d4j\" (UID: \"19512d1a-36c6-49de-8177-c4d469d03fc5\") " pod="kube-system/coredns-76f75df574-45d4j" Feb 13 15:52:24.230844 kubelet[2674]: I0213 15:52:24.230167 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7bxf\" (UniqueName: \"kubernetes.io/projected/08a0764f-6eaa-4b6b-8f68-f508a36d326a-kube-api-access-s7bxf\") pod \"coredns-76f75df574-mlzzh\" (UID: \"08a0764f-6eaa-4b6b-8f68-f508a36d326a\") " pod="kube-system/coredns-76f75df574-mlzzh" Feb 13 15:52:24.230844 kubelet[2674]: I0213 15:52:24.230183 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19512d1a-36c6-49de-8177-c4d469d03fc5-config-volume\") pod \"coredns-76f75df574-45d4j\" (UID: 
\"19512d1a-36c6-49de-8177-c4d469d03fc5\") " pod="kube-system/coredns-76f75df574-45d4j" Feb 13 15:52:24.230844 kubelet[2674]: I0213 15:52:24.230201 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08a0764f-6eaa-4b6b-8f68-f508a36d326a-config-volume\") pod \"coredns-76f75df574-mlzzh\" (UID: \"08a0764f-6eaa-4b6b-8f68-f508a36d326a\") " pod="kube-system/coredns-76f75df574-mlzzh" Feb 13 15:52:24.230844 kubelet[2674]: I0213 15:52:24.230222 2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/38b0921d-4d85-4317-86e9-1adbb9d6859a-calico-apiserver-certs\") pod \"calico-apiserver-7db6857c7b-q5kq7\" (UID: \"38b0921d-4d85-4317-86e9-1adbb9d6859a\") " pod="calico-apiserver/calico-apiserver-7db6857c7b-q5kq7" Feb 13 15:52:24.232261 systemd[1]: Created slice kubepods-besteffort-podc19fe500_1919_460e_8572_964852191fc0.slice - libcontainer container kubepods-besteffort-podc19fe500_1919_460e_8572_964852191fc0.slice. Feb 13 15:52:24.362655 containerd[1485]: time="2025-02-13T15:52:24.362595361Z" level=info msg="shim disconnected" id=24c18bd1843df69a4e5696aaf326841a7315cfade6ead76474daa6858e45d0c9 namespace=k8s.io Feb 13 15:52:24.362655 containerd[1485]: time="2025-02-13T15:52:24.362648450Z" level=warning msg="cleaning up after shim disconnected" id=24c18bd1843df69a4e5696aaf326841a7315cfade6ead76474daa6858e45d0c9 namespace=k8s.io Feb 13 15:52:24.362655 containerd[1485]: time="2025-02-13T15:52:24.362657026Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:52:24.509224 kubelet[2674]: E0213 15:52:24.509178 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:24.510087 containerd[1485]: time="2025-02-13T15:52:24.509968947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-45d4j,Uid:19512d1a-36c6-49de-8177-c4d469d03fc5,Namespace:kube-system,Attempt:0,}" Feb 13 15:52:24.514903 containerd[1485]: time="2025-02-13T15:52:24.514851460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-q5kq7,Uid:38b0921d-4d85-4317-86e9-1adbb9d6859a,Namespace:calico-apiserver,Attempt:0,}" Feb 13 15:52:24.520129 kubelet[2674]: E0213 15:52:24.520104 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:24.520439 containerd[1485]: time="2025-02-13T15:52:24.520410302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mlzzh,Uid:08a0764f-6eaa-4b6b-8f68-f508a36d326a,Namespace:kube-system,Attempt:0,}" Feb 13 15:52:24.534745 containerd[1485]: time="2025-02-13T15:52:24.534686913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d59db744-jwpsr,Uid:6af1b9f5-51e6-4450-99d8-629fc2031232,Namespace:calico-system,Attempt:0,}" Feb 13 15:52:24.535222 containerd[1485]: time="2025-02-13T15:52:24.535175480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-lpnmw,Uid:c19fe500-1919-460e-8572-964852191fc0,Namespace:calico-apiserver,Attempt:0,}" Feb 13 15:52:24.616513 containerd[1485]: time="2025-02-13T15:52:24.616446731Z" level=error msg="Failed to destroy network for sandbox 
\"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.619907 containerd[1485]: time="2025-02-13T15:52:24.619595991Z" level=error msg="encountered an error cleaning up failed sandbox \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.619907 containerd[1485]: time="2025-02-13T15:52:24.619659640Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-q5kq7,Uid:38b0921d-4d85-4317-86e9-1adbb9d6859a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.620114 kubelet[2674]: E0213 15:52:24.619915 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.620114 kubelet[2674]: E0213 15:52:24.619972 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7db6857c7b-q5kq7" Feb 13 15:52:24.620114 kubelet[2674]: E0213 15:52:24.619991 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7db6857c7b-q5kq7" Feb 13 15:52:24.621426 kubelet[2674]: E0213 15:52:24.620803 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7db6857c7b-q5kq7_calico-apiserver(38b0921d-4d85-4317-86e9-1adbb9d6859a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7db6857c7b-q5kq7_calico-apiserver(38b0921d-4d85-4317-86e9-1adbb9d6859a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7db6857c7b-q5kq7" 
podUID="38b0921d-4d85-4317-86e9-1adbb9d6859a" Feb 13 15:52:24.623799 containerd[1485]: time="2025-02-13T15:52:24.623759324Z" level=error msg="Failed to destroy network for sandbox \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.624242 containerd[1485]: time="2025-02-13T15:52:24.624195352Z" level=error msg="encountered an error cleaning up failed sandbox \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.624291 containerd[1485]: time="2025-02-13T15:52:24.624264432Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-45d4j,Uid:19512d1a-36c6-49de-8177-c4d469d03fc5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.624459 kubelet[2674]: E0213 15:52:24.624434 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.624508 kubelet[2674]: E0213 15:52:24.624478 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-45d4j" Feb 13 15:52:24.624508 kubelet[2674]: E0213 15:52:24.624499 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-45d4j" Feb 13 15:52:24.624595 kubelet[2674]: E0213 15:52:24.624543 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-45d4j_kube-system(19512d1a-36c6-49de-8177-c4d469d03fc5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-45d4j_kube-system(19512d1a-36c6-49de-8177-c4d469d03fc5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-76f75df574-45d4j" podUID="19512d1a-36c6-49de-8177-c4d469d03fc5" Feb 13 15:52:24.632186 containerd[1485]: time="2025-02-13T15:52:24.632121967Z" level=error msg="Failed to destroy network for sandbox \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.632711 containerd[1485]: time="2025-02-13T15:52:24.632680485Z" level=error msg="encountered an error cleaning up failed sandbox \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.632916 containerd[1485]: time="2025-02-13T15:52:24.632884227Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mlzzh,Uid:08a0764f-6eaa-4b6b-8f68-f508a36d326a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.633301 kubelet[2674]: E0213 15:52:24.633263 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.633398 kubelet[2674]: E0213 15:52:24.633311 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mlzzh" Feb 13 15:52:24.633398 kubelet[2674]: E0213 15:52:24.633334 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mlzzh" Feb 13 15:52:24.633398 kubelet[2674]: E0213 15:52:24.633385 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-mlzzh_kube-system(08a0764f-6eaa-4b6b-8f68-f508a36d326a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-mlzzh_kube-system(08a0764f-6eaa-4b6b-8f68-f508a36d326a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-mlzzh" podUID="08a0764f-6eaa-4b6b-8f68-f508a36d326a" Feb 13 15:52:24.651790 containerd[1485]: time="2025-02-13T15:52:24.651732076Z" level=error msg="Failed to destroy network for sandbox \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.654502 containerd[1485]: time="2025-02-13T15:52:24.654433816Z" level=error msg="encountered an error cleaning up failed sandbox \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.654618 containerd[1485]: time="2025-02-13T15:52:24.654540868Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d59db744-jwpsr,Uid:6af1b9f5-51e6-4450-99d8-629fc2031232,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.654872 kubelet[2674]: E0213 15:52:24.654830 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.654954 kubelet[2674]: E0213 15:52:24.654882 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d59db744-jwpsr" Feb 13 15:52:24.654954 kubelet[2674]: E0213 15:52:24.654904 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d59db744-jwpsr" Feb 13 15:52:24.655099 kubelet[2674]: E0213 15:52:24.655076 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68d59db744-jwpsr_calico-system(6af1b9f5-51e6-4450-99d8-629fc2031232)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68d59db744-jwpsr_calico-system(6af1b9f5-51e6-4450-99d8-629fc2031232)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68d59db744-jwpsr" podUID="6af1b9f5-51e6-4450-99d8-629fc2031232" Feb 13 15:52:24.657145 containerd[1485]: time="2025-02-13T15:52:24.657110300Z" level=error msg="Failed to destroy network for sandbox \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.657497 containerd[1485]: time="2025-02-13T15:52:24.657465215Z" level=error msg="encountered an error cleaning up failed sandbox \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.657551 containerd[1485]: time="2025-02-13T15:52:24.657525709Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-lpnmw,Uid:c19fe500-1919-460e-8572-964852191fc0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.657742 kubelet[2674]: E0213 15:52:24.657710 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.657784 kubelet[2674]: E0213 15:52:24.657748 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7db6857c7b-lpnmw" Feb 13 15:52:24.657784 kubelet[2674]: E0213 15:52:24.657764 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7db6857c7b-lpnmw" Feb 13 15:52:24.657838 kubelet[2674]: E0213 15:52:24.657813 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7db6857c7b-lpnmw_calico-apiserver(c19fe500-1919-460e-8572-964852191fc0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-7db6857c7b-lpnmw_calico-apiserver(c19fe500-1919-460e-8572-964852191fc0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7db6857c7b-lpnmw" podUID="c19fe500-1919-460e-8572-964852191fc0" Feb 13 15:52:24.711698 kubelet[2674]: I0213 15:52:24.711667 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb" Feb 13 15:52:24.712941 containerd[1485]: time="2025-02-13T15:52:24.712577567Z" level=info msg="StopPodSandbox for \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\"" Feb 13 15:52:24.712941 containerd[1485]: time="2025-02-13T15:52:24.712788904Z" level=info msg="Ensure that sandbox 3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb in task-service has been cleanup successfully" Feb 13 15:52:24.713080 kubelet[2674]: I0213 15:52:24.712698 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282" Feb 13 15:52:24.713159 containerd[1485]: time="2025-02-13T15:52:24.713143359Z" level=info msg="TearDown network for sandbox \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\" successfully" Feb 13 15:52:24.713205 containerd[1485]: time="2025-02-13T15:52:24.713193994Z" level=info msg="StopPodSandbox for \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\" returns successfully" Feb 13 15:52:24.715130 containerd[1485]: time="2025-02-13T15:52:24.714711673Z" level=info msg="StopPodSandbox for \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\"" Feb 13 15:52:24.715130 containerd[1485]: time="2025-02-13T15:52:24.714935402Z" level=info msg="Ensure that sandbox 1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282 in task-service has been cleanup successfully" Feb 13 15:52:24.715279 kubelet[2674]: E0213 15:52:24.715261 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:24.715487 containerd[1485]: time="2025-02-13T15:52:24.715420613Z" level=info msg="TearDown network for sandbox \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\" successfully" Feb 13 15:52:24.715487 containerd[1485]: time="2025-02-13T15:52:24.715436323Z" level=info msg="StopPodSandbox for \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\" returns successfully" Feb 13 15:52:24.716193 containerd[1485]: time="2025-02-13T15:52:24.715661094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mlzzh,Uid:08a0764f-6eaa-4b6b-8f68-f508a36d326a,Namespace:kube-system,Attempt:1,}" Feb 13 15:52:24.716811 containerd[1485]: time="2025-02-13T15:52:24.716597401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-q5kq7,Uid:38b0921d-4d85-4317-86e9-1adbb9d6859a,Namespace:calico-apiserver,Attempt:1,}" Feb 13 15:52:24.716965 kubelet[2674]: I0213 15:52:24.716942 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b" Feb 13 
15:52:24.717635 containerd[1485]: time="2025-02-13T15:52:24.717588310Z" level=info msg="StopPodSandbox for \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\"" Feb 13 15:52:24.717821 containerd[1485]: time="2025-02-13T15:52:24.717803094Z" level=info msg="Ensure that sandbox 3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b in task-service has been cleanup successfully" Feb 13 15:52:24.718069 containerd[1485]: time="2025-02-13T15:52:24.718004872Z" level=info msg="TearDown network for sandbox \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\" successfully" Feb 13 15:52:24.718069 containerd[1485]: time="2025-02-13T15:52:24.718030070Z" level=info msg="StopPodSandbox for \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\" returns successfully" Feb 13 15:52:24.718813 containerd[1485]: time="2025-02-13T15:52:24.718788864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-lpnmw,Uid:c19fe500-1919-460e-8572-964852191fc0,Namespace:calico-apiserver,Attempt:1,}" Feb 13 15:52:24.719022 kubelet[2674]: I0213 15:52:24.718995 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df" Feb 13 15:52:24.719548 containerd[1485]: time="2025-02-13T15:52:24.719511490Z" level=info msg="StopPodSandbox for \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\"" Feb 13 15:52:24.719713 containerd[1485]: time="2025-02-13T15:52:24.719672923Z" level=info msg="Ensure that sandbox 99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df in task-service has been cleanup successfully" Feb 13 15:52:24.719908 kubelet[2674]: I0213 15:52:24.719858 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895" Feb 13 15:52:24.719957 containerd[1485]: time="2025-02-13T15:52:24.719858400Z" level=info msg="TearDown network for sandbox \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\" successfully" Feb 13 15:52:24.719957 containerd[1485]: time="2025-02-13T15:52:24.719877266Z" level=info msg="StopPodSandbox for \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\" returns successfully" Feb 13 15:52:24.720402 containerd[1485]: time="2025-02-13T15:52:24.720343762Z" level=info msg="StopPodSandbox for \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\"" Feb 13 15:52:24.720548 containerd[1485]: time="2025-02-13T15:52:24.720527126Z" level=info msg="Ensure that sandbox 84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895 in task-service has been cleanup successfully" Feb 13 15:52:24.720594 containerd[1485]: time="2025-02-13T15:52:24.720556410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d59db744-jwpsr,Uid:6af1b9f5-51e6-4450-99d8-629fc2031232,Namespace:calico-system,Attempt:1,}" Feb 13 15:52:24.720699 containerd[1485]: time="2025-02-13T15:52:24.720681195Z" level=info msg="TearDown network for sandbox \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\" successfully" Feb 13 15:52:24.720699 containerd[1485]: time="2025-02-13T15:52:24.720696794Z" level=info msg="StopPodSandbox for \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\" returns successfully" Feb 13 15:52:24.720910 kubelet[2674]: E0213 15:52:24.720888 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:24.721140 containerd[1485]: time="2025-02-13T15:52:24.721111082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-45d4j,Uid:19512d1a-36c6-49de-8177-c4d469d03fc5,Namespace:kube-system,Attempt:1,}" Feb 13 15:52:24.722823 kubelet[2674]: E0213 15:52:24.722133 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:24.722879 containerd[1485]: time="2025-02-13T15:52:24.722785243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 15:52:24.925286 containerd[1485]: time="2025-02-13T15:52:24.925038907Z" level=error msg="Failed to destroy network for sandbox \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.925790 containerd[1485]: time="2025-02-13T15:52:24.925695599Z" level=error msg="encountered an error cleaning up failed sandbox \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.925905 containerd[1485]: time="2025-02-13T15:52:24.925769808Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-q5kq7,Uid:38b0921d-4d85-4317-86e9-1adbb9d6859a,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.927407 kubelet[2674]: E0213 15:52:24.926253 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.927407 kubelet[2674]: E0213 15:52:24.926307 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7db6857c7b-q5kq7" Feb 13 15:52:24.927407 kubelet[2674]: E0213 15:52:24.926328 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7db6857c7b-q5kq7" Feb 13 15:52:24.927571 kubelet[2674]: E0213 15:52:24.926376 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7db6857c7b-q5kq7_calico-apiserver(38b0921d-4d85-4317-86e9-1adbb9d6859a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7db6857c7b-q5kq7_calico-apiserver(38b0921d-4d85-4317-86e9-1adbb9d6859a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7db6857c7b-q5kq7" podUID="38b0921d-4d85-4317-86e9-1adbb9d6859a" Feb 13 15:52:24.942788 containerd[1485]: time="2025-02-13T15:52:24.942724364Z" level=error msg="Failed to destroy network for sandbox \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.944134 containerd[1485]: time="2025-02-13T15:52:24.944078486Z" level=error msg="Failed to destroy network for sandbox \"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.944331 containerd[1485]: time="2025-02-13T15:52:24.944246801Z" level=error msg="encountered an error cleaning up failed sandbox \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.944435 containerd[1485]: time="2025-02-13T15:52:24.944406761Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mlzzh,Uid:08a0764f-6eaa-4b6b-8f68-f508a36d326a,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.944503 containerd[1485]: time="2025-02-13T15:52:24.944475661Z" level=error msg="encountered an error cleaning up failed sandbox \"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.944580 containerd[1485]: time="2025-02-13T15:52:24.944540423Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d59db744-jwpsr,Uid:6af1b9f5-51e6-4450-99d8-629fc2031232,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.944714 kubelet[2674]: E0213 15:52:24.944684 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.944770 kubelet[2674]: E0213 15:52:24.944750 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mlzzh" Feb 13 15:52:24.944795 kubelet[2674]: E0213 15:52:24.944777 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mlzzh" Feb 13 15:52:24.944848 kubelet[2674]: E0213 15:52:24.944834 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-mlzzh_kube-system(08a0764f-6eaa-4b6b-8f68-f508a36d326a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-mlzzh_kube-system(08a0764f-6eaa-4b6b-8f68-f508a36d326a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-mlzzh" podUID="08a0764f-6eaa-4b6b-8f68-f508a36d326a" Feb 13 15:52:24.945925 kubelet[2674]: E0213 15:52:24.945907 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.946021 kubelet[2674]: E0213 15:52:24.945941 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d59db744-jwpsr" Feb 13 15:52:24.946021 kubelet[2674]: E0213 15:52:24.945957 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d59db744-jwpsr" Feb 13 15:52:24.946195 kubelet[2674]: E0213 15:52:24.946082 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68d59db744-jwpsr_calico-system(6af1b9f5-51e6-4450-99d8-629fc2031232)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68d59db744-jwpsr_calico-system(6af1b9f5-51e6-4450-99d8-629fc2031232)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68d59db744-jwpsr" podUID="6af1b9f5-51e6-4450-99d8-629fc2031232" Feb 13 15:52:24.946258 containerd[1485]: time="2025-02-13T15:52:24.946226206Z" level=error msg="Failed to destroy network for sandbox \"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.946798 containerd[1485]: time="2025-02-13T15:52:24.946617861Z" level=error msg="encountered an error cleaning up failed sandbox \"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.946798 containerd[1485]: time="2025-02-13T15:52:24.946700666Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-lpnmw,Uid:c19fe500-1919-460e-8572-964852191fc0,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.946956 kubelet[2674]: E0213 15:52:24.946931 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.947014 kubelet[2674]: E0213 15:52:24.946988 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7db6857c7b-lpnmw" Feb 13 15:52:24.947089 kubelet[2674]: E0213 
15:52:24.947020 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7db6857c7b-lpnmw" Feb 13 15:52:24.947149 kubelet[2674]: E0213 15:52:24.947108 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7db6857c7b-lpnmw_calico-apiserver(c19fe500-1919-460e-8572-964852191fc0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7db6857c7b-lpnmw_calico-apiserver(c19fe500-1919-460e-8572-964852191fc0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7db6857c7b-lpnmw" podUID="c19fe500-1919-460e-8572-964852191fc0" Feb 13 15:52:24.950984 containerd[1485]: time="2025-02-13T15:52:24.950939751Z" level=error msg="Failed to destroy network for sandbox \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.951398 containerd[1485]: time="2025-02-13T15:52:24.951364629Z" level=error msg="encountered an error cleaning up failed sandbox \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.951454 containerd[1485]: time="2025-02-13T15:52:24.951427427Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-45d4j,Uid:19512d1a-36c6-49de-8177-c4d469d03fc5,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.951641 kubelet[2674]: E0213 15:52:24.951622 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:24.951680 kubelet[2674]: E0213 15:52:24.951656 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-45d4j" Feb 13 15:52:24.951680 kubelet[2674]: E0213 15:52:24.951673 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-45d4j" Feb 13 15:52:24.951727 kubelet[2674]: E0213 15:52:24.951714 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-45d4j_kube-system(19512d1a-36c6-49de-8177-c4d469d03fc5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-45d4j_kube-system(19512d1a-36c6-49de-8177-c4d469d03fc5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-45d4j" podUID="19512d1a-36c6-49de-8177-c4d469d03fc5" Feb 13 15:52:25.428789 systemd[1]: Created slice kubepods-besteffort-pod10d7d66d_1867_4427_ba49_4c93c2b786fc.slice - libcontainer container kubepods-besteffort-pod10d7d66d_1867_4427_ba49_4c93c2b786fc.slice. Feb 13 15:52:25.430827 containerd[1485]: time="2025-02-13T15:52:25.430793436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g6vd2,Uid:10d7d66d-1867-4427-ba49-4c93c2b786fc,Namespace:calico-system,Attempt:0,}" Feb 13 15:52:25.490912 containerd[1485]: time="2025-02-13T15:52:25.490844064Z" level=error msg="Failed to destroy network for sandbox \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:25.491383 containerd[1485]: time="2025-02-13T15:52:25.491345565Z" level=error msg="encountered an error cleaning up failed sandbox \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:25.491452 containerd[1485]: time="2025-02-13T15:52:25.491420164Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g6vd2,Uid:10d7d66d-1867-4427-ba49-4c93c2b786fc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:25.491705 kubelet[2674]: E0213 15:52:25.491669 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:25.491937 kubelet[2674]: E0213 15:52:25.491730 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g6vd2" Feb 13 15:52:25.491937 kubelet[2674]: E0213 15:52:25.491750 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g6vd2" Feb 13 15:52:25.491937 kubelet[2674]: E0213 15:52:25.491807 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-g6vd2_calico-system(10d7d66d-1867-4427-ba49-4c93c2b786fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-g6vd2_calico-system(10d7d66d-1867-4427-ba49-4c93c2b786fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g6vd2" podUID="10d7d66d-1867-4427-ba49-4c93c2b786fc" Feb 13 15:52:25.493623 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e-shm.mount: Deactivated successfully. 
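Every RunPodSandbox failure above carries the same hint: the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico/node writes once it is running with /var/lib/calico/ mounted, and until that file exists no sandbox networking can be set up. The sketch below reproduces that readiness condition as described by the error text; it is an illustrative check, not Calico's actual implementation.

```go
// Minimal sketch of the readiness condition named in the sandbox errors:
// /var/lib/calico/nodename must exist before pod networking can be set up.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename"
	data, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		// This is the state the log shows: calico/node has not started (or has
		// not mounted /var/lib/calico/), so every sandbox setup attempt fails.
		fmt.Println("calico/node not ready:", err)
		return
	}
	if err != nil {
		fmt.Println("unexpected error:", err)
		return
	}
	fmt.Println("calico/node ready on node:", strings.TrimSpace(string(data)))
}
```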
Feb 13 15:52:25.726218 kubelet[2674]: I0213 15:52:25.725508 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f" Feb 13 15:52:25.726638 containerd[1485]: time="2025-02-13T15:52:25.726114432Z" level=info msg="StopPodSandbox for \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\"" Feb 13 15:52:25.726638 containerd[1485]: time="2025-02-13T15:52:25.726338503Z" level=info msg="Ensure that sandbox 679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f in task-service has been cleanup successfully" Feb 13 15:52:25.726638 containerd[1485]: time="2025-02-13T15:52:25.726549478Z" level=info msg="TearDown network for sandbox \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\" successfully" Feb 13 15:52:25.726638 containerd[1485]: time="2025-02-13T15:52:25.726561100Z" level=info msg="StopPodSandbox for \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\" returns successfully" Feb 13 15:52:25.726818 kubelet[2674]: I0213 15:52:25.726794 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e" Feb 13 15:52:25.727492 containerd[1485]: time="2025-02-13T15:52:25.727319944Z" level=info msg="StopPodSandbox for \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\"" Feb 13 15:52:25.727492 containerd[1485]: time="2025-02-13T15:52:25.727483322Z" level=info msg="Ensure that sandbox b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e in task-service has been cleanup successfully" Feb 13 15:52:25.727637 containerd[1485]: time="2025-02-13T15:52:25.727602134Z" level=info msg="StopPodSandbox for \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\"" Feb 13 15:52:25.727777 containerd[1485]: time="2025-02-13T15:52:25.727763567Z" level=info msg="TearDown network for sandbox \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\" successfully" Feb 13 15:52:25.727803 containerd[1485]: time="2025-02-13T15:52:25.727774768Z" level=info msg="StopPodSandbox for \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\" returns successfully" Feb 13 15:52:25.727991 kubelet[2674]: E0213 15:52:25.727962 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:25.729668 systemd[1]: run-netns-cni\x2da0221b84\x2d3701\x2dd4ef\x2d0287\x2d383b4937f431.mount: Deactivated successfully. Feb 13 15:52:25.729838 containerd[1485]: time="2025-02-13T15:52:25.729788577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mlzzh,Uid:08a0764f-6eaa-4b6b-8f68-f508a36d326a,Namespace:kube-system,Attempt:2,}" Feb 13 15:52:25.729937 containerd[1485]: time="2025-02-13T15:52:25.729918771Z" level=info msg="TearDown network for sandbox \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\" successfully" Feb 13 15:52:25.730004 containerd[1485]: time="2025-02-13T15:52:25.729973143Z" level=info msg="StopPodSandbox for \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\" returns successfully" Feb 13 15:52:25.730173 systemd[1]: run-netns-cni\x2d3d668061\x2d3d90\x2d605f\x2da159\x2d91a50065a830.mount: Deactivated successfully. 
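The recurring "Nameserver limits exceeded" messages come from the kubelet trimming the resolv.conf it hands to pods: Kubernetes caps the nameserver list (at three entries), so when the host list is longer only the first few are applied and the rest are dropped, which is why the log shows the applied line "1.1.1.1 1.0.0.1 8.8.8.8". The sketch below illustrates that trimming under those stated assumptions; the path and limit are spelled out for illustration and this is not the kubelet's actual code.

```go
// Minimal sketch of the trimming behind the "Nameserver limits exceeded"
// warning: keep only the first few nameserver entries from resolv.conf.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	const maxNameservers = 3 // assumed cap, matching the three servers in the log line
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Println("open error:", err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded, applying only: %s\n",
			strings.Join(servers[:maxNameservers], " "))
		return
	}
	fmt.Println("nameservers:", strings.Join(servers, " "))
}
```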
Feb 13 15:52:25.730557 containerd[1485]: time="2025-02-13T15:52:25.730525530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g6vd2,Uid:10d7d66d-1867-4427-ba49-4c93c2b786fc,Namespace:calico-system,Attempt:1,}" Feb 13 15:52:25.730941 kubelet[2674]: I0213 15:52:25.730917 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7" Feb 13 15:52:25.731421 containerd[1485]: time="2025-02-13T15:52:25.731395603Z" level=info msg="StopPodSandbox for \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\"" Feb 13 15:52:25.731596 containerd[1485]: time="2025-02-13T15:52:25.731574349Z" level=info msg="Ensure that sandbox a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7 in task-service has been cleanup successfully" Feb 13 15:52:25.731876 containerd[1485]: time="2025-02-13T15:52:25.731853442Z" level=info msg="TearDown network for sandbox \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\" successfully" Feb 13 15:52:25.731876 containerd[1485]: time="2025-02-13T15:52:25.731871125Z" level=info msg="StopPodSandbox for \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\" returns successfully" Feb 13 15:52:25.732264 containerd[1485]: time="2025-02-13T15:52:25.732234377Z" level=info msg="StopPodSandbox for \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\"" Feb 13 15:52:25.732364 containerd[1485]: time="2025-02-13T15:52:25.732334144Z" level=info msg="TearDown network for sandbox \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\" successfully" Feb 13 15:52:25.732364 containerd[1485]: time="2025-02-13T15:52:25.732349753Z" level=info msg="StopPodSandbox for \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\" returns successfully" Feb 13 15:52:25.732495 kubelet[2674]: E0213 15:52:25.732479 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:25.732632 kubelet[2674]: I0213 15:52:25.732612 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3" Feb 13 15:52:25.732764 containerd[1485]: time="2025-02-13T15:52:25.732739385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-45d4j,Uid:19512d1a-36c6-49de-8177-c4d469d03fc5,Namespace:kube-system,Attempt:2,}" Feb 13 15:52:25.732949 containerd[1485]: time="2025-02-13T15:52:25.732924061Z" level=info msg="StopPodSandbox for \"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\"" Feb 13 15:52:25.733149 containerd[1485]: time="2025-02-13T15:52:25.733124306Z" level=info msg="Ensure that sandbox 84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3 in task-service has been cleanup successfully" Feb 13 15:52:25.733359 containerd[1485]: time="2025-02-13T15:52:25.733312821Z" level=info msg="TearDown network for sandbox \"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\" successfully" Feb 13 15:52:25.733359 containerd[1485]: time="2025-02-13T15:52:25.733324813Z" level=info msg="StopPodSandbox for \"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\" returns successfully" Feb 13 15:52:25.733596 containerd[1485]: time="2025-02-13T15:52:25.733566978Z" level=info msg="StopPodSandbox for 
\"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\"" Feb 13 15:52:25.733668 containerd[1485]: time="2025-02-13T15:52:25.733650374Z" level=info msg="TearDown network for sandbox \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\" successfully" Feb 13 15:52:25.733668 containerd[1485]: time="2025-02-13T15:52:25.733663388Z" level=info msg="StopPodSandbox for \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\" returns successfully" Feb 13 15:52:25.733877 kubelet[2674]: I0213 15:52:25.733856 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5" Feb 13 15:52:25.734552 containerd[1485]: time="2025-02-13T15:52:25.734058519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d59db744-jwpsr,Uid:6af1b9f5-51e6-4450-99d8-629fc2031232,Namespace:calico-system,Attempt:2,}" Feb 13 15:52:25.734552 containerd[1485]: time="2025-02-13T15:52:25.734283683Z" level=info msg="StopPodSandbox for \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\"" Feb 13 15:52:25.734552 containerd[1485]: time="2025-02-13T15:52:25.734411563Z" level=info msg="Ensure that sandbox ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5 in task-service has been cleanup successfully" Feb 13 15:52:25.734772 containerd[1485]: time="2025-02-13T15:52:25.734689243Z" level=info msg="TearDown network for sandbox \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\" successfully" Feb 13 15:52:25.734772 containerd[1485]: time="2025-02-13T15:52:25.734709371Z" level=info msg="StopPodSandbox for \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\" returns successfully" Feb 13 15:52:25.734895 kubelet[2674]: I0213 15:52:25.734858 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e" Feb 13 15:52:25.734994 containerd[1485]: time="2025-02-13T15:52:25.734964531Z" level=info msg="StopPodSandbox for \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\"" Feb 13 15:52:25.735165 containerd[1485]: time="2025-02-13T15:52:25.735143576Z" level=info msg="TearDown network for sandbox \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\" successfully" Feb 13 15:52:25.735165 containerd[1485]: time="2025-02-13T15:52:25.735158715Z" level=info msg="StopPodSandbox for \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\" returns successfully" Feb 13 15:52:25.735371 containerd[1485]: time="2025-02-13T15:52:25.735333152Z" level=info msg="StopPodSandbox for \"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\"" Feb 13 15:52:25.735623 containerd[1485]: time="2025-02-13T15:52:25.735599812Z" level=info msg="Ensure that sandbox 24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e in task-service has been cleanup successfully" Feb 13 15:52:25.735672 containerd[1485]: time="2025-02-13T15:52:25.735617695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-q5kq7,Uid:38b0921d-4d85-4317-86e9-1adbb9d6859a,Namespace:calico-apiserver,Attempt:2,}" Feb 13 15:52:25.735782 containerd[1485]: time="2025-02-13T15:52:25.735764632Z" level=info msg="TearDown network for sandbox \"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\" successfully" Feb 13 15:52:25.735782 containerd[1485]: time="2025-02-13T15:52:25.735778678Z" level=info 
msg="StopPodSandbox for \"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\" returns successfully" Feb 13 15:52:25.736081 containerd[1485]: time="2025-02-13T15:52:25.736058894Z" level=info msg="StopPodSandbox for \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\"" Feb 13 15:52:25.736243 containerd[1485]: time="2025-02-13T15:52:25.736211119Z" level=info msg="TearDown network for sandbox \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\" successfully" Feb 13 15:52:25.736243 containerd[1485]: time="2025-02-13T15:52:25.736225306Z" level=info msg="StopPodSandbox for \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\" returns successfully" Feb 13 15:52:25.736692 containerd[1485]: time="2025-02-13T15:52:25.736567257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-lpnmw,Uid:c19fe500-1919-460e-8572-964852191fc0,Namespace:calico-apiserver,Attempt:2,}" Feb 13 15:52:25.996261 containerd[1485]: time="2025-02-13T15:52:25.996205133Z" level=error msg="Failed to destroy network for sandbox \"ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:25.996931 containerd[1485]: time="2025-02-13T15:52:25.996908053Z" level=error msg="encountered an error cleaning up failed sandbox \"ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:25.997060 containerd[1485]: time="2025-02-13T15:52:25.997023960Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mlzzh,Uid:08a0764f-6eaa-4b6b-8f68-f508a36d326a,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:25.997346 kubelet[2674]: E0213 15:52:25.997324 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:25.997632 kubelet[2674]: E0213 15:52:25.997619 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mlzzh" Feb 13 15:52:25.997699 kubelet[2674]: E0213 15:52:25.997691 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mlzzh" Feb 13 15:52:25.997799 kubelet[2674]: E0213 15:52:25.997788 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-mlzzh_kube-system(08a0764f-6eaa-4b6b-8f68-f508a36d326a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-mlzzh_kube-system(08a0764f-6eaa-4b6b-8f68-f508a36d326a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-mlzzh" podUID="08a0764f-6eaa-4b6b-8f68-f508a36d326a" Feb 13 15:52:26.005727 containerd[1485]: time="2025-02-13T15:52:26.005663011Z" level=error msg="Failed to destroy network for sandbox \"88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:26.006428 containerd[1485]: time="2025-02-13T15:52:26.006370058Z" level=error msg="encountered an error cleaning up failed sandbox \"88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:26.006536 containerd[1485]: time="2025-02-13T15:52:26.006497427Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g6vd2,Uid:10d7d66d-1867-4427-ba49-4c93c2b786fc,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:26.006874 kubelet[2674]: E0213 15:52:26.006845 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:26.006948 kubelet[2674]: E0213 15:52:26.006902 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g6vd2" Feb 13 15:52:26.006948 kubelet[2674]: E0213 15:52:26.006924 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g6vd2" Feb 13 15:52:26.007014 kubelet[2674]: E0213 15:52:26.006971 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-g6vd2_calico-system(10d7d66d-1867-4427-ba49-4c93c2b786fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-g6vd2_calico-system(10d7d66d-1867-4427-ba49-4c93c2b786fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g6vd2" podUID="10d7d66d-1867-4427-ba49-4c93c2b786fc" Feb 13 15:52:26.010688 containerd[1485]: time="2025-02-13T15:52:26.010591891Z" level=error msg="Failed to destroy network for sandbox \"b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:26.011065 containerd[1485]: time="2025-02-13T15:52:26.011000807Z" level=error msg="encountered an error cleaning up failed sandbox \"b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:26.011113 containerd[1485]: time="2025-02-13T15:52:26.011082721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-45d4j,Uid:19512d1a-36c6-49de-8177-c4d469d03fc5,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:26.012214 kubelet[2674]: E0213 15:52:26.011268 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:26.012214 kubelet[2674]: E0213 15:52:26.011302 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-45d4j" Feb 13 15:52:26.012214 kubelet[2674]: E0213 15:52:26.011321 2674 
kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-45d4j" Feb 13 15:52:26.012314 kubelet[2674]: E0213 15:52:26.011363 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-45d4j_kube-system(19512d1a-36c6-49de-8177-c4d469d03fc5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-45d4j_kube-system(19512d1a-36c6-49de-8177-c4d469d03fc5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-45d4j" podUID="19512d1a-36c6-49de-8177-c4d469d03fc5" Feb 13 15:52:26.014428 containerd[1485]: time="2025-02-13T15:52:26.014313925Z" level=error msg="Failed to destroy network for sandbox \"edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:26.014823 containerd[1485]: time="2025-02-13T15:52:26.014796531Z" level=error msg="encountered an error cleaning up failed sandbox \"edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:26.014895 containerd[1485]: time="2025-02-13T15:52:26.014848057Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-q5kq7,Uid:38b0921d-4d85-4317-86e9-1adbb9d6859a,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:26.015147 kubelet[2674]: E0213 15:52:26.015023 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:26.015147 kubelet[2674]: E0213 15:52:26.015089 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7db6857c7b-q5kq7" Feb 13 15:52:26.015147 kubelet[2674]: E0213 15:52:26.015110 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7db6857c7b-q5kq7" Feb 13 15:52:26.015239 kubelet[2674]: E0213 15:52:26.015150 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7db6857c7b-q5kq7_calico-apiserver(38b0921d-4d85-4317-86e9-1adbb9d6859a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7db6857c7b-q5kq7_calico-apiserver(38b0921d-4d85-4317-86e9-1adbb9d6859a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7db6857c7b-q5kq7" podUID="38b0921d-4d85-4317-86e9-1adbb9d6859a" Feb 13 15:52:26.018759 containerd[1485]: time="2025-02-13T15:52:26.018719583Z" level=error msg="Failed to destroy network for sandbox \"ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:26.019397 containerd[1485]: time="2025-02-13T15:52:26.019265937Z" level=error msg="encountered an error cleaning up failed sandbox \"ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:26.019397 containerd[1485]: time="2025-02-13T15:52:26.019319027Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d59db744-jwpsr,Uid:6af1b9f5-51e6-4450-99d8-629fc2031232,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:26.019606 kubelet[2674]: E0213 15:52:26.019542 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:26.019606 kubelet[2674]: E0213 15:52:26.019590 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d59db744-jwpsr" Feb 13 15:52:26.019672 kubelet[2674]: E0213 15:52:26.019613 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d59db744-jwpsr" Feb 13 15:52:26.019672 kubelet[2674]: E0213 15:52:26.019666 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68d59db744-jwpsr_calico-system(6af1b9f5-51e6-4450-99d8-629fc2031232)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68d59db744-jwpsr_calico-system(6af1b9f5-51e6-4450-99d8-629fc2031232)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68d59db744-jwpsr" podUID="6af1b9f5-51e6-4450-99d8-629fc2031232" Feb 13 15:52:26.020729 containerd[1485]: time="2025-02-13T15:52:26.020680863Z" level=error msg="Failed to destroy network for sandbox \"26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:26.021106 containerd[1485]: time="2025-02-13T15:52:26.021077306Z" level=error msg="encountered an error cleaning up failed sandbox \"26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:26.021156 containerd[1485]: time="2025-02-13T15:52:26.021117031Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-lpnmw,Uid:c19fe500-1919-460e-8572-964852191fc0,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:26.021316 kubelet[2674]: E0213 15:52:26.021298 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:26.021396 kubelet[2674]: E0213 15:52:26.021329 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7db6857c7b-lpnmw" Feb 13 15:52:26.021396 kubelet[2674]: E0213 15:52:26.021350 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7db6857c7b-lpnmw" Feb 13 15:52:26.021396 kubelet[2674]: E0213 15:52:26.021383 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7db6857c7b-lpnmw_calico-apiserver(c19fe500-1919-460e-8572-964852191fc0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7db6857c7b-lpnmw_calico-apiserver(c19fe500-1919-460e-8572-964852191fc0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7db6857c7b-lpnmw" podUID="c19fe500-1919-460e-8572-964852191fc0" Feb 13 15:52:26.201633 systemd[1]: run-netns-cni\x2dac619ebf\x2d7538\x2dde69\x2d774a\x2d205e341cad09.mount: Deactivated successfully. Feb 13 15:52:26.201740 systemd[1]: run-netns-cni\x2d9ef2309c\x2d95c1\x2d0281\x2dd657\x2d02c53ba2487e.mount: Deactivated successfully. Feb 13 15:52:26.201811 systemd[1]: run-netns-cni\x2d200d6c14\x2dffb1\x2d3fd4\x2d38f7\x2d8b4a857fc1aa.mount: Deactivated successfully. Feb 13 15:52:26.201879 systemd[1]: run-netns-cni\x2dfcbf6d9a\x2d9ee0\x2d47e9\x2d4752\x2d295f57c850a9.mount: Deactivated successfully. Feb 13 15:52:26.746364 kubelet[2674]: I0213 15:52:26.746321 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c" Feb 13 15:52:26.751806 containerd[1485]: time="2025-02-13T15:52:26.751434393Z" level=info msg="StopPodSandbox for \"88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c\"" Feb 13 15:52:26.752375 containerd[1485]: time="2025-02-13T15:52:26.752353439Z" level=info msg="Ensure that sandbox 88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c in task-service has been cleanup successfully" Feb 13 15:52:26.755540 containerd[1485]: time="2025-02-13T15:52:26.755148383Z" level=info msg="TearDown network for sandbox \"88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c\" successfully" Feb 13 15:52:26.755540 containerd[1485]: time="2025-02-13T15:52:26.755525801Z" level=info msg="StopPodSandbox for \"88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c\" returns successfully" Feb 13 15:52:26.755530 systemd[1]: run-netns-cni\x2dd98306bb\x2dfd56\x2da16e\x2d4170\x2d6cfcbb23e1f9.mount: Deactivated successfully. 
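[Editor's note] Every sandbox add and delete above fails with the same CNI error: the Calico plugin cannot stat /var/lib/calico/nodename. That file is written by the calico/node container once it is running with /var/lib/calico mounted from the host; until it exists, RunPodSandbox keeps failing, kubelet retries with an increasing Attempt counter, and systemd cleans up the orphaned run-netns-cni mounts, which is exactly the loop recorded here. The sketch below is a hypothetical preflight helper in Go that reproduces the check implied by the error text; it is not Calico's actual code, and the error wording is copied from the log for illustration.

    // nodename_check.go - hypothetical preflight helper, NOT Calico's code.
    // Verifies that the node-name file written by calico/node is present
    // before attempting any CNI network setup.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    const nodenameFile = "/var/lib/calico/nodename"

    func readNodename() (string, error) {
        data, err := os.ReadFile(nodenameFile)
        if os.IsNotExist(err) {
            // Same guidance the plugin logs: the calico/node container must be
            // running and must have /var/lib/calico/ mounted from the host.
            return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
        }
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        name, err := readNodename()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("calico node name:", name)
    }

In practice the usual remediation is to confirm the calico-node pods are actually running on this node (for example, kubectl get pods -n calico-system -o wide; the namespace may instead be kube-system depending on how Calico was installed) rather than to restart the failing workloads, since the sandbox retries resolve on their own once the nodename file appears.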
Feb 13 15:52:26.758894 containerd[1485]: time="2025-02-13T15:52:26.758858365Z" level=info msg="StopPodSandbox for \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\"" Feb 13 15:52:26.759005 containerd[1485]: time="2025-02-13T15:52:26.758986014Z" level=info msg="TearDown network for sandbox \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\" successfully" Feb 13 15:52:26.759005 containerd[1485]: time="2025-02-13T15:52:26.758999469Z" level=info msg="StopPodSandbox for \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\" returns successfully" Feb 13 15:52:26.759131 kubelet[2674]: I0213 15:52:26.759108 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb" Feb 13 15:52:26.761573 containerd[1485]: time="2025-02-13T15:52:26.761543904Z" level=info msg="StopPodSandbox for \"ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb\"" Feb 13 15:52:26.761749 containerd[1485]: time="2025-02-13T15:52:26.761729283Z" level=info msg="Ensure that sandbox ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb in task-service has been cleanup successfully" Feb 13 15:52:26.764158 containerd[1485]: time="2025-02-13T15:52:26.764003139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g6vd2,Uid:10d7d66d-1867-4427-ba49-4c93c2b786fc,Namespace:calico-system,Attempt:2,}" Feb 13 15:52:26.764245 systemd[1]: run-netns-cni\x2debae1cd2\x2df37b\x2dee16\x2d916c\x2d2ad52fdd58f9.mount: Deactivated successfully. Feb 13 15:52:26.764848 containerd[1485]: time="2025-02-13T15:52:26.764748838Z" level=info msg="TearDown network for sandbox \"ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb\" successfully" Feb 13 15:52:26.764848 containerd[1485]: time="2025-02-13T15:52:26.764764327Z" level=info msg="StopPodSandbox for \"ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb\" returns successfully" Feb 13 15:52:26.765400 containerd[1485]: time="2025-02-13T15:52:26.765373741Z" level=info msg="StopPodSandbox for \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\"" Feb 13 15:52:26.766991 kubelet[2674]: I0213 15:52:26.766946 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a" Feb 13 15:52:26.767460 containerd[1485]: time="2025-02-13T15:52:26.766983432Z" level=info msg="TearDown network for sandbox \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\" successfully" Feb 13 15:52:26.767460 containerd[1485]: time="2025-02-13T15:52:26.767000424Z" level=info msg="StopPodSandbox for \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\" returns successfully" Feb 13 15:52:26.767460 containerd[1485]: time="2025-02-13T15:52:26.767433636Z" level=info msg="StopPodSandbox for \"ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a\"" Feb 13 15:52:26.767542 containerd[1485]: time="2025-02-13T15:52:26.767529065Z" level=info msg="StopPodSandbox for \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\"" Feb 13 15:52:26.767668 containerd[1485]: time="2025-02-13T15:52:26.767640034Z" level=info msg="Ensure that sandbox ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a in task-service has been cleanup successfully" Feb 13 15:52:26.768183 containerd[1485]: time="2025-02-13T15:52:26.767932062Z" level=info msg="TearDown network for sandbox 
\"ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a\" successfully" Feb 13 15:52:26.768183 containerd[1485]: time="2025-02-13T15:52:26.767950857Z" level=info msg="StopPodSandbox for \"ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a\" returns successfully" Feb 13 15:52:26.768683 containerd[1485]: time="2025-02-13T15:52:26.768627026Z" level=info msg="TearDown network for sandbox \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\" successfully" Feb 13 15:52:26.768683 containerd[1485]: time="2025-02-13T15:52:26.768645701Z" level=info msg="StopPodSandbox for \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\" returns successfully" Feb 13 15:52:26.769776 kubelet[2674]: E0213 15:52:26.769749 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:26.769923 systemd[1]: run-netns-cni\x2d6174f73f\x2d9209\x2d2122\x2d0244\x2dcc4396fd0885.mount: Deactivated successfully. Feb 13 15:52:26.771864 containerd[1485]: time="2025-02-13T15:52:26.771839575Z" level=info msg="StopPodSandbox for \"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\"" Feb 13 15:52:26.771935 containerd[1485]: time="2025-02-13T15:52:26.771918733Z" level=info msg="TearDown network for sandbox \"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\" successfully" Feb 13 15:52:26.771935 containerd[1485]: time="2025-02-13T15:52:26.771928482Z" level=info msg="StopPodSandbox for \"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\" returns successfully" Feb 13 15:52:26.773596 containerd[1485]: time="2025-02-13T15:52:26.773289796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mlzzh,Uid:08a0764f-6eaa-4b6b-8f68-f508a36d326a,Namespace:kube-system,Attempt:3,}" Feb 13 15:52:26.773877 containerd[1485]: time="2025-02-13T15:52:26.773849887Z" level=info msg="StopPodSandbox for \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\"" Feb 13 15:52:26.773941 containerd[1485]: time="2025-02-13T15:52:26.773924858Z" level=info msg="TearDown network for sandbox \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\" successfully" Feb 13 15:52:26.773941 containerd[1485]: time="2025-02-13T15:52:26.773936269Z" level=info msg="StopPodSandbox for \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\" returns successfully" Feb 13 15:52:26.775230 containerd[1485]: time="2025-02-13T15:52:26.775197967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d59db744-jwpsr,Uid:6af1b9f5-51e6-4450-99d8-629fc2031232,Namespace:calico-system,Attempt:3,}" Feb 13 15:52:26.776606 kubelet[2674]: I0213 15:52:26.776576 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e" Feb 13 15:52:26.778246 containerd[1485]: time="2025-02-13T15:52:26.777084687Z" level=info msg="StopPodSandbox for \"edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e\"" Feb 13 15:52:26.778246 containerd[1485]: time="2025-02-13T15:52:26.777535232Z" level=info msg="Ensure that sandbox edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e in task-service has been cleanup successfully" Feb 13 15:52:26.778246 containerd[1485]: time="2025-02-13T15:52:26.778033247Z" level=info msg="TearDown network for sandbox 
\"edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e\" successfully" Feb 13 15:52:26.778246 containerd[1485]: time="2025-02-13T15:52:26.778063304Z" level=info msg="StopPodSandbox for \"edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e\" returns successfully" Feb 13 15:52:26.780141 systemd[1]: run-netns-cni\x2d99c22b40\x2d7d79\x2df8ff\x2dc1b9\x2d482d1bd3d7ca.mount: Deactivated successfully. Feb 13 15:52:26.781261 containerd[1485]: time="2025-02-13T15:52:26.781235998Z" level=info msg="StopPodSandbox for \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\"" Feb 13 15:52:26.781350 containerd[1485]: time="2025-02-13T15:52:26.781332038Z" level=info msg="TearDown network for sandbox \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\" successfully" Feb 13 15:52:26.781391 containerd[1485]: time="2025-02-13T15:52:26.781349931Z" level=info msg="StopPodSandbox for \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\" returns successfully" Feb 13 15:52:26.781911 containerd[1485]: time="2025-02-13T15:52:26.781879475Z" level=info msg="StopPodSandbox for \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\"" Feb 13 15:52:26.781984 containerd[1485]: time="2025-02-13T15:52:26.781956079Z" level=info msg="TearDown network for sandbox \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\" successfully" Feb 13 15:52:26.781984 containerd[1485]: time="2025-02-13T15:52:26.781976547Z" level=info msg="StopPodSandbox for \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\" returns successfully" Feb 13 15:52:26.782470 containerd[1485]: time="2025-02-13T15:52:26.782438594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-q5kq7,Uid:38b0921d-4d85-4317-86e9-1adbb9d6859a,Namespace:calico-apiserver,Attempt:3,}" Feb 13 15:52:26.782939 kubelet[2674]: I0213 15:52:26.782902 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70" Feb 13 15:52:26.783331 containerd[1485]: time="2025-02-13T15:52:26.783309138Z" level=info msg="StopPodSandbox for \"b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70\"" Feb 13 15:52:26.783505 containerd[1485]: time="2025-02-13T15:52:26.783487302Z" level=info msg="Ensure that sandbox b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70 in task-service has been cleanup successfully" Feb 13 15:52:26.783800 containerd[1485]: time="2025-02-13T15:52:26.783777938Z" level=info msg="TearDown network for sandbox \"b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70\" successfully" Feb 13 15:52:26.783800 containerd[1485]: time="2025-02-13T15:52:26.783798356Z" level=info msg="StopPodSandbox for \"b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70\" returns successfully" Feb 13 15:52:26.785701 containerd[1485]: time="2025-02-13T15:52:26.785668876Z" level=info msg="StopPodSandbox for \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\"" Feb 13 15:52:26.785772 containerd[1485]: time="2025-02-13T15:52:26.785754757Z" level=info msg="TearDown network for sandbox \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\" successfully" Feb 13 15:52:26.785805 containerd[1485]: time="2025-02-13T15:52:26.785771057Z" level=info msg="StopPodSandbox for \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\" returns successfully" Feb 13 15:52:26.786333 containerd[1485]: 
time="2025-02-13T15:52:26.786309498Z" level=info msg="StopPodSandbox for \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\"" Feb 13 15:52:26.786397 containerd[1485]: time="2025-02-13T15:52:26.786381694Z" level=info msg="TearDown network for sandbox \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\" successfully" Feb 13 15:52:26.786444 containerd[1485]: time="2025-02-13T15:52:26.786395319Z" level=info msg="StopPodSandbox for \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\" returns successfully" Feb 13 15:52:26.786654 kubelet[2674]: E0213 15:52:26.786619 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:26.787009 containerd[1485]: time="2025-02-13T15:52:26.786988643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-45d4j,Uid:19512d1a-36c6-49de-8177-c4d469d03fc5,Namespace:kube-system,Attempt:3,}" Feb 13 15:52:26.787986 kubelet[2674]: I0213 15:52:26.787953 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e" Feb 13 15:52:26.788601 containerd[1485]: time="2025-02-13T15:52:26.788580310Z" level=info msg="StopPodSandbox for \"26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e\"" Feb 13 15:52:26.788740 containerd[1485]: time="2025-02-13T15:52:26.788722266Z" level=info msg="Ensure that sandbox 26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e in task-service has been cleanup successfully" Feb 13 15:52:26.788985 containerd[1485]: time="2025-02-13T15:52:26.788951837Z" level=info msg="TearDown network for sandbox \"26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e\" successfully" Feb 13 15:52:26.788985 containerd[1485]: time="2025-02-13T15:52:26.788976002Z" level=info msg="StopPodSandbox for \"26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e\" returns successfully" Feb 13 15:52:26.789220 containerd[1485]: time="2025-02-13T15:52:26.789199952Z" level=info msg="StopPodSandbox for \"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\"" Feb 13 15:52:26.789285 containerd[1485]: time="2025-02-13T15:52:26.789270434Z" level=info msg="TearDown network for sandbox \"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\" successfully" Feb 13 15:52:26.789285 containerd[1485]: time="2025-02-13T15:52:26.789281204Z" level=info msg="StopPodSandbox for \"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\" returns successfully" Feb 13 15:52:26.789497 containerd[1485]: time="2025-02-13T15:52:26.789476451Z" level=info msg="StopPodSandbox for \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\"" Feb 13 15:52:26.789564 containerd[1485]: time="2025-02-13T15:52:26.789548637Z" level=info msg="TearDown network for sandbox \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\" successfully" Feb 13 15:52:26.789564 containerd[1485]: time="2025-02-13T15:52:26.789559477Z" level=info msg="StopPodSandbox for \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\" returns successfully" Feb 13 15:52:26.790108 containerd[1485]: time="2025-02-13T15:52:26.790077939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-lpnmw,Uid:c19fe500-1919-460e-8572-964852191fc0,Namespace:calico-apiserver,Attempt:3,}" Feb 13 15:52:27.199512 systemd[1]: 
run-netns-cni\x2d42e6cdcf\x2d88eb\x2df289\x2dfd2e\x2ddfcb6848fd6f.mount: Deactivated successfully. Feb 13 15:52:27.199632 systemd[1]: run-netns-cni\x2d5e052e4b\x2de4c2\x2dbd42\x2d2df6\x2db3320899501c.mount: Deactivated successfully. Feb 13 15:52:27.438136 systemd[1]: Started sshd@12-10.0.0.80:22-10.0.0.1:36872.service - OpenSSH per-connection server daemon (10.0.0.1:36872). Feb 13 15:52:27.512629 sshd[4186]: Accepted publickey for core from 10.0.0.1 port 36872 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:52:27.514583 sshd-session[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:52:27.527472 systemd-logind[1471]: New session 13 of user core. Feb 13 15:52:27.533222 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:52:27.676899 containerd[1485]: time="2025-02-13T15:52:27.676838926Z" level=error msg="Failed to destroy network for sandbox \"115a7b90fd74f188605948db6c4a642fa6a2bc7b610ff5197e80bd7d90895b22\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:27.678351 containerd[1485]: time="2025-02-13T15:52:27.677261589Z" level=error msg="encountered an error cleaning up failed sandbox \"115a7b90fd74f188605948db6c4a642fa6a2bc7b610ff5197e80bd7d90895b22\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:27.678351 containerd[1485]: time="2025-02-13T15:52:27.677317364Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-q5kq7,Uid:38b0921d-4d85-4317-86e9-1adbb9d6859a,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"115a7b90fd74f188605948db6c4a642fa6a2bc7b610ff5197e80bd7d90895b22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:27.678508 kubelet[2674]: E0213 15:52:27.678484 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"115a7b90fd74f188605948db6c4a642fa6a2bc7b610ff5197e80bd7d90895b22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:27.678569 kubelet[2674]: E0213 15:52:27.678554 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"115a7b90fd74f188605948db6c4a642fa6a2bc7b610ff5197e80bd7d90895b22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7db6857c7b-q5kq7" Feb 13 15:52:27.678600 kubelet[2674]: E0213 15:52:27.678581 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"115a7b90fd74f188605948db6c4a642fa6a2bc7b610ff5197e80bd7d90895b22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7db6857c7b-q5kq7" Feb 13 15:52:27.678653 kubelet[2674]: E0213 15:52:27.678639 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7db6857c7b-q5kq7_calico-apiserver(38b0921d-4d85-4317-86e9-1adbb9d6859a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7db6857c7b-q5kq7_calico-apiserver(38b0921d-4d85-4317-86e9-1adbb9d6859a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"115a7b90fd74f188605948db6c4a642fa6a2bc7b610ff5197e80bd7d90895b22\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7db6857c7b-q5kq7" podUID="38b0921d-4d85-4317-86e9-1adbb9d6859a" Feb 13 15:52:27.684282 sshd[4202]: Connection closed by 10.0.0.1 port 36872 Feb 13 15:52:27.682791 sshd-session[4186]: pam_unix(sshd:session): session closed for user core Feb 13 15:52:27.688994 systemd[1]: sshd@12-10.0.0.80:22-10.0.0.1:36872.service: Deactivated successfully. Feb 13 15:52:27.689272 systemd-logind[1471]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:52:27.692634 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:52:27.694578 systemd-logind[1471]: Removed session 13. Feb 13 15:52:27.701838 containerd[1485]: time="2025-02-13T15:52:27.701088982Z" level=error msg="Failed to destroy network for sandbox \"974f2e17d153fc4e2c7a7c80c26980ce937174687900b289462434d666fa80f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:27.701838 containerd[1485]: time="2025-02-13T15:52:27.701810557Z" level=error msg="encountered an error cleaning up failed sandbox \"974f2e17d153fc4e2c7a7c80c26980ce937174687900b289462434d666fa80f4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:27.702015 containerd[1485]: time="2025-02-13T15:52:27.701876280Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-lpnmw,Uid:c19fe500-1919-460e-8572-964852191fc0,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"974f2e17d153fc4e2c7a7c80c26980ce937174687900b289462434d666fa80f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:27.702209 kubelet[2674]: E0213 15:52:27.702173 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"974f2e17d153fc4e2c7a7c80c26980ce937174687900b289462434d666fa80f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:27.702323 kubelet[2674]: E0213 15:52:27.702302 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"974f2e17d153fc4e2c7a7c80c26980ce937174687900b289462434d666fa80f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7db6857c7b-lpnmw" Feb 13 15:52:27.702374 kubelet[2674]: E0213 15:52:27.702343 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"974f2e17d153fc4e2c7a7c80c26980ce937174687900b289462434d666fa80f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7db6857c7b-lpnmw" Feb 13 15:52:27.702696 kubelet[2674]: E0213 15:52:27.702410 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7db6857c7b-lpnmw_calico-apiserver(c19fe500-1919-460e-8572-964852191fc0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7db6857c7b-lpnmw_calico-apiserver(c19fe500-1919-460e-8572-964852191fc0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"974f2e17d153fc4e2c7a7c80c26980ce937174687900b289462434d666fa80f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7db6857c7b-lpnmw" podUID="c19fe500-1919-460e-8572-964852191fc0" Feb 13 15:52:27.705254 containerd[1485]: time="2025-02-13T15:52:27.705215606Z" level=error msg="Failed to destroy network for sandbox \"df6f7abed86db3382c32d12030c43f2555823cf6e979dd79f3a63308cf66d6f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:27.706100 containerd[1485]: time="2025-02-13T15:52:27.705792910Z" level=error msg="Failed to destroy network for sandbox \"a907415d608139846599318ab511c71cb2815fc522f90b9abae68d6a17bd3efe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:27.706179 containerd[1485]: time="2025-02-13T15:52:27.706147164Z" level=error msg="encountered an error cleaning up failed sandbox \"a907415d608139846599318ab511c71cb2815fc522f90b9abae68d6a17bd3efe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:27.706229 containerd[1485]: time="2025-02-13T15:52:27.706210814Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mlzzh,Uid:08a0764f-6eaa-4b6b-8f68-f508a36d326a,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"a907415d608139846599318ab511c71cb2815fc522f90b9abae68d6a17bd3efe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:27.706465 kubelet[2674]: E0213 15:52:27.706446 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a907415d608139846599318ab511c71cb2815fc522f90b9abae68d6a17bd3efe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:27.706560 kubelet[2674]: E0213 15:52:27.706551 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a907415d608139846599318ab511c71cb2815fc522f90b9abae68d6a17bd3efe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mlzzh" Feb 13 15:52:27.706644 kubelet[2674]: E0213 15:52:27.706627 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a907415d608139846599318ab511c71cb2815fc522f90b9abae68d6a17bd3efe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mlzzh" Feb 13 15:52:27.706778 kubelet[2674]: E0213 15:52:27.706764 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-mlzzh_kube-system(08a0764f-6eaa-4b6b-8f68-f508a36d326a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-mlzzh_kube-system(08a0764f-6eaa-4b6b-8f68-f508a36d326a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a907415d608139846599318ab511c71cb2815fc522f90b9abae68d6a17bd3efe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-mlzzh" podUID="08a0764f-6eaa-4b6b-8f68-f508a36d326a" Feb 13 15:52:27.717742 containerd[1485]: time="2025-02-13T15:52:27.717701236Z" level=error msg="Failed to destroy network for sandbox \"89d75f92637d84161a3a05d642197773d8fe53597397d47ba98f36f363042782\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:27.718730 containerd[1485]: time="2025-02-13T15:52:27.718702204Z" level=error msg="encountered an error cleaning up failed sandbox \"89d75f92637d84161a3a05d642197773d8fe53597397d47ba98f36f363042782\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:27.719469 containerd[1485]: time="2025-02-13T15:52:27.719442053Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d59db744-jwpsr,Uid:6af1b9f5-51e6-4450-99d8-629fc2031232,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"89d75f92637d84161a3a05d642197773d8fe53597397d47ba98f36f363042782\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:27.719568 containerd[1485]: 
time="2025-02-13T15:52:27.718757508Z" level=error msg="Failed to destroy network for sandbox \"ece0ba20c877e7e859caada2183dd1ddc532d1528c0bed4bc30b53e69cf4b158\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:27.719630 containerd[1485]: time="2025-02-13T15:52:27.719530378Z" level=error msg="encountered an error cleaning up failed sandbox \"df6f7abed86db3382c32d12030c43f2555823cf6e979dd79f3a63308cf66d6f4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:27.719675 containerd[1485]: time="2025-02-13T15:52:27.719656415Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g6vd2,Uid:10d7d66d-1867-4427-ba49-4c93c2b786fc,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"df6f7abed86db3382c32d12030c43f2555823cf6e979dd79f3a63308cf66d6f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:27.719766 kubelet[2674]: E0213 15:52:27.719730 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89d75f92637d84161a3a05d642197773d8fe53597397d47ba98f36f363042782\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:27.719889 kubelet[2674]: E0213 15:52:27.719783 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89d75f92637d84161a3a05d642197773d8fe53597397d47ba98f36f363042782\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d59db744-jwpsr" Feb 13 15:52:27.719889 kubelet[2674]: E0213 15:52:27.719807 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89d75f92637d84161a3a05d642197773d8fe53597397d47ba98f36f363042782\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d59db744-jwpsr" Feb 13 15:52:27.719889 kubelet[2674]: E0213 15:52:27.719856 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68d59db744-jwpsr_calico-system(6af1b9f5-51e6-4450-99d8-629fc2031232)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68d59db744-jwpsr_calico-system(6af1b9f5-51e6-4450-99d8-629fc2031232)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89d75f92637d84161a3a05d642197773d8fe53597397d47ba98f36f363042782\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-68d59db744-jwpsr" podUID="6af1b9f5-51e6-4450-99d8-629fc2031232" Feb 13 15:52:27.720064 containerd[1485]: time="2025-02-13T15:52:27.719867020Z" level=error msg="encountered an error cleaning up failed sandbox \"ece0ba20c877e7e859caada2183dd1ddc532d1528c0bed4bc30b53e69cf4b158\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:27.720064 containerd[1485]: time="2025-02-13T15:52:27.719908878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-45d4j,Uid:19512d1a-36c6-49de-8177-c4d469d03fc5,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"ece0ba20c877e7e859caada2183dd1ddc532d1528c0bed4bc30b53e69cf4b158\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:27.720729 kubelet[2674]: E0213 15:52:27.720708 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ece0ba20c877e7e859caada2183dd1ddc532d1528c0bed4bc30b53e69cf4b158\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:27.720813 kubelet[2674]: E0213 15:52:27.720741 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ece0ba20c877e7e859caada2183dd1ddc532d1528c0bed4bc30b53e69cf4b158\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-45d4j" Feb 13 15:52:27.720813 kubelet[2674]: E0213 15:52:27.720743 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df6f7abed86db3382c32d12030c43f2555823cf6e979dd79f3a63308cf66d6f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:27.720923 kubelet[2674]: E0213 15:52:27.720809 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df6f7abed86db3382c32d12030c43f2555823cf6e979dd79f3a63308cf66d6f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g6vd2" Feb 13 15:52:27.720923 kubelet[2674]: E0213 15:52:27.720841 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df6f7abed86db3382c32d12030c43f2555823cf6e979dd79f3a63308cf66d6f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g6vd2" Feb 13 15:52:27.720923 kubelet[2674]: E0213 15:52:27.720759 2674 
kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ece0ba20c877e7e859caada2183dd1ddc532d1528c0bed4bc30b53e69cf4b158\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-45d4j" Feb 13 15:52:27.721059 kubelet[2674]: E0213 15:52:27.720896 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-45d4j_kube-system(19512d1a-36c6-49de-8177-c4d469d03fc5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-45d4j_kube-system(19512d1a-36c6-49de-8177-c4d469d03fc5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ece0ba20c877e7e859caada2183dd1ddc532d1528c0bed4bc30b53e69cf4b158\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-45d4j" podUID="19512d1a-36c6-49de-8177-c4d469d03fc5" Feb 13 15:52:27.721059 kubelet[2674]: E0213 15:52:27.720900 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-g6vd2_calico-system(10d7d66d-1867-4427-ba49-4c93c2b786fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-g6vd2_calico-system(10d7d66d-1867-4427-ba49-4c93c2b786fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df6f7abed86db3382c32d12030c43f2555823cf6e979dd79f3a63308cf66d6f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g6vd2" podUID="10d7d66d-1867-4427-ba49-4c93c2b786fc" Feb 13 15:52:27.795985 kubelet[2674]: I0213 15:52:27.795863 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ece0ba20c877e7e859caada2183dd1ddc532d1528c0bed4bc30b53e69cf4b158" Feb 13 15:52:27.799616 containerd[1485]: time="2025-02-13T15:52:27.799187837Z" level=info msg="StopPodSandbox for \"ece0ba20c877e7e859caada2183dd1ddc532d1528c0bed4bc30b53e69cf4b158\"" Feb 13 15:52:27.799616 containerd[1485]: time="2025-02-13T15:52:27.799382093Z" level=info msg="Ensure that sandbox ece0ba20c877e7e859caada2183dd1ddc532d1528c0bed4bc30b53e69cf4b158 in task-service has been cleanup successfully" Feb 13 15:52:27.800592 containerd[1485]: time="2025-02-13T15:52:27.800353716Z" level=info msg="TearDown network for sandbox \"ece0ba20c877e7e859caada2183dd1ddc532d1528c0bed4bc30b53e69cf4b158\" successfully" Feb 13 15:52:27.800592 containerd[1485]: time="2025-02-13T15:52:27.800371479Z" level=info msg="StopPodSandbox for \"ece0ba20c877e7e859caada2183dd1ddc532d1528c0bed4bc30b53e69cf4b158\" returns successfully" Feb 13 15:52:27.800758 containerd[1485]: time="2025-02-13T15:52:27.800727296Z" level=info msg="StopPodSandbox for \"b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70\"" Feb 13 15:52:27.800869 containerd[1485]: time="2025-02-13T15:52:27.800832063Z" level=info msg="TearDown network for sandbox \"b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70\" successfully" Feb 13 15:52:27.800869 containerd[1485]: time="2025-02-13T15:52:27.800848093Z" level=info 
msg="StopPodSandbox for \"b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70\" returns successfully" Feb 13 15:52:27.801398 containerd[1485]: time="2025-02-13T15:52:27.801373018Z" level=info msg="StopPodSandbox for \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\"" Feb 13 15:52:27.801472 containerd[1485]: time="2025-02-13T15:52:27.801454271Z" level=info msg="TearDown network for sandbox \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\" successfully" Feb 13 15:52:27.801501 containerd[1485]: time="2025-02-13T15:52:27.801469359Z" level=info msg="StopPodSandbox for \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\" returns successfully" Feb 13 15:52:27.801604 kubelet[2674]: I0213 15:52:27.801552 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="974f2e17d153fc4e2c7a7c80c26980ce937174687900b289462434d666fa80f4" Feb 13 15:52:27.801830 containerd[1485]: time="2025-02-13T15:52:27.801807784Z" level=info msg="StopPodSandbox for \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\"" Feb 13 15:52:27.801969 containerd[1485]: time="2025-02-13T15:52:27.801942668Z" level=info msg="TearDown network for sandbox \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\" successfully" Feb 13 15:52:27.801969 containerd[1485]: time="2025-02-13T15:52:27.801958217Z" level=info msg="StopPodSandbox for \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\" returns successfully" Feb 13 15:52:27.802156 containerd[1485]: time="2025-02-13T15:52:27.802083181Z" level=info msg="StopPodSandbox for \"974f2e17d153fc4e2c7a7c80c26980ce937174687900b289462434d666fa80f4\"" Feb 13 15:52:27.802180 kubelet[2674]: E0213 15:52:27.802105 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:27.802299 containerd[1485]: time="2025-02-13T15:52:27.802276313Z" level=info msg="Ensure that sandbox 974f2e17d153fc4e2c7a7c80c26980ce937174687900b289462434d666fa80f4 in task-service has been cleanup successfully" Feb 13 15:52:27.802606 containerd[1485]: time="2025-02-13T15:52:27.802584852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-45d4j,Uid:19512d1a-36c6-49de-8177-c4d469d03fc5,Namespace:kube-system,Attempt:4,}" Feb 13 15:52:27.803076 containerd[1485]: time="2025-02-13T15:52:27.803032473Z" level=info msg="TearDown network for sandbox \"974f2e17d153fc4e2c7a7c80c26980ce937174687900b289462434d666fa80f4\" successfully" Feb 13 15:52:27.803076 containerd[1485]: time="2025-02-13T15:52:27.803066536Z" level=info msg="StopPodSandbox for \"974f2e17d153fc4e2c7a7c80c26980ce937174687900b289462434d666fa80f4\" returns successfully" Feb 13 15:52:27.803483 containerd[1485]: time="2025-02-13T15:52:27.803464362Z" level=info msg="StopPodSandbox for \"26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e\"" Feb 13 15:52:27.803548 containerd[1485]: time="2025-02-13T15:52:27.803534634Z" level=info msg="TearDown network for sandbox \"26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e\" successfully" Feb 13 15:52:27.803578 containerd[1485]: time="2025-02-13T15:52:27.803546346Z" level=info msg="StopPodSandbox for \"26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e\" returns successfully" Feb 13 15:52:27.803907 containerd[1485]: time="2025-02-13T15:52:27.803785075Z" level=info msg="StopPodSandbox for 
\"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\"" Feb 13 15:52:27.803907 containerd[1485]: time="2025-02-13T15:52:27.803856629Z" level=info msg="TearDown network for sandbox \"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\" successfully" Feb 13 15:52:27.803907 containerd[1485]: time="2025-02-13T15:52:27.803869163Z" level=info msg="StopPodSandbox for \"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\" returns successfully" Feb 13 15:52:27.804097 containerd[1485]: time="2025-02-13T15:52:27.804076121Z" level=info msg="StopPodSandbox for \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\"" Feb 13 15:52:27.804166 containerd[1485]: time="2025-02-13T15:52:27.804148416Z" level=info msg="TearDown network for sandbox \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\" successfully" Feb 13 15:52:27.804166 containerd[1485]: time="2025-02-13T15:52:27.804159337Z" level=info msg="StopPodSandbox for \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\" returns successfully" Feb 13 15:52:27.804785 kubelet[2674]: I0213 15:52:27.804769 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a907415d608139846599318ab511c71cb2815fc522f90b9abae68d6a17bd3efe" Feb 13 15:52:27.805387 containerd[1485]: time="2025-02-13T15:52:27.805070587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-lpnmw,Uid:c19fe500-1919-460e-8572-964852191fc0,Namespace:calico-apiserver,Attempt:4,}" Feb 13 15:52:27.805387 containerd[1485]: time="2025-02-13T15:52:27.805092929Z" level=info msg="StopPodSandbox for \"a907415d608139846599318ab511c71cb2815fc522f90b9abae68d6a17bd3efe\"" Feb 13 15:52:27.805387 containerd[1485]: time="2025-02-13T15:52:27.805274260Z" level=info msg="Ensure that sandbox a907415d608139846599318ab511c71cb2815fc522f90b9abae68d6a17bd3efe in task-service has been cleanup successfully" Feb 13 15:52:27.805608 containerd[1485]: time="2025-02-13T15:52:27.805540349Z" level=info msg="TearDown network for sandbox \"a907415d608139846599318ab511c71cb2815fc522f90b9abae68d6a17bd3efe\" successfully" Feb 13 15:52:27.805663 containerd[1485]: time="2025-02-13T15:52:27.805650986Z" level=info msg="StopPodSandbox for \"a907415d608139846599318ab511c71cb2815fc522f90b9abae68d6a17bd3efe\" returns successfully" Feb 13 15:52:27.806115 containerd[1485]: time="2025-02-13T15:52:27.806087224Z" level=info msg="StopPodSandbox for \"ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb\"" Feb 13 15:52:27.806352 containerd[1485]: time="2025-02-13T15:52:27.806167024Z" level=info msg="TearDown network for sandbox \"ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb\" successfully" Feb 13 15:52:27.806352 containerd[1485]: time="2025-02-13T15:52:27.806175510Z" level=info msg="StopPodSandbox for \"ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb\" returns successfully" Feb 13 15:52:27.806574 containerd[1485]: time="2025-02-13T15:52:27.806548771Z" level=info msg="StopPodSandbox for \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\"" Feb 13 15:52:27.806625 containerd[1485]: time="2025-02-13T15:52:27.806612410Z" level=info msg="TearDown network for sandbox \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\" successfully" Feb 13 15:52:27.806678 containerd[1485]: time="2025-02-13T15:52:27.806623381Z" level=info msg="StopPodSandbox for \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\" returns 
successfully" Feb 13 15:52:27.807040 containerd[1485]: time="2025-02-13T15:52:27.807020876Z" level=info msg="StopPodSandbox for \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\"" Feb 13 15:52:27.807040 containerd[1485]: time="2025-02-13T15:52:27.807105255Z" level=info msg="TearDown network for sandbox \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\" successfully" Feb 13 15:52:27.807040 containerd[1485]: time="2025-02-13T15:52:27.807113962Z" level=info msg="StopPodSandbox for \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\" returns successfully" Feb 13 15:52:27.807498 containerd[1485]: time="2025-02-13T15:52:27.807462886Z" level=info msg="StopPodSandbox for \"df6f7abed86db3382c32d12030c43f2555823cf6e979dd79f3a63308cf66d6f4\"" Feb 13 15:52:27.807527 kubelet[2674]: I0213 15:52:27.807144 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df6f7abed86db3382c32d12030c43f2555823cf6e979dd79f3a63308cf66d6f4" Feb 13 15:52:27.807527 kubelet[2674]: E0213 15:52:27.807278 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:27.807606 containerd[1485]: time="2025-02-13T15:52:27.807588963Z" level=info msg="Ensure that sandbox df6f7abed86db3382c32d12030c43f2555823cf6e979dd79f3a63308cf66d6f4 in task-service has been cleanup successfully" Feb 13 15:52:27.807742 containerd[1485]: time="2025-02-13T15:52:27.807720660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mlzzh,Uid:08a0764f-6eaa-4b6b-8f68-f508a36d326a,Namespace:kube-system,Attempt:4,}" Feb 13 15:52:27.808013 containerd[1485]: time="2025-02-13T15:52:27.807989585Z" level=info msg="TearDown network for sandbox \"df6f7abed86db3382c32d12030c43f2555823cf6e979dd79f3a63308cf66d6f4\" successfully" Feb 13 15:52:27.808013 containerd[1485]: time="2025-02-13T15:52:27.808006326Z" level=info msg="StopPodSandbox for \"df6f7abed86db3382c32d12030c43f2555823cf6e979dd79f3a63308cf66d6f4\" returns successfully" Feb 13 15:52:27.808518 containerd[1485]: time="2025-02-13T15:52:27.808483070Z" level=info msg="StopPodSandbox for \"88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c\"" Feb 13 15:52:27.808598 containerd[1485]: time="2025-02-13T15:52:27.808572518Z" level=info msg="TearDown network for sandbox \"88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c\" successfully" Feb 13 15:52:27.808598 containerd[1485]: time="2025-02-13T15:52:27.808591363Z" level=info msg="StopPodSandbox for \"88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c\" returns successfully" Feb 13 15:52:27.808923 containerd[1485]: time="2025-02-13T15:52:27.808899262Z" level=info msg="StopPodSandbox for \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\"" Feb 13 15:52:27.809113 containerd[1485]: time="2025-02-13T15:52:27.808978711Z" level=info msg="TearDown network for sandbox \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\" successfully" Feb 13 15:52:27.809113 containerd[1485]: time="2025-02-13T15:52:27.808989942Z" level=info msg="StopPodSandbox for \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\" returns successfully" Feb 13 15:52:27.809329 containerd[1485]: time="2025-02-13T15:52:27.809301416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g6vd2,Uid:10d7d66d-1867-4427-ba49-4c93c2b786fc,Namespace:calico-system,Attempt:3,}" 
Feb 13 15:52:27.809825 kubelet[2674]: I0213 15:52:27.809810 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89d75f92637d84161a3a05d642197773d8fe53597397d47ba98f36f363042782" Feb 13 15:52:27.810153 containerd[1485]: time="2025-02-13T15:52:27.810119692Z" level=info msg="StopPodSandbox for \"89d75f92637d84161a3a05d642197773d8fe53597397d47ba98f36f363042782\"" Feb 13 15:52:27.810282 containerd[1485]: time="2025-02-13T15:52:27.810255256Z" level=info msg="Ensure that sandbox 89d75f92637d84161a3a05d642197773d8fe53597397d47ba98f36f363042782 in task-service has been cleanup successfully" Feb 13 15:52:27.810719 containerd[1485]: time="2025-02-13T15:52:27.810601486Z" level=info msg="TearDown network for sandbox \"89d75f92637d84161a3a05d642197773d8fe53597397d47ba98f36f363042782\" successfully" Feb 13 15:52:27.810719 containerd[1485]: time="2025-02-13T15:52:27.810617696Z" level=info msg="StopPodSandbox for \"89d75f92637d84161a3a05d642197773d8fe53597397d47ba98f36f363042782\" returns successfully" Feb 13 15:52:27.811094 containerd[1485]: time="2025-02-13T15:52:27.811061148Z" level=info msg="StopPodSandbox for \"ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a\"" Feb 13 15:52:27.811473 containerd[1485]: time="2025-02-13T15:52:27.811409282Z" level=info msg="TearDown network for sandbox \"ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a\" successfully" Feb 13 15:52:27.811473 containerd[1485]: time="2025-02-13T15:52:27.811446061Z" level=info msg="StopPodSandbox for \"ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a\" returns successfully" Feb 13 15:52:27.811758 containerd[1485]: time="2025-02-13T15:52:27.811737177Z" level=info msg="StopPodSandbox for \"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\"" Feb 13 15:52:27.811881 containerd[1485]: time="2025-02-13T15:52:27.811863354Z" level=info msg="TearDown network for sandbox \"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\" successfully" Feb 13 15:52:27.811881 containerd[1485]: time="2025-02-13T15:52:27.811878121Z" level=info msg="StopPodSandbox for \"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\" returns successfully" Feb 13 15:52:27.812088 containerd[1485]: time="2025-02-13T15:52:27.812059952Z" level=info msg="StopPodSandbox for \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\"" Feb 13 15:52:27.812117 kubelet[2674]: I0213 15:52:27.812073 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="115a7b90fd74f188605948db6c4a642fa6a2bc7b610ff5197e80bd7d90895b22" Feb 13 15:52:27.812150 containerd[1485]: time="2025-02-13T15:52:27.812131688Z" level=info msg="TearDown network for sandbox \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\" successfully" Feb 13 15:52:27.812150 containerd[1485]: time="2025-02-13T15:52:27.812141556Z" level=info msg="StopPodSandbox for \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\" returns successfully" Feb 13 15:52:27.812503 containerd[1485]: time="2025-02-13T15:52:27.812453762Z" level=info msg="StopPodSandbox for \"115a7b90fd74f188605948db6c4a642fa6a2bc7b610ff5197e80bd7d90895b22\"" Feb 13 15:52:27.812612 containerd[1485]: time="2025-02-13T15:52:27.812592712Z" level=info msg="Ensure that sandbox 115a7b90fd74f188605948db6c4a642fa6a2bc7b610ff5197e80bd7d90895b22 in task-service has been cleanup successfully" Feb 13 15:52:27.812825 containerd[1485]: time="2025-02-13T15:52:27.812802055Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d59db744-jwpsr,Uid:6af1b9f5-51e6-4450-99d8-629fc2031232,Namespace:calico-system,Attempt:4,}" Feb 13 15:52:27.813183 containerd[1485]: time="2025-02-13T15:52:27.813162492Z" level=info msg="TearDown network for sandbox \"115a7b90fd74f188605948db6c4a642fa6a2bc7b610ff5197e80bd7d90895b22\" successfully" Feb 13 15:52:27.813183 containerd[1485]: time="2025-02-13T15:52:27.813178372Z" level=info msg="StopPodSandbox for \"115a7b90fd74f188605948db6c4a642fa6a2bc7b610ff5197e80bd7d90895b22\" returns successfully" Feb 13 15:52:27.813416 containerd[1485]: time="2025-02-13T15:52:27.813390069Z" level=info msg="StopPodSandbox for \"edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e\"" Feb 13 15:52:27.813482 containerd[1485]: time="2025-02-13T15:52:27.813465550Z" level=info msg="TearDown network for sandbox \"edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e\" successfully" Feb 13 15:52:27.813482 containerd[1485]: time="2025-02-13T15:52:27.813478324Z" level=info msg="StopPodSandbox for \"edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e\" returns successfully" Feb 13 15:52:27.813665 containerd[1485]: time="2025-02-13T15:52:27.813647982Z" level=info msg="StopPodSandbox for \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\"" Feb 13 15:52:27.813725 containerd[1485]: time="2025-02-13T15:52:27.813710860Z" level=info msg="TearDown network for sandbox \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\" successfully" Feb 13 15:52:27.813725 containerd[1485]: time="2025-02-13T15:52:27.813722242Z" level=info msg="StopPodSandbox for \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\" returns successfully" Feb 13 15:52:27.813964 containerd[1485]: time="2025-02-13T15:52:27.813916848Z" level=info msg="StopPodSandbox for \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\"" Feb 13 15:52:27.814037 containerd[1485]: time="2025-02-13T15:52:27.814016454Z" level=info msg="TearDown network for sandbox \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\" successfully" Feb 13 15:52:27.814092 containerd[1485]: time="2025-02-13T15:52:27.814034759Z" level=info msg="StopPodSandbox for \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\" returns successfully" Feb 13 15:52:27.814482 containerd[1485]: time="2025-02-13T15:52:27.814448054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-q5kq7,Uid:38b0921d-4d85-4317-86e9-1adbb9d6859a,Namespace:calico-apiserver,Attempt:4,}" Feb 13 15:52:28.091017 containerd[1485]: time="2025-02-13T15:52:28.090823440Z" level=error msg="Failed to destroy network for sandbox \"60429a89152784060d7f572c9878da40086e694a3190d5f2e95e78d7c271dd87\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:28.093552 containerd[1485]: time="2025-02-13T15:52:28.091655862Z" level=error msg="encountered an error cleaning up failed sandbox \"60429a89152784060d7f572c9878da40086e694a3190d5f2e95e78d7c271dd87\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:28.093552 containerd[1485]: time="2025-02-13T15:52:28.091717608Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d59db744-jwpsr,Uid:6af1b9f5-51e6-4450-99d8-629fc2031232,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"60429a89152784060d7f572c9878da40086e694a3190d5f2e95e78d7c271dd87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:28.093669 kubelet[2674]: E0213 15:52:28.093403 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60429a89152784060d7f572c9878da40086e694a3190d5f2e95e78d7c271dd87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:28.093669 kubelet[2674]: E0213 15:52:28.093465 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60429a89152784060d7f572c9878da40086e694a3190d5f2e95e78d7c271dd87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d59db744-jwpsr" Feb 13 15:52:28.093669 kubelet[2674]: E0213 15:52:28.093488 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60429a89152784060d7f572c9878da40086e694a3190d5f2e95e78d7c271dd87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d59db744-jwpsr" Feb 13 15:52:28.093799 kubelet[2674]: E0213 15:52:28.093550 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68d59db744-jwpsr_calico-system(6af1b9f5-51e6-4450-99d8-629fc2031232)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68d59db744-jwpsr_calico-system(6af1b9f5-51e6-4450-99d8-629fc2031232)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60429a89152784060d7f572c9878da40086e694a3190d5f2e95e78d7c271dd87\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68d59db744-jwpsr" podUID="6af1b9f5-51e6-4450-99d8-629fc2031232" Feb 13 15:52:28.183882 containerd[1485]: time="2025-02-13T15:52:28.183312367Z" level=error msg="Failed to destroy network for sandbox \"6866814210b1f4e770088dd2fa2c9a53cd4703acb433bff5f71bc8017784a2d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:28.184758 containerd[1485]: time="2025-02-13T15:52:28.184281385Z" level=error msg="encountered an error cleaning up failed sandbox \"6866814210b1f4e770088dd2fa2c9a53cd4703acb433bff5f71bc8017784a2d9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:28.184758 containerd[1485]: time="2025-02-13T15:52:28.184337370Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-lpnmw,Uid:c19fe500-1919-460e-8572-964852191fc0,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"6866814210b1f4e770088dd2fa2c9a53cd4703acb433bff5f71bc8017784a2d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:28.184879 kubelet[2674]: E0213 15:52:28.184562 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6866814210b1f4e770088dd2fa2c9a53cd4703acb433bff5f71bc8017784a2d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:28.184879 kubelet[2674]: E0213 15:52:28.184620 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6866814210b1f4e770088dd2fa2c9a53cd4703acb433bff5f71bc8017784a2d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7db6857c7b-lpnmw" Feb 13 15:52:28.184879 kubelet[2674]: E0213 15:52:28.184695 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6866814210b1f4e770088dd2fa2c9a53cd4703acb433bff5f71bc8017784a2d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7db6857c7b-lpnmw" Feb 13 15:52:28.185748 kubelet[2674]: E0213 15:52:28.185543 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7db6857c7b-lpnmw_calico-apiserver(c19fe500-1919-460e-8572-964852191fc0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7db6857c7b-lpnmw_calico-apiserver(c19fe500-1919-460e-8572-964852191fc0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6866814210b1f4e770088dd2fa2c9a53cd4703acb433bff5f71bc8017784a2d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7db6857c7b-lpnmw" podUID="c19fe500-1919-460e-8572-964852191fc0" Feb 13 15:52:28.186953 containerd[1485]: time="2025-02-13T15:52:28.186801024Z" level=error msg="Failed to destroy network for sandbox \"caab72e2c1808dc4011bfaf1f2cb16fad58a3babf1202c175475afcef0063ed1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:28.187382 containerd[1485]: time="2025-02-13T15:52:28.187361024Z" level=error msg="encountered an error cleaning up failed sandbox \"caab72e2c1808dc4011bfaf1f2cb16fad58a3babf1202c175475afcef0063ed1\", marking 
sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:28.187486 containerd[1485]: time="2025-02-13T15:52:28.187469768Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-45d4j,Uid:19512d1a-36c6-49de-8177-c4d469d03fc5,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"caab72e2c1808dc4011bfaf1f2cb16fad58a3babf1202c175475afcef0063ed1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:28.187691 kubelet[2674]: E0213 15:52:28.187673 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caab72e2c1808dc4011bfaf1f2cb16fad58a3babf1202c175475afcef0063ed1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:28.187796 kubelet[2674]: E0213 15:52:28.187785 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caab72e2c1808dc4011bfaf1f2cb16fad58a3babf1202c175475afcef0063ed1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-45d4j" Feb 13 15:52:28.187883 kubelet[2674]: E0213 15:52:28.187874 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caab72e2c1808dc4011bfaf1f2cb16fad58a3babf1202c175475afcef0063ed1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-45d4j" Feb 13 15:52:28.188093 kubelet[2674]: E0213 15:52:28.188022 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-45d4j_kube-system(19512d1a-36c6-49de-8177-c4d469d03fc5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-45d4j_kube-system(19512d1a-36c6-49de-8177-c4d469d03fc5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"caab72e2c1808dc4011bfaf1f2cb16fad58a3babf1202c175475afcef0063ed1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-45d4j" podUID="19512d1a-36c6-49de-8177-c4d469d03fc5" Feb 13 15:52:28.198365 containerd[1485]: time="2025-02-13T15:52:28.198339606Z" level=error msg="Failed to destroy network for sandbox \"81219097a27dcb504a3f76789f134167e37f8265b25d724bdd46b64457cd0fd0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:28.202010 containerd[1485]: time="2025-02-13T15:52:28.201961632Z" level=error msg="encountered an error cleaning up failed sandbox 
\"81219097a27dcb504a3f76789f134167e37f8265b25d724bdd46b64457cd0fd0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:28.202078 containerd[1485]: time="2025-02-13T15:52:28.202057081Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mlzzh,Uid:08a0764f-6eaa-4b6b-8f68-f508a36d326a,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"81219097a27dcb504a3f76789f134167e37f8265b25d724bdd46b64457cd0fd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:28.203380 kubelet[2674]: E0213 15:52:28.202361 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81219097a27dcb504a3f76789f134167e37f8265b25d724bdd46b64457cd0fd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:28.203380 kubelet[2674]: E0213 15:52:28.202417 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81219097a27dcb504a3f76789f134167e37f8265b25d724bdd46b64457cd0fd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mlzzh" Feb 13 15:52:28.203380 kubelet[2674]: E0213 15:52:28.202438 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81219097a27dcb504a3f76789f134167e37f8265b25d724bdd46b64457cd0fd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mlzzh" Feb 13 15:52:28.203500 kubelet[2674]: E0213 15:52:28.202479 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-mlzzh_kube-system(08a0764f-6eaa-4b6b-8f68-f508a36d326a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-mlzzh_kube-system(08a0764f-6eaa-4b6b-8f68-f508a36d326a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"81219097a27dcb504a3f76789f134167e37f8265b25d724bdd46b64457cd0fd0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-mlzzh" podUID="08a0764f-6eaa-4b6b-8f68-f508a36d326a" Feb 13 15:52:28.205054 systemd[1]: run-netns-cni\x2d9367345e\x2d264e\x2dc52a\x2dc253\x2d659c0f5a91c4.mount: Deactivated successfully. Feb 13 15:52:28.205158 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-115a7b90fd74f188605948db6c4a642fa6a2bc7b610ff5197e80bd7d90895b22-shm.mount: Deactivated successfully. Feb 13 15:52:28.205233 systemd[1]: run-netns-cni\x2d4452de60\x2d3b18\x2d031a\x2d1ae6\x2d64153fc7111f.mount: Deactivated successfully. 
Feb 13 15:52:28.205305 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df6f7abed86db3382c32d12030c43f2555823cf6e979dd79f3a63308cf66d6f4-shm.mount: Deactivated successfully. Feb 13 15:52:28.213444 containerd[1485]: time="2025-02-13T15:52:28.213399867Z" level=error msg="Failed to destroy network for sandbox \"a11d2ce61fc739661fe63087cbdf4f19cb71af635a90addc6a841f53edbee800\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:28.213782 containerd[1485]: time="2025-02-13T15:52:28.213749062Z" level=error msg="encountered an error cleaning up failed sandbox \"a11d2ce61fc739661fe63087cbdf4f19cb71af635a90addc6a841f53edbee800\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:28.213822 containerd[1485]: time="2025-02-13T15:52:28.213807291Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g6vd2,Uid:10d7d66d-1867-4427-ba49-4c93c2b786fc,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"a11d2ce61fc739661fe63087cbdf4f19cb71af635a90addc6a841f53edbee800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:28.215461 kubelet[2674]: E0213 15:52:28.215267 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a11d2ce61fc739661fe63087cbdf4f19cb71af635a90addc6a841f53edbee800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:28.215461 kubelet[2674]: E0213 15:52:28.215328 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a11d2ce61fc739661fe63087cbdf4f19cb71af635a90addc6a841f53edbee800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g6vd2" Feb 13 15:52:28.215461 kubelet[2674]: E0213 15:52:28.215353 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a11d2ce61fc739661fe63087cbdf4f19cb71af635a90addc6a841f53edbee800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g6vd2" Feb 13 15:52:28.215631 kubelet[2674]: E0213 15:52:28.215406 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-g6vd2_calico-system(10d7d66d-1867-4427-ba49-4c93c2b786fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-g6vd2_calico-system(10d7d66d-1867-4427-ba49-4c93c2b786fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a11d2ce61fc739661fe63087cbdf4f19cb71af635a90addc6a841f53edbee800\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g6vd2" podUID="10d7d66d-1867-4427-ba49-4c93c2b786fc" Feb 13 15:52:28.217397 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a11d2ce61fc739661fe63087cbdf4f19cb71af635a90addc6a841f53edbee800-shm.mount: Deactivated successfully. Feb 13 15:52:28.238624 containerd[1485]: time="2025-02-13T15:52:28.238571782Z" level=error msg="Failed to destroy network for sandbox \"7b202e0a8250435728e02f8bde94c825ab1ad61219ed93302f1468cb1f73698f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:28.240351 containerd[1485]: time="2025-02-13T15:52:28.240305275Z" level=error msg="encountered an error cleaning up failed sandbox \"7b202e0a8250435728e02f8bde94c825ab1ad61219ed93302f1468cb1f73698f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:28.240498 containerd[1485]: time="2025-02-13T15:52:28.240363214Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-q5kq7,Uid:38b0921d-4d85-4317-86e9-1adbb9d6859a,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"7b202e0a8250435728e02f8bde94c825ab1ad61219ed93302f1468cb1f73698f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:28.241564 kubelet[2674]: E0213 15:52:28.241537 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b202e0a8250435728e02f8bde94c825ab1ad61219ed93302f1468cb1f73698f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:28.241623 kubelet[2674]: E0213 15:52:28.241590 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b202e0a8250435728e02f8bde94c825ab1ad61219ed93302f1468cb1f73698f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7db6857c7b-q5kq7" Feb 13 15:52:28.241623 kubelet[2674]: E0213 15:52:28.241611 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b202e0a8250435728e02f8bde94c825ab1ad61219ed93302f1468cb1f73698f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7db6857c7b-q5kq7" Feb 13 15:52:28.241676 kubelet[2674]: E0213 15:52:28.241670 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7db6857c7b-q5kq7_calico-apiserver(38b0921d-4d85-4317-86e9-1adbb9d6859a)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7db6857c7b-q5kq7_calico-apiserver(38b0921d-4d85-4317-86e9-1adbb9d6859a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b202e0a8250435728e02f8bde94c825ab1ad61219ed93302f1468cb1f73698f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7db6857c7b-q5kq7" podUID="38b0921d-4d85-4317-86e9-1adbb9d6859a" Feb 13 15:52:28.241667 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7b202e0a8250435728e02f8bde94c825ab1ad61219ed93302f1468cb1f73698f-shm.mount: Deactivated successfully. Feb 13 15:52:28.816758 kubelet[2674]: I0213 15:52:28.816726 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60429a89152784060d7f572c9878da40086e694a3190d5f2e95e78d7c271dd87" Feb 13 15:52:28.817418 containerd[1485]: time="2025-02-13T15:52:28.817381208Z" level=info msg="StopPodSandbox for \"60429a89152784060d7f572c9878da40086e694a3190d5f2e95e78d7c271dd87\"" Feb 13 15:52:28.817695 containerd[1485]: time="2025-02-13T15:52:28.817602303Z" level=info msg="Ensure that sandbox 60429a89152784060d7f572c9878da40086e694a3190d5f2e95e78d7c271dd87 in task-service has been cleanup successfully" Feb 13 15:52:28.819066 containerd[1485]: time="2025-02-13T15:52:28.818488045Z" level=info msg="TearDown network for sandbox \"60429a89152784060d7f572c9878da40086e694a3190d5f2e95e78d7c271dd87\" successfully" Feb 13 15:52:28.819066 containerd[1485]: time="2025-02-13T15:52:28.818517640Z" level=info msg="StopPodSandbox for \"60429a89152784060d7f572c9878da40086e694a3190d5f2e95e78d7c271dd87\" returns successfully" Feb 13 15:52:28.819583 containerd[1485]: time="2025-02-13T15:52:28.819532275Z" level=info msg="StopPodSandbox for \"89d75f92637d84161a3a05d642197773d8fe53597397d47ba98f36f363042782\"" Feb 13 15:52:28.819638 containerd[1485]: time="2025-02-13T15:52:28.819616533Z" level=info msg="TearDown network for sandbox \"89d75f92637d84161a3a05d642197773d8fe53597397d47ba98f36f363042782\" successfully" Feb 13 15:52:28.819638 containerd[1485]: time="2025-02-13T15:52:28.819626923Z" level=info msg="StopPodSandbox for \"89d75f92637d84161a3a05d642197773d8fe53597397d47ba98f36f363042782\" returns successfully" Feb 13 15:52:28.820794 systemd[1]: run-netns-cni\x2db4d62af7\x2dd71b\x2df2ea\x2dc69b\x2da15ceaef944a.mount: Deactivated successfully. 
Feb 13 15:52:28.821829 containerd[1485]: time="2025-02-13T15:52:28.821779301Z" level=info msg="StopPodSandbox for \"ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a\"" Feb 13 15:52:28.822010 containerd[1485]: time="2025-02-13T15:52:28.821927018Z" level=info msg="TearDown network for sandbox \"ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a\" successfully" Feb 13 15:52:28.822010 containerd[1485]: time="2025-02-13T15:52:28.821948839Z" level=info msg="StopPodSandbox for \"ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a\" returns successfully" Feb 13 15:52:28.822637 containerd[1485]: time="2025-02-13T15:52:28.822498150Z" level=info msg="StopPodSandbox for \"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\"" Feb 13 15:52:28.822637 containerd[1485]: time="2025-02-13T15:52:28.822577148Z" level=info msg="TearDown network for sandbox \"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\" successfully" Feb 13 15:52:28.822637 containerd[1485]: time="2025-02-13T15:52:28.822587037Z" level=info msg="StopPodSandbox for \"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\" returns successfully" Feb 13 15:52:28.822839 kubelet[2674]: I0213 15:52:28.822803 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b202e0a8250435728e02f8bde94c825ab1ad61219ed93302f1468cb1f73698f" Feb 13 15:52:28.823696 containerd[1485]: time="2025-02-13T15:52:28.823666733Z" level=info msg="StopPodSandbox for \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\"" Feb 13 15:52:28.823800 containerd[1485]: time="2025-02-13T15:52:28.823779215Z" level=info msg="TearDown network for sandbox \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\" successfully" Feb 13 15:52:28.823842 containerd[1485]: time="2025-02-13T15:52:28.823796457Z" level=info msg="StopPodSandbox for \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\" returns successfully" Feb 13 15:52:28.824058 containerd[1485]: time="2025-02-13T15:52:28.824010639Z" level=info msg="StopPodSandbox for \"7b202e0a8250435728e02f8bde94c825ab1ad61219ed93302f1468cb1f73698f\"" Feb 13 15:52:28.824248 containerd[1485]: time="2025-02-13T15:52:28.824225892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d59db744-jwpsr,Uid:6af1b9f5-51e6-4450-99d8-629fc2031232,Namespace:calico-system,Attempt:5,}" Feb 13 15:52:28.824379 containerd[1485]: time="2025-02-13T15:52:28.824235881Z" level=info msg="Ensure that sandbox 7b202e0a8250435728e02f8bde94c825ab1ad61219ed93302f1468cb1f73698f in task-service has been cleanup successfully" Feb 13 15:52:28.825215 containerd[1485]: time="2025-02-13T15:52:28.825121764Z" level=info msg="TearDown network for sandbox \"7b202e0a8250435728e02f8bde94c825ab1ad61219ed93302f1468cb1f73698f\" successfully" Feb 13 15:52:28.825215 containerd[1485]: time="2025-02-13T15:52:28.825149435Z" level=info msg="StopPodSandbox for \"7b202e0a8250435728e02f8bde94c825ab1ad61219ed93302f1468cb1f73698f\" returns successfully" Feb 13 15:52:28.825804 containerd[1485]: time="2025-02-13T15:52:28.825608878Z" level=info msg="StopPodSandbox for \"115a7b90fd74f188605948db6c4a642fa6a2bc7b610ff5197e80bd7d90895b22\"" Feb 13 15:52:28.825804 containerd[1485]: time="2025-02-13T15:52:28.825713324Z" level=info msg="TearDown network for sandbox \"115a7b90fd74f188605948db6c4a642fa6a2bc7b610ff5197e80bd7d90895b22\" successfully" Feb 13 15:52:28.825804 containerd[1485]: time="2025-02-13T15:52:28.825723934Z" 
level=info msg="StopPodSandbox for \"115a7b90fd74f188605948db6c4a642fa6a2bc7b610ff5197e80bd7d90895b22\" returns successfully" Feb 13 15:52:28.826276 containerd[1485]: time="2025-02-13T15:52:28.826246624Z" level=info msg="StopPodSandbox for \"edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e\"" Feb 13 15:52:28.826352 containerd[1485]: time="2025-02-13T15:52:28.826333698Z" level=info msg="TearDown network for sandbox \"edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e\" successfully" Feb 13 15:52:28.826352 containerd[1485]: time="2025-02-13T15:52:28.826349318Z" level=info msg="StopPodSandbox for \"edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e\" returns successfully" Feb 13 15:52:28.828779 kubelet[2674]: I0213 15:52:28.826535 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caab72e2c1808dc4011bfaf1f2cb16fad58a3babf1202c175475afcef0063ed1" Feb 13 15:52:28.829198 containerd[1485]: time="2025-02-13T15:52:28.828833679Z" level=info msg="StopPodSandbox for \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\"" Feb 13 15:52:28.829198 containerd[1485]: time="2025-02-13T15:52:28.828944548Z" level=info msg="TearDown network for sandbox \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\" successfully" Feb 13 15:52:28.829198 containerd[1485]: time="2025-02-13T15:52:28.828958003Z" level=info msg="StopPodSandbox for \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\" returns successfully" Feb 13 15:52:28.829284 containerd[1485]: time="2025-02-13T15:52:28.829251634Z" level=info msg="StopPodSandbox for \"caab72e2c1808dc4011bfaf1f2cb16fad58a3babf1202c175475afcef0063ed1\"" Feb 13 15:52:28.829414 containerd[1485]: time="2025-02-13T15:52:28.829388570Z" level=info msg="Ensure that sandbox caab72e2c1808dc4011bfaf1f2cb16fad58a3babf1202c175475afcef0063ed1 in task-service has been cleanup successfully" Feb 13 15:52:28.829625 containerd[1485]: time="2025-02-13T15:52:28.829597021Z" level=info msg="StopPodSandbox for \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\"" Feb 13 15:52:28.829781 containerd[1485]: time="2025-02-13T15:52:28.829757784Z" level=info msg="TearDown network for sandbox \"caab72e2c1808dc4011bfaf1f2cb16fad58a3babf1202c175475afcef0063ed1\" successfully" Feb 13 15:52:28.829781 containerd[1485]: time="2025-02-13T15:52:28.829775617Z" level=info msg="StopPodSandbox for \"caab72e2c1808dc4011bfaf1f2cb16fad58a3babf1202c175475afcef0063ed1\" returns successfully" Feb 13 15:52:28.830340 containerd[1485]: time="2025-02-13T15:52:28.830271086Z" level=info msg="StopPodSandbox for \"ece0ba20c877e7e859caada2183dd1ddc532d1528c0bed4bc30b53e69cf4b158\"" Feb 13 15:52:28.830391 containerd[1485]: time="2025-02-13T15:52:28.830376254Z" level=info msg="TearDown network for sandbox \"ece0ba20c877e7e859caada2183dd1ddc532d1528c0bed4bc30b53e69cf4b158\" successfully" Feb 13 15:52:28.830391 containerd[1485]: time="2025-02-13T15:52:28.830386773Z" level=info msg="StopPodSandbox for \"ece0ba20c877e7e859caada2183dd1ddc532d1528c0bed4bc30b53e69cf4b158\" returns successfully" Feb 13 15:52:28.830443 containerd[1485]: time="2025-02-13T15:52:28.830280173Z" level=info msg="TearDown network for sandbox \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\" successfully" Feb 13 15:52:28.831567 containerd[1485]: time="2025-02-13T15:52:28.831495154Z" level=info msg="StopPodSandbox for \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\" returns successfully" Feb 13 
15:52:28.831666 containerd[1485]: time="2025-02-13T15:52:28.831597516Z" level=info msg="StopPodSandbox for \"b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70\"" Feb 13 15:52:28.831693 containerd[1485]: time="2025-02-13T15:52:28.831672897Z" level=info msg="TearDown network for sandbox \"b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70\" successfully" Feb 13 15:52:28.831693 containerd[1485]: time="2025-02-13T15:52:28.831682305Z" level=info msg="StopPodSandbox for \"b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70\" returns successfully" Feb 13 15:52:28.833506 containerd[1485]: time="2025-02-13T15:52:28.833230039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-q5kq7,Uid:38b0921d-4d85-4317-86e9-1adbb9d6859a,Namespace:calico-apiserver,Attempt:5,}" Feb 13 15:52:28.833506 containerd[1485]: time="2025-02-13T15:52:28.833333874Z" level=info msg="StopPodSandbox for \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\"" Feb 13 15:52:28.833506 containerd[1485]: time="2025-02-13T15:52:28.833431347Z" level=info msg="TearDown network for sandbox \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\" successfully" Feb 13 15:52:28.833506 containerd[1485]: time="2025-02-13T15:52:28.833445393Z" level=info msg="StopPodSandbox for \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\" returns successfully" Feb 13 15:52:28.833816 containerd[1485]: time="2025-02-13T15:52:28.833774400Z" level=info msg="StopPodSandbox for \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\"" Feb 13 15:52:28.833937 containerd[1485]: time="2025-02-13T15:52:28.833881491Z" level=info msg="TearDown network for sandbox \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\" successfully" Feb 13 15:52:28.833937 containerd[1485]: time="2025-02-13T15:52:28.833893233Z" level=info msg="StopPodSandbox for \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\" returns successfully" Feb 13 15:52:28.834092 kubelet[2674]: E0213 15:52:28.834069 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:28.834699 containerd[1485]: time="2025-02-13T15:52:28.834659872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-45d4j,Uid:19512d1a-36c6-49de-8177-c4d469d03fc5,Namespace:kube-system,Attempt:5,}" Feb 13 15:52:28.834935 kubelet[2674]: I0213 15:52:28.834895 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6866814210b1f4e770088dd2fa2c9a53cd4703acb433bff5f71bc8017784a2d9" Feb 13 15:52:28.835321 containerd[1485]: time="2025-02-13T15:52:28.835297048Z" level=info msg="StopPodSandbox for \"6866814210b1f4e770088dd2fa2c9a53cd4703acb433bff5f71bc8017784a2d9\"" Feb 13 15:52:28.835559 containerd[1485]: time="2025-02-13T15:52:28.835540434Z" level=info msg="Ensure that sandbox 6866814210b1f4e770088dd2fa2c9a53cd4703acb433bff5f71bc8017784a2d9 in task-service has been cleanup successfully" Feb 13 15:52:28.835704 containerd[1485]: time="2025-02-13T15:52:28.835686709Z" level=info msg="TearDown network for sandbox \"6866814210b1f4e770088dd2fa2c9a53cd4703acb433bff5f71bc8017784a2d9\" successfully" Feb 13 15:52:28.835704 containerd[1485]: time="2025-02-13T15:52:28.835700875Z" level=info msg="StopPodSandbox for \"6866814210b1f4e770088dd2fa2c9a53cd4703acb433bff5f71bc8017784a2d9\" returns successfully" Feb 13 
15:52:28.837730 containerd[1485]: time="2025-02-13T15:52:28.837466879Z" level=info msg="StopPodSandbox for \"974f2e17d153fc4e2c7a7c80c26980ce937174687900b289462434d666fa80f4\"" Feb 13 15:52:28.837730 containerd[1485]: time="2025-02-13T15:52:28.837541290Z" level=info msg="TearDown network for sandbox \"974f2e17d153fc4e2c7a7c80c26980ce937174687900b289462434d666fa80f4\" successfully" Feb 13 15:52:28.837730 containerd[1485]: time="2025-02-13T15:52:28.837552561Z" level=info msg="StopPodSandbox for \"974f2e17d153fc4e2c7a7c80c26980ce937174687900b289462434d666fa80f4\" returns successfully" Feb 13 15:52:28.838193 containerd[1485]: time="2025-02-13T15:52:28.837848395Z" level=info msg="StopPodSandbox for \"26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e\"" Feb 13 15:52:28.838193 containerd[1485]: time="2025-02-13T15:52:28.837930590Z" level=info msg="TearDown network for sandbox \"26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e\" successfully" Feb 13 15:52:28.838193 containerd[1485]: time="2025-02-13T15:52:28.837939506Z" level=info msg="StopPodSandbox for \"26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e\" returns successfully" Feb 13 15:52:28.838406 containerd[1485]: time="2025-02-13T15:52:28.838235321Z" level=info msg="StopPodSandbox for \"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\"" Feb 13 15:52:28.838406 containerd[1485]: time="2025-02-13T15:52:28.838297398Z" level=info msg="TearDown network for sandbox \"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\" successfully" Feb 13 15:52:28.838406 containerd[1485]: time="2025-02-13T15:52:28.838305603Z" level=info msg="StopPodSandbox for \"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\" returns successfully" Feb 13 15:52:28.838948 containerd[1485]: time="2025-02-13T15:52:28.838919104Z" level=info msg="StopPodSandbox for \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\"" Feb 13 15:52:28.839034 containerd[1485]: time="2025-02-13T15:52:28.839004154Z" level=info msg="TearDown network for sandbox \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\" successfully" Feb 13 15:52:28.839034 containerd[1485]: time="2025-02-13T15:52:28.839018621Z" level=info msg="StopPodSandbox for \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\" returns successfully" Feb 13 15:52:28.839330 kubelet[2674]: I0213 15:52:28.839310 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a11d2ce61fc739661fe63087cbdf4f19cb71af635a90addc6a841f53edbee800" Feb 13 15:52:28.845852 containerd[1485]: time="2025-02-13T15:52:28.845820395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-lpnmw,Uid:c19fe500-1919-460e-8572-964852191fc0,Namespace:calico-apiserver,Attempt:5,}" Feb 13 15:52:28.951916 containerd[1485]: time="2025-02-13T15:52:28.951515864Z" level=info msg="StopPodSandbox for \"a11d2ce61fc739661fe63087cbdf4f19cb71af635a90addc6a841f53edbee800\"" Feb 13 15:52:28.951916 containerd[1485]: time="2025-02-13T15:52:28.951753921Z" level=info msg="Ensure that sandbox a11d2ce61fc739661fe63087cbdf4f19cb71af635a90addc6a841f53edbee800 in task-service has been cleanup successfully" Feb 13 15:52:28.952140 containerd[1485]: time="2025-02-13T15:52:28.952121802Z" level=info msg="TearDown network for sandbox \"a11d2ce61fc739661fe63087cbdf4f19cb71af635a90addc6a841f53edbee800\" successfully" Feb 13 15:52:28.952140 containerd[1485]: time="2025-02-13T15:52:28.952136860Z" level=info 
msg="StopPodSandbox for \"a11d2ce61fc739661fe63087cbdf4f19cb71af635a90addc6a841f53edbee800\" returns successfully" Feb 13 15:52:28.952682 containerd[1485]: time="2025-02-13T15:52:28.952651204Z" level=info msg="StopPodSandbox for \"df6f7abed86db3382c32d12030c43f2555823cf6e979dd79f3a63308cf66d6f4\"" Feb 13 15:52:28.952773 containerd[1485]: time="2025-02-13T15:52:28.952749470Z" level=info msg="TearDown network for sandbox \"df6f7abed86db3382c32d12030c43f2555823cf6e979dd79f3a63308cf66d6f4\" successfully" Feb 13 15:52:28.952773 containerd[1485]: time="2025-02-13T15:52:28.952769577Z" level=info msg="StopPodSandbox for \"df6f7abed86db3382c32d12030c43f2555823cf6e979dd79f3a63308cf66d6f4\" returns successfully" Feb 13 15:52:28.953311 containerd[1485]: time="2025-02-13T15:52:28.953090199Z" level=info msg="StopPodSandbox for \"88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c\"" Feb 13 15:52:28.953311 containerd[1485]: time="2025-02-13T15:52:28.953174898Z" level=info msg="TearDown network for sandbox \"88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c\" successfully" Feb 13 15:52:28.953311 containerd[1485]: time="2025-02-13T15:52:28.953184676Z" level=info msg="StopPodSandbox for \"88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c\" returns successfully" Feb 13 15:52:28.956672 containerd[1485]: time="2025-02-13T15:52:28.956635752Z" level=info msg="StopPodSandbox for \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\"" Feb 13 15:52:28.956760 containerd[1485]: time="2025-02-13T15:52:28.956734617Z" level=info msg="TearDown network for sandbox \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\" successfully" Feb 13 15:52:28.957087 containerd[1485]: time="2025-02-13T15:52:28.956959530Z" level=info msg="StopPodSandbox for \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\" returns successfully" Feb 13 15:52:28.957132 kubelet[2674]: I0213 15:52:28.957006 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81219097a27dcb504a3f76789f134167e37f8265b25d724bdd46b64457cd0fd0" Feb 13 15:52:28.958454 containerd[1485]: time="2025-02-13T15:52:28.958195128Z" level=info msg="StopPodSandbox for \"81219097a27dcb504a3f76789f134167e37f8265b25d724bdd46b64457cd0fd0\"" Feb 13 15:52:28.958454 containerd[1485]: time="2025-02-13T15:52:28.958322277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g6vd2,Uid:10d7d66d-1867-4427-ba49-4c93c2b786fc,Namespace:calico-system,Attempt:4,}" Feb 13 15:52:28.958454 containerd[1485]: time="2025-02-13T15:52:28.958407306Z" level=info msg="Ensure that sandbox 81219097a27dcb504a3f76789f134167e37f8265b25d724bdd46b64457cd0fd0 in task-service has been cleanup successfully" Feb 13 15:52:28.958646 containerd[1485]: time="2025-02-13T15:52:28.958613994Z" level=info msg="TearDown network for sandbox \"81219097a27dcb504a3f76789f134167e37f8265b25d724bdd46b64457cd0fd0\" successfully" Feb 13 15:52:28.958646 containerd[1485]: time="2025-02-13T15:52:28.958628783Z" level=info msg="StopPodSandbox for \"81219097a27dcb504a3f76789f134167e37f8265b25d724bdd46b64457cd0fd0\" returns successfully" Feb 13 15:52:28.959402 containerd[1485]: time="2025-02-13T15:52:28.959157294Z" level=info msg="StopPodSandbox for \"a907415d608139846599318ab511c71cb2815fc522f90b9abae68d6a17bd3efe\"" Feb 13 15:52:28.975185 containerd[1485]: time="2025-02-13T15:52:28.959493815Z" level=info msg="TearDown network for sandbox 
\"a907415d608139846599318ab511c71cb2815fc522f90b9abae68d6a17bd3efe\" successfully" Feb 13 15:52:28.975185 containerd[1485]: time="2025-02-13T15:52:28.975179370Z" level=info msg="StopPodSandbox for \"a907415d608139846599318ab511c71cb2815fc522f90b9abae68d6a17bd3efe\" returns successfully" Feb 13 15:52:28.975594 containerd[1485]: time="2025-02-13T15:52:28.975568059Z" level=info msg="StopPodSandbox for \"ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb\"" Feb 13 15:52:28.975700 containerd[1485]: time="2025-02-13T15:52:28.975678846Z" level=info msg="TearDown network for sandbox \"ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb\" successfully" Feb 13 15:52:28.975700 containerd[1485]: time="2025-02-13T15:52:28.975694135Z" level=info msg="StopPodSandbox for \"ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb\" returns successfully" Feb 13 15:52:28.975976 containerd[1485]: time="2025-02-13T15:52:28.975918036Z" level=info msg="StopPodSandbox for \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\"" Feb 13 15:52:28.976011 containerd[1485]: time="2025-02-13T15:52:28.975989941Z" level=info msg="TearDown network for sandbox \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\" successfully" Feb 13 15:52:28.976011 containerd[1485]: time="2025-02-13T15:52:28.975998998Z" level=info msg="StopPodSandbox for \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\" returns successfully" Feb 13 15:52:28.976319 containerd[1485]: time="2025-02-13T15:52:28.976263764Z" level=info msg="StopPodSandbox for \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\"" Feb 13 15:52:28.976387 containerd[1485]: time="2025-02-13T15:52:28.976370505Z" level=info msg="TearDown network for sandbox \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\" successfully" Feb 13 15:52:28.976419 containerd[1485]: time="2025-02-13T15:52:28.976385803Z" level=info msg="StopPodSandbox for \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\" returns successfully" Feb 13 15:52:28.978073 kubelet[2674]: E0213 15:52:28.976552 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:28.978126 containerd[1485]: time="2025-02-13T15:52:28.976790143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mlzzh,Uid:08a0764f-6eaa-4b6b-8f68-f508a36d326a,Namespace:kube-system,Attempt:5,}" Feb 13 15:52:29.200310 systemd[1]: run-netns-cni\x2dfe2b87e2\x2d77ce\x2d17ab\x2dd4ac\x2d3b5bcb37187f.mount: Deactivated successfully. Feb 13 15:52:29.200414 systemd[1]: run-netns-cni\x2dc45df4df\x2d4af1\x2ded5c\x2d005e\x2da6ea74db16dc.mount: Deactivated successfully. Feb 13 15:52:29.200488 systemd[1]: run-netns-cni\x2d37571418\x2d4ff1\x2d7b2c\x2dae04\x2dabbb9df88285.mount: Deactivated successfully. Feb 13 15:52:29.200557 systemd[1]: run-netns-cni\x2d4111b2e7\x2d2795\x2d91ea\x2dc955\x2d2ff25884e315.mount: Deactivated successfully. Feb 13 15:52:29.200627 systemd[1]: run-netns-cni\x2de4c7a962\x2dc287\x2d0ec3\x2d52bb\x2dcb05f4541f0b.mount: Deactivated successfully. Feb 13 15:52:29.838375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount555349729.mount: Deactivated successfully. 
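The systemd entries just above (run-netns-cni\x2d….mount: Deactivated successfully) record the per-sandbox network-namespace bind mounts under /run/netns/cni-<id> being released as each stale sandbox is torn down; the unit names encode those paths directly. Purely as an illustration (not part of the tooling in this log), a minimal Go sketch that lists any such namespaces still present on the node could look like this; the /run/netns location and the cni- prefix are inferred from the unit names above:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// /run/netns/cni-<id> mirrors the systemd mount units
	// (run-netns-cni\x2d<id>.mount) that were deactivated in the log above.
	entries, err := os.ReadDir("/run/netns")
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read /run/netns:", err)
		os.Exit(1)
	}
	for _, e := range entries {
		if strings.HasPrefix(e.Name(), "cni-") {
			fmt.Println("sandbox netns still mounted:", "/run/netns/"+e.Name())
		}
	}
}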
Feb 13 15:52:30.935634 containerd[1485]: time="2025-02-13T15:52:30.935567342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:30.952110 containerd[1485]: time="2025-02-13T15:52:30.951965993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 15:52:30.962068 containerd[1485]: time="2025-02-13T15:52:30.961574994Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:30.992575 containerd[1485]: time="2025-02-13T15:52:30.992243063Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.269417945s" Feb 13 15:52:30.992575 containerd[1485]: time="2025-02-13T15:52:30.992292315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 15:52:30.992575 containerd[1485]: time="2025-02-13T15:52:30.992379328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:31.006392 containerd[1485]: time="2025-02-13T15:52:31.006350014Z" level=info msg="CreateContainer within sandbox \"8e9d30cf54d793b42662fb8d74ba7017e3e726a6910ba3b6374ec44ef090e8d9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 15:52:31.033159 containerd[1485]: time="2025-02-13T15:52:31.033108738Z" level=info msg="CreateContainer within sandbox \"8e9d30cf54d793b42662fb8d74ba7017e3e726a6910ba3b6374ec44ef090e8d9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"95e6f99a71d9795bb8108bb5e0c11ed14e5c0796202cbcab4321fefe501221d5\"" Feb 13 15:52:31.036610 containerd[1485]: time="2025-02-13T15:52:31.036543973Z" level=info msg="StartContainer for \"95e6f99a71d9795bb8108bb5e0c11ed14e5c0796202cbcab4321fefe501221d5\"" Feb 13 15:52:31.078742 containerd[1485]: time="2025-02-13T15:52:31.078590603Z" level=error msg="Failed to destroy network for sandbox \"2fac96938ffde6a93e81d8d00d323b4299dc6bbdddb08abfb5ce8cd0d008b3d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.079274 containerd[1485]: time="2025-02-13T15:52:31.079179178Z" level=error msg="encountered an error cleaning up failed sandbox \"2fac96938ffde6a93e81d8d00d323b4299dc6bbdddb08abfb5ce8cd0d008b3d9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.079274 containerd[1485]: time="2025-02-13T15:52:31.079233139Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-45d4j,Uid:19512d1a-36c6-49de-8177-c4d469d03fc5,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox 
\"2fac96938ffde6a93e81d8d00d323b4299dc6bbdddb08abfb5ce8cd0d008b3d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.079804 containerd[1485]: time="2025-02-13T15:52:31.079770888Z" level=error msg="Failed to destroy network for sandbox \"e43fb613ca334456eb8a5b3e016eae6a232a97a1ce0ec5322588fdb82bf4383b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.079849 kubelet[2674]: E0213 15:52:31.079824 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fac96938ffde6a93e81d8d00d323b4299dc6bbdddb08abfb5ce8cd0d008b3d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.080212 kubelet[2674]: E0213 15:52:31.079884 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fac96938ffde6a93e81d8d00d323b4299dc6bbdddb08abfb5ce8cd0d008b3d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-45d4j" Feb 13 15:52:31.080212 kubelet[2674]: E0213 15:52:31.079905 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fac96938ffde6a93e81d8d00d323b4299dc6bbdddb08abfb5ce8cd0d008b3d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-45d4j" Feb 13 15:52:31.080212 kubelet[2674]: E0213 15:52:31.079952 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-45d4j_kube-system(19512d1a-36c6-49de-8177-c4d469d03fc5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-45d4j_kube-system(19512d1a-36c6-49de-8177-c4d469d03fc5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2fac96938ffde6a93e81d8d00d323b4299dc6bbdddb08abfb5ce8cd0d008b3d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-45d4j" podUID="19512d1a-36c6-49de-8177-c4d469d03fc5" Feb 13 15:52:31.081159 containerd[1485]: time="2025-02-13T15:52:31.080656219Z" level=error msg="encountered an error cleaning up failed sandbox \"e43fb613ca334456eb8a5b3e016eae6a232a97a1ce0ec5322588fdb82bf4383b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.081159 containerd[1485]: time="2025-02-13T15:52:31.080729727Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-68d59db744-jwpsr,Uid:6af1b9f5-51e6-4450-99d8-629fc2031232,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"e43fb613ca334456eb8a5b3e016eae6a232a97a1ce0ec5322588fdb82bf4383b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.081770 kubelet[2674]: E0213 15:52:31.081745 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e43fb613ca334456eb8a5b3e016eae6a232a97a1ce0ec5322588fdb82bf4383b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.081846 kubelet[2674]: E0213 15:52:31.081785 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e43fb613ca334456eb8a5b3e016eae6a232a97a1ce0ec5322588fdb82bf4383b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d59db744-jwpsr" Feb 13 15:52:31.081846 kubelet[2674]: E0213 15:52:31.081807 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e43fb613ca334456eb8a5b3e016eae6a232a97a1ce0ec5322588fdb82bf4383b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d59db744-jwpsr" Feb 13 15:52:31.082212 kubelet[2674]: E0213 15:52:31.081850 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68d59db744-jwpsr_calico-system(6af1b9f5-51e6-4450-99d8-629fc2031232)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68d59db744-jwpsr_calico-system(6af1b9f5-51e6-4450-99d8-629fc2031232)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e43fb613ca334456eb8a5b3e016eae6a232a97a1ce0ec5322588fdb82bf4383b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68d59db744-jwpsr" podUID="6af1b9f5-51e6-4450-99d8-629fc2031232" Feb 13 15:52:31.087673 containerd[1485]: time="2025-02-13T15:52:31.087617852Z" level=error msg="Failed to destroy network for sandbox \"529adf14b2e5a6d88d507a8c413eef03858081a1b5e72550a2d1564216663e83\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.088904 containerd[1485]: time="2025-02-13T15:52:31.088870584Z" level=error msg="encountered an error cleaning up failed sandbox \"529adf14b2e5a6d88d507a8c413eef03858081a1b5e72550a2d1564216663e83\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.088954 containerd[1485]: time="2025-02-13T15:52:31.088934343Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-q5kq7,Uid:38b0921d-4d85-4317-86e9-1adbb9d6859a,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"529adf14b2e5a6d88d507a8c413eef03858081a1b5e72550a2d1564216663e83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.089212 kubelet[2674]: E0213 15:52:31.089186 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"529adf14b2e5a6d88d507a8c413eef03858081a1b5e72550a2d1564216663e83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.089253 kubelet[2674]: E0213 15:52:31.089245 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"529adf14b2e5a6d88d507a8c413eef03858081a1b5e72550a2d1564216663e83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7db6857c7b-q5kq7" Feb 13 15:52:31.089282 kubelet[2674]: E0213 15:52:31.089267 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"529adf14b2e5a6d88d507a8c413eef03858081a1b5e72550a2d1564216663e83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7db6857c7b-q5kq7" Feb 13 15:52:31.089535 kubelet[2674]: E0213 15:52:31.089511 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7db6857c7b-q5kq7_calico-apiserver(38b0921d-4d85-4317-86e9-1adbb9d6859a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7db6857c7b-q5kq7_calico-apiserver(38b0921d-4d85-4317-86e9-1adbb9d6859a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"529adf14b2e5a6d88d507a8c413eef03858081a1b5e72550a2d1564216663e83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7db6857c7b-q5kq7" podUID="38b0921d-4d85-4317-86e9-1adbb9d6859a" Feb 13 15:52:31.091126 containerd[1485]: time="2025-02-13T15:52:31.091083616Z" level=error msg="Failed to destroy network for sandbox \"d5aaeba253086d20016ba38f2df123d706efab77777bd4e827a20c39f90c20d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.092496 containerd[1485]: time="2025-02-13T15:52:31.091843632Z" level=error msg="encountered an error cleaning up failed sandbox \"d5aaeba253086d20016ba38f2df123d706efab77777bd4e827a20c39f90c20d7\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.092496 containerd[1485]: time="2025-02-13T15:52:31.092399466Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mlzzh,Uid:08a0764f-6eaa-4b6b-8f68-f508a36d326a,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"d5aaeba253086d20016ba38f2df123d706efab77777bd4e827a20c39f90c20d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.092700 kubelet[2674]: E0213 15:52:31.092666 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5aaeba253086d20016ba38f2df123d706efab77777bd4e827a20c39f90c20d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.092738 kubelet[2674]: E0213 15:52:31.092718 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5aaeba253086d20016ba38f2df123d706efab77777bd4e827a20c39f90c20d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mlzzh" Feb 13 15:52:31.092766 kubelet[2674]: E0213 15:52:31.092742 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5aaeba253086d20016ba38f2df123d706efab77777bd4e827a20c39f90c20d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mlzzh" Feb 13 15:52:31.092811 kubelet[2674]: E0213 15:52:31.092791 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-mlzzh_kube-system(08a0764f-6eaa-4b6b-8f68-f508a36d326a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-mlzzh_kube-system(08a0764f-6eaa-4b6b-8f68-f508a36d326a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5aaeba253086d20016ba38f2df123d706efab77777bd4e827a20c39f90c20d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-mlzzh" podUID="08a0764f-6eaa-4b6b-8f68-f508a36d326a" Feb 13 15:52:31.101878 containerd[1485]: time="2025-02-13T15:52:31.101826495Z" level=error msg="Failed to destroy network for sandbox \"41075bd5c5a54df827bd934f566aa5fa060a9e1df7e2165d52f5a0722dff2099\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.102242 containerd[1485]: time="2025-02-13T15:52:31.102203052Z" level=error msg="encountered an error cleaning up failed sandbox 
\"41075bd5c5a54df827bd934f566aa5fa060a9e1df7e2165d52f5a0722dff2099\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.102285 containerd[1485]: time="2025-02-13T15:52:31.102249108Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-lpnmw,Uid:c19fe500-1919-460e-8572-964852191fc0,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"41075bd5c5a54df827bd934f566aa5fa060a9e1df7e2165d52f5a0722dff2099\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.102521 kubelet[2674]: E0213 15:52:31.102487 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41075bd5c5a54df827bd934f566aa5fa060a9e1df7e2165d52f5a0722dff2099\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.102572 kubelet[2674]: E0213 15:52:31.102541 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41075bd5c5a54df827bd934f566aa5fa060a9e1df7e2165d52f5a0722dff2099\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7db6857c7b-lpnmw" Feb 13 15:52:31.102572 kubelet[2674]: E0213 15:52:31.102562 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41075bd5c5a54df827bd934f566aa5fa060a9e1df7e2165d52f5a0722dff2099\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7db6857c7b-lpnmw" Feb 13 15:52:31.102651 kubelet[2674]: E0213 15:52:31.102623 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7db6857c7b-lpnmw_calico-apiserver(c19fe500-1919-460e-8572-964852191fc0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7db6857c7b-lpnmw_calico-apiserver(c19fe500-1919-460e-8572-964852191fc0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41075bd5c5a54df827bd934f566aa5fa060a9e1df7e2165d52f5a0722dff2099\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7db6857c7b-lpnmw" podUID="c19fe500-1919-460e-8572-964852191fc0" Feb 13 15:52:31.108368 containerd[1485]: time="2025-02-13T15:52:31.108323998Z" level=error msg="Failed to destroy network for sandbox \"2e94151bee038cd4d514b4e37eb19d4460fc816ec9b6d4791deaaa2a8030f875\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 
13 15:52:31.108705 containerd[1485]: time="2025-02-13T15:52:31.108675838Z" level=error msg="encountered an error cleaning up failed sandbox \"2e94151bee038cd4d514b4e37eb19d4460fc816ec9b6d4791deaaa2a8030f875\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.108758 containerd[1485]: time="2025-02-13T15:52:31.108735019Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g6vd2,Uid:10d7d66d-1867-4427-ba49-4c93c2b786fc,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"2e94151bee038cd4d514b4e37eb19d4460fc816ec9b6d4791deaaa2a8030f875\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.109029 kubelet[2674]: E0213 15:52:31.108996 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e94151bee038cd4d514b4e37eb19d4460fc816ec9b6d4791deaaa2a8030f875\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.109090 kubelet[2674]: E0213 15:52:31.109074 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e94151bee038cd4d514b4e37eb19d4460fc816ec9b6d4791deaaa2a8030f875\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g6vd2" Feb 13 15:52:31.109118 kubelet[2674]: E0213 15:52:31.109101 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e94151bee038cd4d514b4e37eb19d4460fc816ec9b6d4791deaaa2a8030f875\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g6vd2" Feb 13 15:52:31.109173 kubelet[2674]: E0213 15:52:31.109159 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-g6vd2_calico-system(10d7d66d-1867-4427-ba49-4c93c2b786fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-g6vd2_calico-system(10d7d66d-1867-4427-ba49-4c93c2b786fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e94151bee038cd4d514b4e37eb19d4460fc816ec9b6d4791deaaa2a8030f875\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g6vd2" podUID="10d7d66d-1867-4427-ba49-4c93c2b786fc" Feb 13 15:52:31.150188 systemd[1]: Started cri-containerd-95e6f99a71d9795bb8108bb5e0c11ed14e5c0796202cbcab4321fefe501221d5.scope - libcontainer container 95e6f99a71d9795bb8108bb5e0c11ed14e5c0796202cbcab4321fefe501221d5. 
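Every failed RunPodSandbox above reports the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename and the file does not exist, because calico-node is only started at the very end of this block (the cri-containerd-95e6f99….scope unit). As a hedged sketch of that readiness check, assuming nothing beyond the path quoted in the error messages, a small Go program might do:

package main

import (
	"fmt"
	"os"
)

func main() {
	// The Calico CNI plugin requires this file, which calico/node writes once
	// it is running; the path is taken from the error messages in the log.
	const nodenameFile = "/var/lib/calico/nodename"
	data, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		fmt.Println(nodenameFile, "is missing: calico/node is not ready, so sandbox network setup will fail")
		os.Exit(1)
	}
	if err != nil {
		fmt.Fprintln(os.Stderr, "reading", nodenameFile, "failed:", err)
		os.Exit(1)
	}
	fmt.Println("calico/node has registered this node as:", string(data))
}

As the retries below show (Attempt:6), the sandboxes keep failing with the same error until calico-node has written that file.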
Feb 13 15:52:31.161093 kubelet[2674]: I0213 15:52:31.161063 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41075bd5c5a54df827bd934f566aa5fa060a9e1df7e2165d52f5a0722dff2099" Feb 13 15:52:31.161756 containerd[1485]: time="2025-02-13T15:52:31.161723421Z" level=info msg="StopPodSandbox for \"41075bd5c5a54df827bd934f566aa5fa060a9e1df7e2165d52f5a0722dff2099\"" Feb 13 15:52:31.161940 containerd[1485]: time="2025-02-13T15:52:31.161921983Z" level=info msg="Ensure that sandbox 41075bd5c5a54df827bd934f566aa5fa060a9e1df7e2165d52f5a0722dff2099 in task-service has been cleanup successfully" Feb 13 15:52:31.163891 containerd[1485]: time="2025-02-13T15:52:31.162979437Z" level=info msg="TearDown network for sandbox \"41075bd5c5a54df827bd934f566aa5fa060a9e1df7e2165d52f5a0722dff2099\" successfully" Feb 13 15:52:31.163962 containerd[1485]: time="2025-02-13T15:52:31.163892551Z" level=info msg="StopPodSandbox for \"41075bd5c5a54df827bd934f566aa5fa060a9e1df7e2165d52f5a0722dff2099\" returns successfully" Feb 13 15:52:31.165298 containerd[1485]: time="2025-02-13T15:52:31.165021169Z" level=info msg="StopPodSandbox for \"6866814210b1f4e770088dd2fa2c9a53cd4703acb433bff5f71bc8017784a2d9\"" Feb 13 15:52:31.165298 containerd[1485]: time="2025-02-13T15:52:31.165179646Z" level=info msg="TearDown network for sandbox \"6866814210b1f4e770088dd2fa2c9a53cd4703acb433bff5f71bc8017784a2d9\" successfully" Feb 13 15:52:31.165298 containerd[1485]: time="2025-02-13T15:52:31.165194644Z" level=info msg="StopPodSandbox for \"6866814210b1f4e770088dd2fa2c9a53cd4703acb433bff5f71bc8017784a2d9\" returns successfully" Feb 13 15:52:31.165997 containerd[1485]: time="2025-02-13T15:52:31.165697027Z" level=info msg="StopPodSandbox for \"974f2e17d153fc4e2c7a7c80c26980ce937174687900b289462434d666fa80f4\"" Feb 13 15:52:31.165997 containerd[1485]: time="2025-02-13T15:52:31.165800121Z" level=info msg="TearDown network for sandbox \"974f2e17d153fc4e2c7a7c80c26980ce937174687900b289462434d666fa80f4\" successfully" Feb 13 15:52:31.165997 containerd[1485]: time="2025-02-13T15:52:31.165814448Z" level=info msg="StopPodSandbox for \"974f2e17d153fc4e2c7a7c80c26980ce937174687900b289462434d666fa80f4\" returns successfully" Feb 13 15:52:31.166406 containerd[1485]: time="2025-02-13T15:52:31.166376261Z" level=info msg="StopPodSandbox for \"26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e\"" Feb 13 15:52:31.166512 containerd[1485]: time="2025-02-13T15:52:31.166480507Z" level=info msg="TearDown network for sandbox \"26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e\" successfully" Feb 13 15:52:31.166512 containerd[1485]: time="2025-02-13T15:52:31.166504903Z" level=info msg="StopPodSandbox for \"26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e\" returns successfully" Feb 13 15:52:31.167025 containerd[1485]: time="2025-02-13T15:52:31.166998620Z" level=info msg="StopPodSandbox for \"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\"" Feb 13 15:52:31.167170 containerd[1485]: time="2025-02-13T15:52:31.167146888Z" level=info msg="TearDown network for sandbox \"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\" successfully" Feb 13 15:52:31.167170 containerd[1485]: time="2025-02-13T15:52:31.167165943Z" level=info msg="StopPodSandbox for \"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\" returns successfully" Feb 13 15:52:31.168385 kubelet[2674]: I0213 15:52:31.167671 2674 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="d5aaeba253086d20016ba38f2df123d706efab77777bd4e827a20c39f90c20d7" Feb 13 15:52:31.168479 containerd[1485]: time="2025-02-13T15:52:31.168431088Z" level=info msg="StopPodSandbox for \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\"" Feb 13 15:52:31.168561 containerd[1485]: time="2025-02-13T15:52:31.168505257Z" level=info msg="TearDown network for sandbox \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\" successfully" Feb 13 15:52:31.168561 containerd[1485]: time="2025-02-13T15:52:31.168514574Z" level=info msg="StopPodSandbox for \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\" returns successfully" Feb 13 15:52:31.168661 containerd[1485]: time="2025-02-13T15:52:31.168548588Z" level=info msg="StopPodSandbox for \"d5aaeba253086d20016ba38f2df123d706efab77777bd4e827a20c39f90c20d7\"" Feb 13 15:52:31.168731 containerd[1485]: time="2025-02-13T15:52:31.168714940Z" level=info msg="Ensure that sandbox d5aaeba253086d20016ba38f2df123d706efab77777bd4e827a20c39f90c20d7 in task-service has been cleanup successfully" Feb 13 15:52:31.170261 containerd[1485]: time="2025-02-13T15:52:31.170111251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-lpnmw,Uid:c19fe500-1919-460e-8572-964852191fc0,Namespace:calico-apiserver,Attempt:6,}" Feb 13 15:52:31.172330 kubelet[2674]: I0213 15:52:31.172310 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e94151bee038cd4d514b4e37eb19d4460fc816ec9b6d4791deaaa2a8030f875" Feb 13 15:52:31.173746 containerd[1485]: time="2025-02-13T15:52:31.173285807Z" level=info msg="StopPodSandbox for \"2e94151bee038cd4d514b4e37eb19d4460fc816ec9b6d4791deaaa2a8030f875\"" Feb 13 15:52:31.173746 containerd[1485]: time="2025-02-13T15:52:31.173541887Z" level=info msg="Ensure that sandbox 2e94151bee038cd4d514b4e37eb19d4460fc816ec9b6d4791deaaa2a8030f875 in task-service has been cleanup successfully" Feb 13 15:52:31.176440 containerd[1485]: time="2025-02-13T15:52:31.176415000Z" level=info msg="TearDown network for sandbox \"2e94151bee038cd4d514b4e37eb19d4460fc816ec9b6d4791deaaa2a8030f875\" successfully" Feb 13 15:52:31.176619 containerd[1485]: time="2025-02-13T15:52:31.176599476Z" level=info msg="StopPodSandbox for \"2e94151bee038cd4d514b4e37eb19d4460fc816ec9b6d4791deaaa2a8030f875\" returns successfully" Feb 13 15:52:31.178244 containerd[1485]: time="2025-02-13T15:52:31.178208225Z" level=info msg="StopPodSandbox for \"a11d2ce61fc739661fe63087cbdf4f19cb71af635a90addc6a841f53edbee800\"" Feb 13 15:52:31.178897 containerd[1485]: time="2025-02-13T15:52:31.178877530Z" level=info msg="TearDown network for sandbox \"a11d2ce61fc739661fe63087cbdf4f19cb71af635a90addc6a841f53edbee800\" successfully" Feb 13 15:52:31.179152 containerd[1485]: time="2025-02-13T15:52:31.179097062Z" level=info msg="StopPodSandbox for \"a11d2ce61fc739661fe63087cbdf4f19cb71af635a90addc6a841f53edbee800\" returns successfully" Feb 13 15:52:31.180133 containerd[1485]: time="2025-02-13T15:52:31.180098311Z" level=info msg="TearDown network for sandbox \"d5aaeba253086d20016ba38f2df123d706efab77777bd4e827a20c39f90c20d7\" successfully" Feb 13 15:52:31.180133 containerd[1485]: time="2025-02-13T15:52:31.180123538Z" level=info msg="StopPodSandbox for \"d5aaeba253086d20016ba38f2df123d706efab77777bd4e827a20c39f90c20d7\" returns successfully" Feb 13 15:52:31.185228 containerd[1485]: time="2025-02-13T15:52:31.183219928Z" level=info msg="StopPodSandbox for 
\"81219097a27dcb504a3f76789f134167e37f8265b25d724bdd46b64457cd0fd0\"" Feb 13 15:52:31.185228 containerd[1485]: time="2025-02-13T15:52:31.183360061Z" level=info msg="TearDown network for sandbox \"81219097a27dcb504a3f76789f134167e37f8265b25d724bdd46b64457cd0fd0\" successfully" Feb 13 15:52:31.185228 containerd[1485]: time="2025-02-13T15:52:31.183375179Z" level=info msg="StopPodSandbox for \"81219097a27dcb504a3f76789f134167e37f8265b25d724bdd46b64457cd0fd0\" returns successfully" Feb 13 15:52:31.185228 containerd[1485]: time="2025-02-13T15:52:31.183401370Z" level=info msg="StopPodSandbox for \"df6f7abed86db3382c32d12030c43f2555823cf6e979dd79f3a63308cf66d6f4\"" Feb 13 15:52:31.185228 containerd[1485]: time="2025-02-13T15:52:31.183538527Z" level=info msg="TearDown network for sandbox \"df6f7abed86db3382c32d12030c43f2555823cf6e979dd79f3a63308cf66d6f4\" successfully" Feb 13 15:52:31.185228 containerd[1485]: time="2025-02-13T15:52:31.183548746Z" level=info msg="StopPodSandbox for \"df6f7abed86db3382c32d12030c43f2555823cf6e979dd79f3a63308cf66d6f4\" returns successfully" Feb 13 15:52:31.185228 containerd[1485]: time="2025-02-13T15:52:31.184650483Z" level=info msg="StopPodSandbox for \"a907415d608139846599318ab511c71cb2815fc522f90b9abae68d6a17bd3efe\"" Feb 13 15:52:31.185228 containerd[1485]: time="2025-02-13T15:52:31.184757975Z" level=info msg="TearDown network for sandbox \"a907415d608139846599318ab511c71cb2815fc522f90b9abae68d6a17bd3efe\" successfully" Feb 13 15:52:31.185228 containerd[1485]: time="2025-02-13T15:52:31.184769626Z" level=info msg="StopPodSandbox for \"a907415d608139846599318ab511c71cb2815fc522f90b9abae68d6a17bd3efe\" returns successfully" Feb 13 15:52:31.185228 containerd[1485]: time="2025-02-13T15:52:31.184842563Z" level=info msg="StopPodSandbox for \"88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c\"" Feb 13 15:52:31.185228 containerd[1485]: time="2025-02-13T15:52:31.184935017Z" level=info msg="TearDown network for sandbox \"88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c\" successfully" Feb 13 15:52:31.185228 containerd[1485]: time="2025-02-13T15:52:31.184947911Z" level=info msg="StopPodSandbox for \"88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c\" returns successfully" Feb 13 15:52:31.185620 kubelet[2674]: I0213 15:52:31.185257 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e43fb613ca334456eb8a5b3e016eae6a232a97a1ce0ec5322588fdb82bf4383b" Feb 13 15:52:31.185901 containerd[1485]: time="2025-02-13T15:52:31.185673052Z" level=info msg="StopPodSandbox for \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\"" Feb 13 15:52:31.185901 containerd[1485]: time="2025-02-13T15:52:31.185765445Z" level=info msg="TearDown network for sandbox \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\" successfully" Feb 13 15:52:31.185901 containerd[1485]: time="2025-02-13T15:52:31.185775093Z" level=info msg="StopPodSandbox for \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\" returns successfully" Feb 13 15:52:31.185901 containerd[1485]: time="2025-02-13T15:52:31.185816881Z" level=info msg="StopPodSandbox for \"ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb\"" Feb 13 15:52:31.185901 containerd[1485]: time="2025-02-13T15:52:31.185889979Z" level=info msg="TearDown network for sandbox \"ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb\" successfully" Feb 13 15:52:31.185901 containerd[1485]: time="2025-02-13T15:52:31.185898735Z" 
level=info msg="StopPodSandbox for \"ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb\" returns successfully" Feb 13 15:52:31.188936 containerd[1485]: time="2025-02-13T15:52:31.188863319Z" level=info msg="StopPodSandbox for \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\"" Feb 13 15:52:31.189578 containerd[1485]: time="2025-02-13T15:52:31.189467071Z" level=info msg="StopPodSandbox for \"e43fb613ca334456eb8a5b3e016eae6a232a97a1ce0ec5322588fdb82bf4383b\"" Feb 13 15:52:31.190643 containerd[1485]: time="2025-02-13T15:52:31.190609696Z" level=info msg="TearDown network for sandbox \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\" successfully" Feb 13 15:52:31.190643 containerd[1485]: time="2025-02-13T15:52:31.190639171Z" level=info msg="StopPodSandbox for \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\" returns successfully" Feb 13 15:52:31.190714 containerd[1485]: time="2025-02-13T15:52:31.189914581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g6vd2,Uid:10d7d66d-1867-4427-ba49-4c93c2b786fc,Namespace:calico-system,Attempt:5,}" Feb 13 15:52:31.191735 containerd[1485]: time="2025-02-13T15:52:31.191390411Z" level=info msg="Ensure that sandbox e43fb613ca334456eb8a5b3e016eae6a232a97a1ce0ec5322588fdb82bf4383b in task-service has been cleanup successfully" Feb 13 15:52:31.191910 containerd[1485]: time="2025-02-13T15:52:31.191867475Z" level=info msg="TearDown network for sandbox \"e43fb613ca334456eb8a5b3e016eae6a232a97a1ce0ec5322588fdb82bf4383b\" successfully" Feb 13 15:52:31.191962 containerd[1485]: time="2025-02-13T15:52:31.191914113Z" level=info msg="StopPodSandbox for \"e43fb613ca334456eb8a5b3e016eae6a232a97a1ce0ec5322588fdb82bf4383b\" returns successfully" Feb 13 15:52:31.192423 containerd[1485]: time="2025-02-13T15:52:31.192400646Z" level=info msg="StopPodSandbox for \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\"" Feb 13 15:52:31.192602 containerd[1485]: time="2025-02-13T15:52:31.192584501Z" level=info msg="TearDown network for sandbox \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\" successfully" Feb 13 15:52:31.192662 containerd[1485]: time="2025-02-13T15:52:31.192648752Z" level=info msg="StopPodSandbox for \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\" returns successfully" Feb 13 15:52:31.192891 kubelet[2674]: E0213 15:52:31.192864 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:31.192953 containerd[1485]: time="2025-02-13T15:52:31.192819131Z" level=info msg="StopPodSandbox for \"60429a89152784060d7f572c9878da40086e694a3190d5f2e95e78d7c271dd87\"" Feb 13 15:52:31.193119 containerd[1485]: time="2025-02-13T15:52:31.193081974Z" level=info msg="TearDown network for sandbox \"60429a89152784060d7f572c9878da40086e694a3190d5f2e95e78d7c271dd87\" successfully" Feb 13 15:52:31.193169 containerd[1485]: time="2025-02-13T15:52:31.193143440Z" level=info msg="StopPodSandbox for \"60429a89152784060d7f572c9878da40086e694a3190d5f2e95e78d7c271dd87\" returns successfully" Feb 13 15:52:31.194152 containerd[1485]: time="2025-02-13T15:52:31.193780164Z" level=info msg="StopPodSandbox for \"89d75f92637d84161a3a05d642197773d8fe53597397d47ba98f36f363042782\"" Feb 13 15:52:31.194152 containerd[1485]: time="2025-02-13T15:52:31.193888267Z" level=info msg="TearDown network for sandbox 
\"89d75f92637d84161a3a05d642197773d8fe53597397d47ba98f36f363042782\" successfully" Feb 13 15:52:31.194152 containerd[1485]: time="2025-02-13T15:52:31.193902935Z" level=info msg="StopPodSandbox for \"89d75f92637d84161a3a05d642197773d8fe53597397d47ba98f36f363042782\" returns successfully" Feb 13 15:52:31.194545 containerd[1485]: time="2025-02-13T15:52:31.194528910Z" level=info msg="StopPodSandbox for \"ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a\"" Feb 13 15:52:31.194901 containerd[1485]: time="2025-02-13T15:52:31.194881611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mlzzh,Uid:08a0764f-6eaa-4b6b-8f68-f508a36d326a,Namespace:kube-system,Attempt:6,}" Feb 13 15:52:31.195957 containerd[1485]: time="2025-02-13T15:52:31.195030632Z" level=info msg="TearDown network for sandbox \"ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a\" successfully" Feb 13 15:52:31.196010 containerd[1485]: time="2025-02-13T15:52:31.195961719Z" level=info msg="StopPodSandbox for \"ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a\" returns successfully" Feb 13 15:52:31.197095 kubelet[2674]: I0213 15:52:31.196726 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="529adf14b2e5a6d88d507a8c413eef03858081a1b5e72550a2d1564216663e83" Feb 13 15:52:31.197421 containerd[1485]: time="2025-02-13T15:52:31.197403363Z" level=info msg="StopPodSandbox for \"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\"" Feb 13 15:52:31.197972 containerd[1485]: time="2025-02-13T15:52:31.197939981Z" level=info msg="TearDown network for sandbox \"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\" successfully" Feb 13 15:52:31.197972 containerd[1485]: time="2025-02-13T15:52:31.197963495Z" level=info msg="StopPodSandbox for \"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\" returns successfully" Feb 13 15:52:31.198072 containerd[1485]: time="2025-02-13T15:52:31.197492130Z" level=info msg="StopPodSandbox for \"529adf14b2e5a6d88d507a8c413eef03858081a1b5e72550a2d1564216663e83\"" Feb 13 15:52:31.198207 containerd[1485]: time="2025-02-13T15:52:31.198168419Z" level=info msg="Ensure that sandbox 529adf14b2e5a6d88d507a8c413eef03858081a1b5e72550a2d1564216663e83 in task-service has been cleanup successfully" Feb 13 15:52:31.198759 containerd[1485]: time="2025-02-13T15:52:31.198677635Z" level=info msg="StopPodSandbox for \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\"" Feb 13 15:52:31.198759 containerd[1485]: time="2025-02-13T15:52:31.198755411Z" level=info msg="TearDown network for sandbox \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\" successfully" Feb 13 15:52:31.198922 containerd[1485]: time="2025-02-13T15:52:31.198766061Z" level=info msg="StopPodSandbox for \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\" returns successfully" Feb 13 15:52:31.198922 containerd[1485]: time="2025-02-13T15:52:31.198872170Z" level=info msg="TearDown network for sandbox \"529adf14b2e5a6d88d507a8c413eef03858081a1b5e72550a2d1564216663e83\" successfully" Feb 13 15:52:31.198922 containerd[1485]: time="2025-02-13T15:52:31.198883591Z" level=info msg="StopPodSandbox for \"529adf14b2e5a6d88d507a8c413eef03858081a1b5e72550a2d1564216663e83\" returns successfully" Feb 13 15:52:31.199617 containerd[1485]: time="2025-02-13T15:52:31.199552487Z" level=info msg="StopPodSandbox for \"7b202e0a8250435728e02f8bde94c825ab1ad61219ed93302f1468cb1f73698f\"" Feb 13 
15:52:31.199822 containerd[1485]: time="2025-02-13T15:52:31.199630403Z" level=info msg="TearDown network for sandbox \"7b202e0a8250435728e02f8bde94c825ab1ad61219ed93302f1468cb1f73698f\" successfully" Feb 13 15:52:31.199822 containerd[1485]: time="2025-02-13T15:52:31.199639780Z" level=info msg="StopPodSandbox for \"7b202e0a8250435728e02f8bde94c825ab1ad61219ed93302f1468cb1f73698f\" returns successfully" Feb 13 15:52:31.199822 containerd[1485]: time="2025-02-13T15:52:31.199730120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d59db744-jwpsr,Uid:6af1b9f5-51e6-4450-99d8-629fc2031232,Namespace:calico-system,Attempt:6,}" Feb 13 15:52:31.200453 containerd[1485]: time="2025-02-13T15:52:31.200423812Z" level=info msg="StopPodSandbox for \"115a7b90fd74f188605948db6c4a642fa6a2bc7b610ff5197e80bd7d90895b22\"" Feb 13 15:52:31.200647 containerd[1485]: time="2025-02-13T15:52:31.200498361Z" level=info msg="TearDown network for sandbox \"115a7b90fd74f188605948db6c4a642fa6a2bc7b610ff5197e80bd7d90895b22\" successfully" Feb 13 15:52:31.200647 containerd[1485]: time="2025-02-13T15:52:31.200509252Z" level=info msg="StopPodSandbox for \"115a7b90fd74f188605948db6c4a642fa6a2bc7b610ff5197e80bd7d90895b22\" returns successfully" Feb 13 15:52:31.201342 containerd[1485]: time="2025-02-13T15:52:31.201270741Z" level=info msg="StopPodSandbox for \"edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e\"" Feb 13 15:52:31.201390 containerd[1485]: time="2025-02-13T15:52:31.201352624Z" level=info msg="TearDown network for sandbox \"edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e\" successfully" Feb 13 15:52:31.201390 containerd[1485]: time="2025-02-13T15:52:31.201361992Z" level=info msg="StopPodSandbox for \"edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e\" returns successfully" Feb 13 15:52:31.202038 containerd[1485]: time="2025-02-13T15:52:31.201839868Z" level=info msg="StopPodSandbox for \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\"" Feb 13 15:52:31.202038 containerd[1485]: time="2025-02-13T15:52:31.201956607Z" level=info msg="TearDown network for sandbox \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\" successfully" Feb 13 15:52:31.202038 containerd[1485]: time="2025-02-13T15:52:31.201974070Z" level=info msg="StopPodSandbox for \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\" returns successfully" Feb 13 15:52:31.202594 kubelet[2674]: I0213 15:52:31.202559 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fac96938ffde6a93e81d8d00d323b4299dc6bbdddb08abfb5ce8cd0d008b3d9" Feb 13 15:52:31.204613 containerd[1485]: time="2025-02-13T15:52:31.204471657Z" level=info msg="StopPodSandbox for \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\"" Feb 13 15:52:31.204613 containerd[1485]: time="2025-02-13T15:52:31.204599497Z" level=info msg="TearDown network for sandbox \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\" successfully" Feb 13 15:52:31.204700 containerd[1485]: time="2025-02-13T15:52:31.204613152Z" level=info msg="StopPodSandbox for \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\" returns successfully" Feb 13 15:52:31.206686 containerd[1485]: time="2025-02-13T15:52:31.206646188Z" level=info msg="StopPodSandbox for \"2fac96938ffde6a93e81d8d00d323b4299dc6bbdddb08abfb5ce8cd0d008b3d9\"" Feb 13 15:52:31.207180 containerd[1485]: time="2025-02-13T15:52:31.207101983Z" level=info msg="Ensure that 
sandbox 2fac96938ffde6a93e81d8d00d323b4299dc6bbdddb08abfb5ce8cd0d008b3d9 in task-service has been cleanup successfully" Feb 13 15:52:31.207257 containerd[1485]: time="2025-02-13T15:52:31.206650766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-q5kq7,Uid:38b0921d-4d85-4317-86e9-1adbb9d6859a,Namespace:calico-apiserver,Attempt:6,}" Feb 13 15:52:31.211693 containerd[1485]: time="2025-02-13T15:52:31.211600664Z" level=info msg="StartContainer for \"95e6f99a71d9795bb8108bb5e0c11ed14e5c0796202cbcab4321fefe501221d5\" returns successfully" Feb 13 15:52:31.212333 containerd[1485]: time="2025-02-13T15:52:31.212296871Z" level=info msg="TearDown network for sandbox \"2fac96938ffde6a93e81d8d00d323b4299dc6bbdddb08abfb5ce8cd0d008b3d9\" successfully" Feb 13 15:52:31.212436 containerd[1485]: time="2025-02-13T15:52:31.212418098Z" level=info msg="StopPodSandbox for \"2fac96938ffde6a93e81d8d00d323b4299dc6bbdddb08abfb5ce8cd0d008b3d9\" returns successfully" Feb 13 15:52:31.213162 containerd[1485]: time="2025-02-13T15:52:31.213140524Z" level=info msg="StopPodSandbox for \"caab72e2c1808dc4011bfaf1f2cb16fad58a3babf1202c175475afcef0063ed1\"" Feb 13 15:52:31.213752 containerd[1485]: time="2025-02-13T15:52:31.213734920Z" level=info msg="TearDown network for sandbox \"caab72e2c1808dc4011bfaf1f2cb16fad58a3babf1202c175475afcef0063ed1\" successfully" Feb 13 15:52:31.213929 containerd[1485]: time="2025-02-13T15:52:31.213912282Z" level=info msg="StopPodSandbox for \"caab72e2c1808dc4011bfaf1f2cb16fad58a3babf1202c175475afcef0063ed1\" returns successfully" Feb 13 15:52:31.214287 containerd[1485]: time="2025-02-13T15:52:31.214264733Z" level=info msg="StopPodSandbox for \"ece0ba20c877e7e859caada2183dd1ddc532d1528c0bed4bc30b53e69cf4b158\"" Feb 13 15:52:31.214449 containerd[1485]: time="2025-02-13T15:52:31.214428231Z" level=info msg="TearDown network for sandbox \"ece0ba20c877e7e859caada2183dd1ddc532d1528c0bed4bc30b53e69cf4b158\" successfully" Feb 13 15:52:31.214652 containerd[1485]: time="2025-02-13T15:52:31.214632364Z" level=info msg="StopPodSandbox for \"ece0ba20c877e7e859caada2183dd1ddc532d1528c0bed4bc30b53e69cf4b158\" returns successfully" Feb 13 15:52:31.214988 containerd[1485]: time="2025-02-13T15:52:31.214966601Z" level=info msg="StopPodSandbox for \"b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70\"" Feb 13 15:52:31.215311 containerd[1485]: time="2025-02-13T15:52:31.215292282Z" level=info msg="TearDown network for sandbox \"b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70\" successfully" Feb 13 15:52:31.215393 containerd[1485]: time="2025-02-13T15:52:31.215376460Z" level=info msg="StopPodSandbox for \"b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70\" returns successfully" Feb 13 15:52:31.215678 containerd[1485]: time="2025-02-13T15:52:31.215654962Z" level=info msg="StopPodSandbox for \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\"" Feb 13 15:52:31.215978 containerd[1485]: time="2025-02-13T15:52:31.215957710Z" level=info msg="TearDown network for sandbox \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\" successfully" Feb 13 15:52:31.216079 containerd[1485]: time="2025-02-13T15:52:31.216060864Z" level=info msg="StopPodSandbox for \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\" returns successfully" Feb 13 15:52:31.216409 containerd[1485]: time="2025-02-13T15:52:31.216390323Z" level=info msg="StopPodSandbox for 
\"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\"" Feb 13 15:52:31.216685 containerd[1485]: time="2025-02-13T15:52:31.216668123Z" level=info msg="TearDown network for sandbox \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\" successfully" Feb 13 15:52:31.216766 containerd[1485]: time="2025-02-13T15:52:31.216750718Z" level=info msg="StopPodSandbox for \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\" returns successfully" Feb 13 15:52:31.217489 kubelet[2674]: E0213 15:52:31.217467 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:31.217899 containerd[1485]: time="2025-02-13T15:52:31.217877913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-45d4j,Uid:19512d1a-36c6-49de-8177-c4d469d03fc5,Namespace:kube-system,Attempt:6,}" Feb 13 15:52:31.263342 containerd[1485]: time="2025-02-13T15:52:31.263115732Z" level=error msg="Failed to destroy network for sandbox \"6b9c49696b61cbc1a898b97dcb37cdb949d349feb359026807f0484fdfdcb5ed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.263952 containerd[1485]: time="2025-02-13T15:52:31.263821887Z" level=error msg="encountered an error cleaning up failed sandbox \"6b9c49696b61cbc1a898b97dcb37cdb949d349feb359026807f0484fdfdcb5ed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.265563 containerd[1485]: time="2025-02-13T15:52:31.264029807Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-lpnmw,Uid:c19fe500-1919-460e-8572-964852191fc0,Namespace:calico-apiserver,Attempt:6,} failed, error" error="failed to setup network for sandbox \"6b9c49696b61cbc1a898b97dcb37cdb949d349feb359026807f0484fdfdcb5ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.265673 kubelet[2674]: E0213 15:52:31.264960 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b9c49696b61cbc1a898b97dcb37cdb949d349feb359026807f0484fdfdcb5ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.265673 kubelet[2674]: E0213 15:52:31.265014 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b9c49696b61cbc1a898b97dcb37cdb949d349feb359026807f0484fdfdcb5ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7db6857c7b-lpnmw" Feb 13 15:52:31.265673 kubelet[2674]: E0213 15:52:31.265035 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6b9c49696b61cbc1a898b97dcb37cdb949d349feb359026807f0484fdfdcb5ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7db6857c7b-lpnmw" Feb 13 15:52:31.265788 kubelet[2674]: E0213 15:52:31.265112 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7db6857c7b-lpnmw_calico-apiserver(c19fe500-1919-460e-8572-964852191fc0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7db6857c7b-lpnmw_calico-apiserver(c19fe500-1919-460e-8572-964852191fc0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b9c49696b61cbc1a898b97dcb37cdb949d349feb359026807f0484fdfdcb5ed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7db6857c7b-lpnmw" podUID="c19fe500-1919-460e-8572-964852191fc0" Feb 13 15:52:31.307481 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 15:52:31.307719 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 15:52:31.328776 containerd[1485]: time="2025-02-13T15:52:31.328583622Z" level=error msg="Failed to destroy network for sandbox \"e4200ee9bb5433be7e929357a9329bbc2b16f2ac0de23e7a4c70312fdd04becb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.330423 containerd[1485]: time="2025-02-13T15:52:31.330369453Z" level=error msg="encountered an error cleaning up failed sandbox \"e4200ee9bb5433be7e929357a9329bbc2b16f2ac0de23e7a4c70312fdd04becb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.330475 containerd[1485]: time="2025-02-13T15:52:31.330448461Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g6vd2,Uid:10d7d66d-1867-4427-ba49-4c93c2b786fc,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"e4200ee9bb5433be7e929357a9329bbc2b16f2ac0de23e7a4c70312fdd04becb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.331449 kubelet[2674]: E0213 15:52:31.331407 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4200ee9bb5433be7e929357a9329bbc2b16f2ac0de23e7a4c70312fdd04becb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.331547 kubelet[2674]: E0213 15:52:31.331504 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4200ee9bb5433be7e929357a9329bbc2b16f2ac0de23e7a4c70312fdd04becb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g6vd2" Feb 13 15:52:31.331547 kubelet[2674]: E0213 15:52:31.331533 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4200ee9bb5433be7e929357a9329bbc2b16f2ac0de23e7a4c70312fdd04becb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g6vd2" Feb 13 15:52:31.331774 kubelet[2674]: E0213 15:52:31.331612 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-g6vd2_calico-system(10d7d66d-1867-4427-ba49-4c93c2b786fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-g6vd2_calico-system(10d7d66d-1867-4427-ba49-4c93c2b786fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4200ee9bb5433be7e929357a9329bbc2b16f2ac0de23e7a4c70312fdd04becb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g6vd2" podUID="10d7d66d-1867-4427-ba49-4c93c2b786fc" Feb 13 15:52:31.363151 containerd[1485]: time="2025-02-13T15:52:31.363089852Z" level=error msg="Failed to destroy network for sandbox \"2b380686f722dafcaa1a62759a902a86ae3db1a699917ec82e1e0c0d87e2c228\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.366102 containerd[1485]: time="2025-02-13T15:52:31.366071797Z" level=error msg="encountered an error cleaning up failed sandbox \"2b380686f722dafcaa1a62759a902a86ae3db1a699917ec82e1e0c0d87e2c228\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.366272 containerd[1485]: time="2025-02-13T15:52:31.366253688Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mlzzh,Uid:08a0764f-6eaa-4b6b-8f68-f508a36d326a,Namespace:kube-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"2b380686f722dafcaa1a62759a902a86ae3db1a699917ec82e1e0c0d87e2c228\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.366627 kubelet[2674]: E0213 15:52:31.366604 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b380686f722dafcaa1a62759a902a86ae3db1a699917ec82e1e0c0d87e2c228\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.366770 kubelet[2674]: E0213 15:52:31.366759 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b380686f722dafcaa1a62759a902a86ae3db1a699917ec82e1e0c0d87e2c228\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mlzzh" Feb 13 15:52:31.366978 kubelet[2674]: E0213 15:52:31.366875 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b380686f722dafcaa1a62759a902a86ae3db1a699917ec82e1e0c0d87e2c228\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mlzzh" Feb 13 15:52:31.367063 kubelet[2674]: E0213 15:52:31.366955 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-mlzzh_kube-system(08a0764f-6eaa-4b6b-8f68-f508a36d326a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-mlzzh_kube-system(08a0764f-6eaa-4b6b-8f68-f508a36d326a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b380686f722dafcaa1a62759a902a86ae3db1a699917ec82e1e0c0d87e2c228\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-mlzzh" podUID="08a0764f-6eaa-4b6b-8f68-f508a36d326a" Feb 13 15:52:31.378535 containerd[1485]: time="2025-02-13T15:52:31.378462568Z" level=error msg="Failed to destroy network for sandbox \"800e5ed77407e7053aafda7c09deaaace97d671b0c1a8ed1dde7fd6cf8b96963\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.378974 containerd[1485]: time="2025-02-13T15:52:31.378940726Z" level=error msg="encountered an error cleaning up failed sandbox \"800e5ed77407e7053aafda7c09deaaace97d671b0c1a8ed1dde7fd6cf8b96963\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.379030 containerd[1485]: time="2025-02-13T15:52:31.379009134Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-q5kq7,Uid:38b0921d-4d85-4317-86e9-1adbb9d6859a,Namespace:calico-apiserver,Attempt:6,} failed, error" error="failed to setup network for sandbox \"800e5ed77407e7053aafda7c09deaaace97d671b0c1a8ed1dde7fd6cf8b96963\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.379569 kubelet[2674]: E0213 15:52:31.379534 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"800e5ed77407e7053aafda7c09deaaace97d671b0c1a8ed1dde7fd6cf8b96963\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.379729 kubelet[2674]: E0213 15:52:31.379703 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"800e5ed77407e7053aafda7c09deaaace97d671b0c1a8ed1dde7fd6cf8b96963\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7db6857c7b-q5kq7" Feb 13 15:52:31.379788 kubelet[2674]: E0213 15:52:31.379765 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"800e5ed77407e7053aafda7c09deaaace97d671b0c1a8ed1dde7fd6cf8b96963\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7db6857c7b-q5kq7" Feb 13 15:52:31.379923 kubelet[2674]: E0213 15:52:31.379888 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7db6857c7b-q5kq7_calico-apiserver(38b0921d-4d85-4317-86e9-1adbb9d6859a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7db6857c7b-q5kq7_calico-apiserver(38b0921d-4d85-4317-86e9-1adbb9d6859a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"800e5ed77407e7053aafda7c09deaaace97d671b0c1a8ed1dde7fd6cf8b96963\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7db6857c7b-q5kq7" podUID="38b0921d-4d85-4317-86e9-1adbb9d6859a" Feb 13 15:52:31.473883 containerd[1485]: 2025-02-13 15:52:31.404 [INFO][5115] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c969e8e41fcb45533ea80d38c94e7e0f52c923aa40120f7a045e3d4804f55a0d" Feb 13 15:52:31.473883 containerd[1485]: 2025-02-13 15:52:31.407 [INFO][5115] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c969e8e41fcb45533ea80d38c94e7e0f52c923aa40120f7a045e3d4804f55a0d" iface="eth0" netns="/var/run/netns/cni-3dc7235c-f490-8bc8-a605-6b6d20441b99" Feb 13 15:52:31.473883 containerd[1485]: 2025-02-13 15:52:31.408 [INFO][5115] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c969e8e41fcb45533ea80d38c94e7e0f52c923aa40120f7a045e3d4804f55a0d" iface="eth0" netns="/var/run/netns/cni-3dc7235c-f490-8bc8-a605-6b6d20441b99" Feb 13 15:52:31.473883 containerd[1485]: 2025-02-13 15:52:31.408 [INFO][5115] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c969e8e41fcb45533ea80d38c94e7e0f52c923aa40120f7a045e3d4804f55a0d" iface="eth0" netns="/var/run/netns/cni-3dc7235c-f490-8bc8-a605-6b6d20441b99" Feb 13 15:52:31.473883 containerd[1485]: 2025-02-13 15:52:31.408 [INFO][5115] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c969e8e41fcb45533ea80d38c94e7e0f52c923aa40120f7a045e3d4804f55a0d" Feb 13 15:52:31.473883 containerd[1485]: 2025-02-13 15:52:31.409 [INFO][5115] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c969e8e41fcb45533ea80d38c94e7e0f52c923aa40120f7a045e3d4804f55a0d" Feb 13 15:52:31.473883 containerd[1485]: 2025-02-13 15:52:31.460 [INFO][5146] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c969e8e41fcb45533ea80d38c94e7e0f52c923aa40120f7a045e3d4804f55a0d" HandleID="k8s-pod-network.c969e8e41fcb45533ea80d38c94e7e0f52c923aa40120f7a045e3d4804f55a0d" Workload="localhost-k8s-coredns--76f75df574--45d4j-eth0" Feb 13 15:52:31.473883 containerd[1485]: 2025-02-13 15:52:31.461 [INFO][5146] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:52:31.473883 containerd[1485]: 2025-02-13 15:52:31.461 [INFO][5146] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:52:31.473883 containerd[1485]: 2025-02-13 15:52:31.467 [WARNING][5146] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c969e8e41fcb45533ea80d38c94e7e0f52c923aa40120f7a045e3d4804f55a0d" HandleID="k8s-pod-network.c969e8e41fcb45533ea80d38c94e7e0f52c923aa40120f7a045e3d4804f55a0d" Workload="localhost-k8s-coredns--76f75df574--45d4j-eth0" Feb 13 15:52:31.473883 containerd[1485]: 2025-02-13 15:52:31.468 [INFO][5146] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c969e8e41fcb45533ea80d38c94e7e0f52c923aa40120f7a045e3d4804f55a0d" HandleID="k8s-pod-network.c969e8e41fcb45533ea80d38c94e7e0f52c923aa40120f7a045e3d4804f55a0d" Workload="localhost-k8s-coredns--76f75df574--45d4j-eth0" Feb 13 15:52:31.473883 containerd[1485]: 2025-02-13 15:52:31.469 [INFO][5146] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:52:31.473883 containerd[1485]: 2025-02-13 15:52:31.471 [INFO][5115] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="c969e8e41fcb45533ea80d38c94e7e0f52c923aa40120f7a045e3d4804f55a0d" Feb 13 15:52:31.477001 containerd[1485]: time="2025-02-13T15:52:31.476667218Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-45d4j,Uid:19512d1a-36c6-49de-8177-c4d469d03fc5,Namespace:kube-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"c969e8e41fcb45533ea80d38c94e7e0f52c923aa40120f7a045e3d4804f55a0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.477838 kubelet[2674]: E0213 15:52:31.477803 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c969e8e41fcb45533ea80d38c94e7e0f52c923aa40120f7a045e3d4804f55a0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.477953 kubelet[2674]: E0213 15:52:31.477878 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c969e8e41fcb45533ea80d38c94e7e0f52c923aa40120f7a045e3d4804f55a0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-45d4j" Feb 13 15:52:31.477953 kubelet[2674]: E0213 15:52:31.477905 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c969e8e41fcb45533ea80d38c94e7e0f52c923aa40120f7a045e3d4804f55a0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-45d4j" Feb 13 15:52:31.478067 kubelet[2674]: E0213 15:52:31.477966 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-45d4j_kube-system(19512d1a-36c6-49de-8177-c4d469d03fc5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-45d4j_kube-system(19512d1a-36c6-49de-8177-c4d469d03fc5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c969e8e41fcb45533ea80d38c94e7e0f52c923aa40120f7a045e3d4804f55a0d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-45d4j" podUID="19512d1a-36c6-49de-8177-c4d469d03fc5" Feb 13 15:52:31.481326 containerd[1485]: 2025-02-13 15:52:31.407 [INFO][5124] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ea169c12db37fa59d4850a736aa4615c9e81254c84dad569ecc0818754d6666f" Feb 13 15:52:31.481326 containerd[1485]: 2025-02-13 15:52:31.407 [INFO][5124] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ea169c12db37fa59d4850a736aa4615c9e81254c84dad569ecc0818754d6666f" iface="eth0" netns="/var/run/netns/cni-aca971ae-86ee-e625-a946-e804ccfb7549" Feb 13 15:52:31.481326 containerd[1485]: 2025-02-13 15:52:31.408 [INFO][5124] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="ea169c12db37fa59d4850a736aa4615c9e81254c84dad569ecc0818754d6666f" iface="eth0" netns="/var/run/netns/cni-aca971ae-86ee-e625-a946-e804ccfb7549" Feb 13 15:52:31.481326 containerd[1485]: 2025-02-13 15:52:31.409 [INFO][5124] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ea169c12db37fa59d4850a736aa4615c9e81254c84dad569ecc0818754d6666f" iface="eth0" netns="/var/run/netns/cni-aca971ae-86ee-e625-a946-e804ccfb7549" Feb 13 15:52:31.481326 containerd[1485]: 2025-02-13 15:52:31.409 [INFO][5124] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ea169c12db37fa59d4850a736aa4615c9e81254c84dad569ecc0818754d6666f" Feb 13 15:52:31.481326 containerd[1485]: 2025-02-13 15:52:31.409 [INFO][5124] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea169c12db37fa59d4850a736aa4615c9e81254c84dad569ecc0818754d6666f" Feb 13 15:52:31.481326 containerd[1485]: 2025-02-13 15:52:31.463 [INFO][5147] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea169c12db37fa59d4850a736aa4615c9e81254c84dad569ecc0818754d6666f" HandleID="k8s-pod-network.ea169c12db37fa59d4850a736aa4615c9e81254c84dad569ecc0818754d6666f" Workload="localhost-k8s-calico--kube--controllers--68d59db744--jwpsr-eth0" Feb 13 15:52:31.481326 containerd[1485]: 2025-02-13 15:52:31.463 [INFO][5147] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:52:31.481326 containerd[1485]: 2025-02-13 15:52:31.469 [INFO][5147] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:52:31.481326 containerd[1485]: 2025-02-13 15:52:31.473 [WARNING][5147] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ea169c12db37fa59d4850a736aa4615c9e81254c84dad569ecc0818754d6666f" HandleID="k8s-pod-network.ea169c12db37fa59d4850a736aa4615c9e81254c84dad569ecc0818754d6666f" Workload="localhost-k8s-calico--kube--controllers--68d59db744--jwpsr-eth0" Feb 13 15:52:31.481326 containerd[1485]: 2025-02-13 15:52:31.473 [INFO][5147] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea169c12db37fa59d4850a736aa4615c9e81254c84dad569ecc0818754d6666f" HandleID="k8s-pod-network.ea169c12db37fa59d4850a736aa4615c9e81254c84dad569ecc0818754d6666f" Workload="localhost-k8s-calico--kube--controllers--68d59db744--jwpsr-eth0" Feb 13 15:52:31.481326 containerd[1485]: 2025-02-13 15:52:31.475 [INFO][5147] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:52:31.481326 containerd[1485]: 2025-02-13 15:52:31.478 [INFO][5124] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="ea169c12db37fa59d4850a736aa4615c9e81254c84dad569ecc0818754d6666f" Feb 13 15:52:31.484221 containerd[1485]: time="2025-02-13T15:52:31.484178443Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d59db744-jwpsr,Uid:6af1b9f5-51e6-4450-99d8-629fc2031232,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"ea169c12db37fa59d4850a736aa4615c9e81254c84dad569ecc0818754d6666f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.484398 kubelet[2674]: E0213 15:52:31.484374 2674 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea169c12db37fa59d4850a736aa4615c9e81254c84dad569ecc0818754d6666f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:52:31.484443 kubelet[2674]: E0213 15:52:31.484422 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea169c12db37fa59d4850a736aa4615c9e81254c84dad569ecc0818754d6666f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d59db744-jwpsr" Feb 13 15:52:31.484477 kubelet[2674]: E0213 15:52:31.484446 2674 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea169c12db37fa59d4850a736aa4615c9e81254c84dad569ecc0818754d6666f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d59db744-jwpsr" Feb 13 15:52:31.484534 kubelet[2674]: E0213 15:52:31.484513 2674 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68d59db744-jwpsr_calico-system(6af1b9f5-51e6-4450-99d8-629fc2031232)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68d59db744-jwpsr_calico-system(6af1b9f5-51e6-4450-99d8-629fc2031232)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea169c12db37fa59d4850a736aa4615c9e81254c84dad569ecc0818754d6666f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68d59db744-jwpsr" podUID="6af1b9f5-51e6-4450-99d8-629fc2031232" Feb 13 15:52:31.939673 systemd[1]: run-netns-cni\x2dd643976e\x2daf78\x2db639\x2d69d5\x2d59019374de6d.mount: Deactivated successfully. Feb 13 15:52:31.939793 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2e94151bee038cd4d514b4e37eb19d4460fc816ec9b6d4791deaaa2a8030f875-shm.mount: Deactivated successfully. Feb 13 15:52:31.939907 systemd[1]: run-netns-cni\x2da0b932e5\x2d340b\x2d605a\x2d4641\x2df61df65e77fa.mount: Deactivated successfully. Feb 13 15:52:31.940002 systemd[1]: run-netns-cni\x2d6ed8ce9f\x2df1d6\x2d4fc1\x2dc777\x2dc20b8c83d040.mount: Deactivated successfully. 
Feb 13 15:52:31.940128 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d5aaeba253086d20016ba38f2df123d706efab77777bd4e827a20c39f90c20d7-shm.mount: Deactivated successfully. Feb 13 15:52:31.940236 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2fac96938ffde6a93e81d8d00d323b4299dc6bbdddb08abfb5ce8cd0d008b3d9-shm.mount: Deactivated successfully. Feb 13 15:52:31.940343 systemd[1]: run-netns-cni\x2d1535918e\x2de305\x2d7e32\x2d709a\x2dd07b35bc6cf4.mount: Deactivated successfully. Feb 13 15:52:31.940442 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-41075bd5c5a54df827bd934f566aa5fa060a9e1df7e2165d52f5a0722dff2099-shm.mount: Deactivated successfully. Feb 13 15:52:31.940541 systemd[1]: run-netns-cni\x2d7f1d280e\x2d49be\x2dd29a\x2df35b\x2ddf37749a2e8a.mount: Deactivated successfully. Feb 13 15:52:31.940639 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e43fb613ca334456eb8a5b3e016eae6a232a97a1ce0ec5322588fdb82bf4383b-shm.mount: Deactivated successfully. Feb 13 15:52:31.940739 systemd[1]: run-netns-cni\x2dfab9b92c\x2debc5\x2d31a8\x2d6fae\x2d8ce033b34e16.mount: Deactivated successfully. Feb 13 15:52:31.940845 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-529adf14b2e5a6d88d507a8c413eef03858081a1b5e72550a2d1564216663e83-shm.mount: Deactivated successfully. Feb 13 15:52:32.218371 kubelet[2674]: I0213 15:52:32.218237 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4200ee9bb5433be7e929357a9329bbc2b16f2ac0de23e7a4c70312fdd04becb" Feb 13 15:52:32.218963 containerd[1485]: time="2025-02-13T15:52:32.218902119Z" level=info msg="StopPodSandbox for \"e4200ee9bb5433be7e929357a9329bbc2b16f2ac0de23e7a4c70312fdd04becb\"" Feb 13 15:52:32.219361 containerd[1485]: time="2025-02-13T15:52:32.219147148Z" level=info msg="Ensure that sandbox e4200ee9bb5433be7e929357a9329bbc2b16f2ac0de23e7a4c70312fdd04becb in task-service has been cleanup successfully" Feb 13 15:52:32.222643 containerd[1485]: time="2025-02-13T15:52:32.222137840Z" level=info msg="TearDown network for sandbox \"e4200ee9bb5433be7e929357a9329bbc2b16f2ac0de23e7a4c70312fdd04becb\" successfully" Feb 13 15:52:32.222643 containerd[1485]: time="2025-02-13T15:52:32.222183566Z" level=info msg="StopPodSandbox for \"e4200ee9bb5433be7e929357a9329bbc2b16f2ac0de23e7a4c70312fdd04becb\" returns successfully" Feb 13 15:52:32.222402 systemd[1]: run-netns-cni\x2d338381dc\x2da69c\x2d8250\x2dcedc\x2daf3e8d11390c.mount: Deactivated successfully. 
Feb 13 15:52:32.223732 containerd[1485]: time="2025-02-13T15:52:32.223707215Z" level=info msg="StopPodSandbox for \"2e94151bee038cd4d514b4e37eb19d4460fc816ec9b6d4791deaaa2a8030f875\"" Feb 13 15:52:32.223813 containerd[1485]: time="2025-02-13T15:52:32.223791894Z" level=info msg="TearDown network for sandbox \"2e94151bee038cd4d514b4e37eb19d4460fc816ec9b6d4791deaaa2a8030f875\" successfully" Feb 13 15:52:32.223813 containerd[1485]: time="2025-02-13T15:52:32.223808104Z" level=info msg="StopPodSandbox for \"2e94151bee038cd4d514b4e37eb19d4460fc816ec9b6d4791deaaa2a8030f875\" returns successfully" Feb 13 15:52:32.224121 kubelet[2674]: I0213 15:52:32.224093 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="800e5ed77407e7053aafda7c09deaaace97d671b0c1a8ed1dde7fd6cf8b96963" Feb 13 15:52:32.224340 containerd[1485]: time="2025-02-13T15:52:32.224292935Z" level=info msg="StopPodSandbox for \"a11d2ce61fc739661fe63087cbdf4f19cb71af635a90addc6a841f53edbee800\"" Feb 13 15:52:32.224451 containerd[1485]: time="2025-02-13T15:52:32.224429962Z" level=info msg="TearDown network for sandbox \"a11d2ce61fc739661fe63087cbdf4f19cb71af635a90addc6a841f53edbee800\" successfully" Feb 13 15:52:32.224476 containerd[1485]: time="2025-02-13T15:52:32.224450380Z" level=info msg="StopPodSandbox for \"a11d2ce61fc739661fe63087cbdf4f19cb71af635a90addc6a841f53edbee800\" returns successfully" Feb 13 15:52:32.224700 containerd[1485]: time="2025-02-13T15:52:32.224671585Z" level=info msg="StopPodSandbox for \"800e5ed77407e7053aafda7c09deaaace97d671b0c1a8ed1dde7fd6cf8b96963\"" Feb 13 15:52:32.224926 containerd[1485]: time="2025-02-13T15:52:32.224895745Z" level=info msg="Ensure that sandbox 800e5ed77407e7053aafda7c09deaaace97d671b0c1a8ed1dde7fd6cf8b96963 in task-service has been cleanup successfully" Feb 13 15:52:32.225499 containerd[1485]: time="2025-02-13T15:52:32.225449484Z" level=info msg="StopPodSandbox for \"df6f7abed86db3382c32d12030c43f2555823cf6e979dd79f3a63308cf66d6f4\"" Feb 13 15:52:32.225574 containerd[1485]: time="2025-02-13T15:52:32.225555073Z" level=info msg="TearDown network for sandbox \"df6f7abed86db3382c32d12030c43f2555823cf6e979dd79f3a63308cf66d6f4\" successfully" Feb 13 15:52:32.225574 containerd[1485]: time="2025-02-13T15:52:32.225568598Z" level=info msg="StopPodSandbox for \"df6f7abed86db3382c32d12030c43f2555823cf6e979dd79f3a63308cf66d6f4\" returns successfully" Feb 13 15:52:32.225735 containerd[1485]: time="2025-02-13T15:52:32.225644230Z" level=info msg="TearDown network for sandbox \"800e5ed77407e7053aafda7c09deaaace97d671b0c1a8ed1dde7fd6cf8b96963\" successfully" Feb 13 15:52:32.225735 containerd[1485]: time="2025-02-13T15:52:32.225714932Z" level=info msg="StopPodSandbox for \"800e5ed77407e7053aafda7c09deaaace97d671b0c1a8ed1dde7fd6cf8b96963\" returns successfully" Feb 13 15:52:32.226197 containerd[1485]: time="2025-02-13T15:52:32.226003124Z" level=info msg="StopPodSandbox for \"529adf14b2e5a6d88d507a8c413eef03858081a1b5e72550a2d1564216663e83\"" Feb 13 15:52:32.226197 containerd[1485]: time="2025-02-13T15:52:32.226125714Z" level=info msg="TearDown network for sandbox \"529adf14b2e5a6d88d507a8c413eef03858081a1b5e72550a2d1564216663e83\" successfully" Feb 13 15:52:32.226197 containerd[1485]: time="2025-02-13T15:52:32.226139099Z" level=info msg="StopPodSandbox for \"529adf14b2e5a6d88d507a8c413eef03858081a1b5e72550a2d1564216663e83\" returns successfully" Feb 13 15:52:32.226350 containerd[1485]: time="2025-02-13T15:52:32.226289701Z" level=info msg="StopPodSandbox for 
\"88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c\"" Feb 13 15:52:32.226731 containerd[1485]: time="2025-02-13T15:52:32.226396602Z" level=info msg="TearDown network for sandbox \"88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c\" successfully" Feb 13 15:52:32.226731 containerd[1485]: time="2025-02-13T15:52:32.226419865Z" level=info msg="StopPodSandbox for \"88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c\" returns successfully" Feb 13 15:52:32.226731 containerd[1485]: time="2025-02-13T15:52:32.226702926Z" level=info msg="StopPodSandbox for \"7b202e0a8250435728e02f8bde94c825ab1ad61219ed93302f1468cb1f73698f\"" Feb 13 15:52:32.226902 containerd[1485]: time="2025-02-13T15:52:32.226801191Z" level=info msg="TearDown network for sandbox \"7b202e0a8250435728e02f8bde94c825ab1ad61219ed93302f1468cb1f73698f\" successfully" Feb 13 15:52:32.226902 containerd[1485]: time="2025-02-13T15:52:32.226815689Z" level=info msg="StopPodSandbox for \"7b202e0a8250435728e02f8bde94c825ab1ad61219ed93302f1468cb1f73698f\" returns successfully" Feb 13 15:52:32.227971 containerd[1485]: time="2025-02-13T15:52:32.227116202Z" level=info msg="StopPodSandbox for \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\"" Feb 13 15:52:32.227971 containerd[1485]: time="2025-02-13T15:52:32.227220919Z" level=info msg="TearDown network for sandbox \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\" successfully" Feb 13 15:52:32.227971 containerd[1485]: time="2025-02-13T15:52:32.227233622Z" level=info msg="StopPodSandbox for \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\" returns successfully" Feb 13 15:52:32.227971 containerd[1485]: time="2025-02-13T15:52:32.227628393Z" level=info msg="StopPodSandbox for \"115a7b90fd74f188605948db6c4a642fa6a2bc7b610ff5197e80bd7d90895b22\"" Feb 13 15:52:32.227971 containerd[1485]: time="2025-02-13T15:52:32.227712992Z" level=info msg="TearDown network for sandbox \"115a7b90fd74f188605948db6c4a642fa6a2bc7b610ff5197e80bd7d90895b22\" successfully" Feb 13 15:52:32.227971 containerd[1485]: time="2025-02-13T15:52:32.227722881Z" level=info msg="StopPodSandbox for \"115a7b90fd74f188605948db6c4a642fa6a2bc7b610ff5197e80bd7d90895b22\" returns successfully" Feb 13 15:52:32.227971 containerd[1485]: time="2025-02-13T15:52:32.227934538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g6vd2,Uid:10d7d66d-1867-4427-ba49-4c93c2b786fc,Namespace:calico-system,Attempt:6,}" Feb 13 15:52:32.228835 systemd[1]: run-netns-cni\x2db8adcce0\x2dd622\x2dbe6c\x2dbcae\x2d8644870ac161.mount: Deactivated successfully. 
Feb 13 15:52:32.229147 containerd[1485]: time="2025-02-13T15:52:32.229107619Z" level=info msg="StopPodSandbox for \"edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e\"" Feb 13 15:52:32.229366 containerd[1485]: time="2025-02-13T15:52:32.229237272Z" level=info msg="TearDown network for sandbox \"edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e\" successfully" Feb 13 15:52:32.229366 containerd[1485]: time="2025-02-13T15:52:32.229257961Z" level=info msg="StopPodSandbox for \"edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e\" returns successfully" Feb 13 15:52:32.229939 containerd[1485]: time="2025-02-13T15:52:32.229645398Z" level=info msg="StopPodSandbox for \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\"" Feb 13 15:52:32.229939 containerd[1485]: time="2025-02-13T15:52:32.229733243Z" level=info msg="TearDown network for sandbox \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\" successfully" Feb 13 15:52:32.229939 containerd[1485]: time="2025-02-13T15:52:32.229745947Z" level=info msg="StopPodSandbox for \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\" returns successfully" Feb 13 15:52:32.230220 containerd[1485]: time="2025-02-13T15:52:32.230191773Z" level=info msg="StopPodSandbox for \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\"" Feb 13 15:52:32.230399 containerd[1485]: time="2025-02-13T15:52:32.230376621Z" level=info msg="TearDown network for sandbox \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\" successfully" Feb 13 15:52:32.230399 containerd[1485]: time="2025-02-13T15:52:32.230394324Z" level=info msg="StopPodSandbox for \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\" returns successfully" Feb 13 15:52:32.231077 containerd[1485]: time="2025-02-13T15:52:32.231001753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-q5kq7,Uid:38b0921d-4d85-4317-86e9-1adbb9d6859a,Namespace:calico-apiserver,Attempt:7,}" Feb 13 15:52:32.250927 kubelet[2674]: E0213 15:52:32.250732 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:32.264678 kubelet[2674]: I0213 15:52:32.264621 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b9c49696b61cbc1a898b97dcb37cdb949d349feb359026807f0484fdfdcb5ed" Feb 13 15:52:32.265346 containerd[1485]: time="2025-02-13T15:52:32.265288682Z" level=info msg="StopPodSandbox for \"6b9c49696b61cbc1a898b97dcb37cdb949d349feb359026807f0484fdfdcb5ed\"" Feb 13 15:52:32.265545 containerd[1485]: time="2025-02-13T15:52:32.265517141Z" level=info msg="Ensure that sandbox 6b9c49696b61cbc1a898b97dcb37cdb949d349feb359026807f0484fdfdcb5ed in task-service has been cleanup successfully" Feb 13 15:52:32.265868 containerd[1485]: time="2025-02-13T15:52:32.265787868Z" level=info msg="TearDown network for sandbox \"6b9c49696b61cbc1a898b97dcb37cdb949d349feb359026807f0484fdfdcb5ed\" successfully" Feb 13 15:52:32.265868 containerd[1485]: time="2025-02-13T15:52:32.265808607Z" level=info msg="StopPodSandbox for \"6b9c49696b61cbc1a898b97dcb37cdb949d349feb359026807f0484fdfdcb5ed\" returns successfully" Feb 13 15:52:32.268284 containerd[1485]: time="2025-02-13T15:52:32.268001513Z" level=info msg="StopPodSandbox for \"41075bd5c5a54df827bd934f566aa5fa060a9e1df7e2165d52f5a0722dff2099\"" Feb 13 15:52:32.268284 containerd[1485]: 
time="2025-02-13T15:52:32.268122079Z" level=info msg="TearDown network for sandbox \"41075bd5c5a54df827bd934f566aa5fa060a9e1df7e2165d52f5a0722dff2099\" successfully" Feb 13 15:52:32.268284 containerd[1485]: time="2025-02-13T15:52:32.268135173Z" level=info msg="StopPodSandbox for \"41075bd5c5a54df827bd934f566aa5fa060a9e1df7e2165d52f5a0722dff2099\" returns successfully" Feb 13 15:52:32.268590 containerd[1485]: time="2025-02-13T15:52:32.268564599Z" level=info msg="StopPodSandbox for \"6866814210b1f4e770088dd2fa2c9a53cd4703acb433bff5f71bc8017784a2d9\"" Feb 13 15:52:32.268844 containerd[1485]: time="2025-02-13T15:52:32.268758333Z" level=info msg="TearDown network for sandbox \"6866814210b1f4e770088dd2fa2c9a53cd4703acb433bff5f71bc8017784a2d9\" successfully" Feb 13 15:52:32.268965 containerd[1485]: time="2025-02-13T15:52:32.268945995Z" level=info msg="StopPodSandbox for \"6866814210b1f4e770088dd2fa2c9a53cd4703acb433bff5f71bc8017784a2d9\" returns successfully" Feb 13 15:52:32.269031 kubelet[2674]: I0213 15:52:32.269003 2674 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-zpbbd" podStartSLOduration=1.899052371 podStartE2EDuration="25.268949051s" podCreationTimestamp="2025-02-13 15:52:07 +0000 UTC" firstStartedPulling="2025-02-13 15:52:07.623461206 +0000 UTC m=+27.278836915" lastFinishedPulling="2025-02-13 15:52:30.993357886 +0000 UTC m=+50.648733595" observedRunningTime="2025-02-13 15:52:32.26846423 +0000 UTC m=+51.923839939" watchObservedRunningTime="2025-02-13 15:52:32.268949051 +0000 UTC m=+51.924324760" Feb 13 15:52:32.269490 containerd[1485]: time="2025-02-13T15:52:32.269460981Z" level=info msg="StopPodSandbox for \"974f2e17d153fc4e2c7a7c80c26980ce937174687900b289462434d666fa80f4\"" Feb 13 15:52:32.271246 containerd[1485]: time="2025-02-13T15:52:32.271223097Z" level=info msg="TearDown network for sandbox \"974f2e17d153fc4e2c7a7c80c26980ce937174687900b289462434d666fa80f4\" successfully" Feb 13 15:52:32.271353 containerd[1485]: time="2025-02-13T15:52:32.271334095Z" level=info msg="StopPodSandbox for \"974f2e17d153fc4e2c7a7c80c26980ce937174687900b289462434d666fa80f4\" returns successfully" Feb 13 15:52:32.272022 systemd[1]: run-netns-cni\x2dfa8cea57\x2d8a4e\x2d6211\x2dc2ea\x2d3f787f15893d.mount: Deactivated successfully. 
Feb 13 15:52:32.273373 containerd[1485]: time="2025-02-13T15:52:32.272378646Z" level=info msg="StopPodSandbox for \"26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e\"" Feb 13 15:52:32.273373 containerd[1485]: time="2025-02-13T15:52:32.272483794Z" level=info msg="TearDown network for sandbox \"26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e\" successfully" Feb 13 15:52:32.273373 containerd[1485]: time="2025-02-13T15:52:32.272496738Z" level=info msg="StopPodSandbox for \"26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e\" returns successfully" Feb 13 15:52:32.273373 containerd[1485]: time="2025-02-13T15:52:32.272724816Z" level=info msg="StopPodSandbox for \"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\"" Feb 13 15:52:32.273373 containerd[1485]: time="2025-02-13T15:52:32.272815556Z" level=info msg="TearDown network for sandbox \"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\" successfully" Feb 13 15:52:32.273373 containerd[1485]: time="2025-02-13T15:52:32.272840643Z" level=info msg="StopPodSandbox for \"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\" returns successfully" Feb 13 15:52:32.273568 containerd[1485]: time="2025-02-13T15:52:32.273503857Z" level=info msg="StopPodSandbox for \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\"" Feb 13 15:52:32.273675 containerd[1485]: time="2025-02-13T15:52:32.273608744Z" level=info msg="TearDown network for sandbox \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\" successfully" Feb 13 15:52:32.273675 containerd[1485]: time="2025-02-13T15:52:32.273671472Z" level=info msg="StopPodSandbox for \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\" returns successfully" Feb 13 15:52:32.274982 kubelet[2674]: I0213 15:52:32.274948 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b380686f722dafcaa1a62759a902a86ae3db1a699917ec82e1e0c0d87e2c228" Feb 13 15:52:32.275635 containerd[1485]: time="2025-02-13T15:52:32.275588389Z" level=info msg="StopPodSandbox for \"2fac96938ffde6a93e81d8d00d323b4299dc6bbdddb08abfb5ce8cd0d008b3d9\"" Feb 13 15:52:32.275750 containerd[1485]: time="2025-02-13T15:52:32.275726408Z" level=info msg="TearDown network for sandbox \"2fac96938ffde6a93e81d8d00d323b4299dc6bbdddb08abfb5ce8cd0d008b3d9\" successfully" Feb 13 15:52:32.275750 containerd[1485]: time="2025-02-13T15:52:32.275745754Z" level=info msg="StopPodSandbox for \"2fac96938ffde6a93e81d8d00d323b4299dc6bbdddb08abfb5ce8cd0d008b3d9\" returns successfully" Feb 13 15:52:32.275930 containerd[1485]: time="2025-02-13T15:52:32.275883463Z" level=info msg="StopPodSandbox for \"e43fb613ca334456eb8a5b3e016eae6a232a97a1ce0ec5322588fdb82bf4383b\"" Feb 13 15:52:32.275993 containerd[1485]: time="2025-02-13T15:52:32.275971608Z" level=info msg="TearDown network for sandbox \"e43fb613ca334456eb8a5b3e016eae6a232a97a1ce0ec5322588fdb82bf4383b\" successfully" Feb 13 15:52:32.275993 containerd[1485]: time="2025-02-13T15:52:32.275988300Z" level=info msg="StopPodSandbox for \"e43fb613ca334456eb8a5b3e016eae6a232a97a1ce0ec5322588fdb82bf4383b\" returns successfully" Feb 13 15:52:32.276078 containerd[1485]: time="2025-02-13T15:52:32.276027303Z" level=info msg="StopPodSandbox for \"2b380686f722dafcaa1a62759a902a86ae3db1a699917ec82e1e0c0d87e2c228\"" Feb 13 15:52:32.276536 containerd[1485]: time="2025-02-13T15:52:32.276250732Z" level=info msg="Ensure that sandbox 
2b380686f722dafcaa1a62759a902a86ae3db1a699917ec82e1e0c0d87e2c228 in task-service has been cleanup successfully" Feb 13 15:52:32.276941 containerd[1485]: time="2025-02-13T15:52:32.276906583Z" level=info msg="StopPodSandbox for \"60429a89152784060d7f572c9878da40086e694a3190d5f2e95e78d7c271dd87\"" Feb 13 15:52:32.277401 containerd[1485]: time="2025-02-13T15:52:32.277191858Z" level=info msg="TearDown network for sandbox \"60429a89152784060d7f572c9878da40086e694a3190d5f2e95e78d7c271dd87\" successfully" Feb 13 15:52:32.277401 containerd[1485]: time="2025-02-13T15:52:32.277210813Z" level=info msg="StopPodSandbox for \"60429a89152784060d7f572c9878da40086e694a3190d5f2e95e78d7c271dd87\" returns successfully" Feb 13 15:52:32.277401 containerd[1485]: time="2025-02-13T15:52:32.277358851Z" level=info msg="StopPodSandbox for \"caab72e2c1808dc4011bfaf1f2cb16fad58a3babf1202c175475afcef0063ed1\"" Feb 13 15:52:32.277526 containerd[1485]: time="2025-02-13T15:52:32.277472745Z" level=info msg="TearDown network for sandbox \"caab72e2c1808dc4011bfaf1f2cb16fad58a3babf1202c175475afcef0063ed1\" successfully" Feb 13 15:52:32.277526 containerd[1485]: time="2025-02-13T15:52:32.277485819Z" level=info msg="StopPodSandbox for \"caab72e2c1808dc4011bfaf1f2cb16fad58a3babf1202c175475afcef0063ed1\" returns successfully" Feb 13 15:52:32.277945 containerd[1485]: time="2025-02-13T15:52:32.277624280Z" level=info msg="StopPodSandbox for \"89d75f92637d84161a3a05d642197773d8fe53597397d47ba98f36f363042782\"" Feb 13 15:52:32.277945 containerd[1485]: time="2025-02-13T15:52:32.277692748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-lpnmw,Uid:c19fe500-1919-460e-8572-964852191fc0,Namespace:calico-apiserver,Attempt:7,}" Feb 13 15:52:32.277945 containerd[1485]: time="2025-02-13T15:52:32.277711383Z" level=info msg="TearDown network for sandbox \"89d75f92637d84161a3a05d642197773d8fe53597397d47ba98f36f363042782\" successfully" Feb 13 15:52:32.277945 containerd[1485]: time="2025-02-13T15:52:32.277724949Z" level=info msg="StopPodSandbox for \"89d75f92637d84161a3a05d642197773d8fe53597397d47ba98f36f363042782\" returns successfully" Feb 13 15:52:32.277945 containerd[1485]: time="2025-02-13T15:52:32.277835165Z" level=info msg="StopPodSandbox for \"ece0ba20c877e7e859caada2183dd1ddc532d1528c0bed4bc30b53e69cf4b158\"" Feb 13 15:52:32.277945 containerd[1485]: time="2025-02-13T15:52:32.277921156Z" level=info msg="TearDown network for sandbox \"ece0ba20c877e7e859caada2183dd1ddc532d1528c0bed4bc30b53e69cf4b158\" successfully" Feb 13 15:52:32.277945 containerd[1485]: time="2025-02-13T15:52:32.277934041Z" level=info msg="StopPodSandbox for \"ece0ba20c877e7e859caada2183dd1ddc532d1528c0bed4bc30b53e69cf4b158\" returns successfully" Feb 13 15:52:32.280200 containerd[1485]: time="2025-02-13T15:52:32.279966023Z" level=info msg="TearDown network for sandbox \"2b380686f722dafcaa1a62759a902a86ae3db1a699917ec82e1e0c0d87e2c228\" successfully" Feb 13 15:52:32.280200 containerd[1485]: time="2025-02-13T15:52:32.280019464Z" level=info msg="StopPodSandbox for \"2b380686f722dafcaa1a62759a902a86ae3db1a699917ec82e1e0c0d87e2c228\" returns successfully" Feb 13 15:52:32.280338 containerd[1485]: time="2025-02-13T15:52:32.280219308Z" level=info msg="StopPodSandbox for \"ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a\"" Feb 13 15:52:32.280725 containerd[1485]: time="2025-02-13T15:52:32.280699460Z" level=info msg="TearDown network for sandbox \"ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a\" successfully" 
Feb 13 15:52:32.280725 containerd[1485]: time="2025-02-13T15:52:32.280721752Z" level=info msg="StopPodSandbox for \"ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a\" returns successfully" Feb 13 15:52:32.280847 containerd[1485]: time="2025-02-13T15:52:32.280782726Z" level=info msg="StopPodSandbox for \"b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70\"" Feb 13 15:52:32.281156 containerd[1485]: time="2025-02-13T15:52:32.280877003Z" level=info msg="TearDown network for sandbox \"b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70\" successfully" Feb 13 15:52:32.281156 containerd[1485]: time="2025-02-13T15:52:32.280890889Z" level=info msg="StopPodSandbox for \"b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70\" returns successfully" Feb 13 15:52:32.281830 containerd[1485]: time="2025-02-13T15:52:32.281287884Z" level=info msg="StopPodSandbox for \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\"" Feb 13 15:52:32.281830 containerd[1485]: time="2025-02-13T15:52:32.281378985Z" level=info msg="TearDown network for sandbox \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\" successfully" Feb 13 15:52:32.281830 containerd[1485]: time="2025-02-13T15:52:32.281390387Z" level=info msg="StopPodSandbox for \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\" returns successfully" Feb 13 15:52:32.281830 containerd[1485]: time="2025-02-13T15:52:32.281449658Z" level=info msg="StopPodSandbox for \"d5aaeba253086d20016ba38f2df123d706efab77777bd4e827a20c39f90c20d7\"" Feb 13 15:52:32.281830 containerd[1485]: time="2025-02-13T15:52:32.281523346Z" level=info msg="TearDown network for sandbox \"d5aaeba253086d20016ba38f2df123d706efab77777bd4e827a20c39f90c20d7\" successfully" Feb 13 15:52:32.281830 containerd[1485]: time="2025-02-13T15:52:32.281532233Z" level=info msg="StopPodSandbox for \"d5aaeba253086d20016ba38f2df123d706efab77777bd4e827a20c39f90c20d7\" returns successfully" Feb 13 15:52:32.281830 containerd[1485]: time="2025-02-13T15:52:32.281563832Z" level=info msg="StopPodSandbox for \"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\"" Feb 13 15:52:32.281830 containerd[1485]: time="2025-02-13T15:52:32.281622201Z" level=info msg="TearDown network for sandbox \"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\" successfully" Feb 13 15:52:32.281830 containerd[1485]: time="2025-02-13T15:52:32.281630146Z" level=info msg="StopPodSandbox for \"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\" returns successfully" Feb 13 15:52:32.282805 containerd[1485]: time="2025-02-13T15:52:32.282246904Z" level=info msg="StopPodSandbox for \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\"" Feb 13 15:52:32.282805 containerd[1485]: time="2025-02-13T15:52:32.282336171Z" level=info msg="TearDown network for sandbox \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\" successfully" Feb 13 15:52:32.282805 containerd[1485]: time="2025-02-13T15:52:32.282349556Z" level=info msg="StopPodSandbox for \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\" returns successfully" Feb 13 15:52:32.282805 containerd[1485]: time="2025-02-13T15:52:32.282398388Z" level=info msg="StopPodSandbox for \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\"" Feb 13 15:52:32.282805 containerd[1485]: time="2025-02-13T15:52:32.282480812Z" level=info msg="TearDown network for sandbox 
\"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\" successfully" Feb 13 15:52:32.282805 containerd[1485]: time="2025-02-13T15:52:32.282492124Z" level=info msg="StopPodSandbox for \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\" returns successfully" Feb 13 15:52:32.282805 containerd[1485]: time="2025-02-13T15:52:32.282534082Z" level=info msg="StopPodSandbox for \"81219097a27dcb504a3f76789f134167e37f8265b25d724bdd46b64457cd0fd0\"" Feb 13 15:52:32.282805 containerd[1485]: time="2025-02-13T15:52:32.282610696Z" level=info msg="TearDown network for sandbox \"81219097a27dcb504a3f76789f134167e37f8265b25d724bdd46b64457cd0fd0\" successfully" Feb 13 15:52:32.282805 containerd[1485]: time="2025-02-13T15:52:32.282622488Z" level=info msg="StopPodSandbox for \"81219097a27dcb504a3f76789f134167e37f8265b25d724bdd46b64457cd0fd0\" returns successfully" Feb 13 15:52:32.283379 kubelet[2674]: E0213 15:52:32.283353 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:32.283929 containerd[1485]: time="2025-02-13T15:52:32.283908742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-45d4j,Uid:19512d1a-36c6-49de-8177-c4d469d03fc5,Namespace:kube-system,Attempt:6,}" Feb 13 15:52:32.285252 containerd[1485]: time="2025-02-13T15:52:32.285206567Z" level=info msg="StopPodSandbox for \"a907415d608139846599318ab511c71cb2815fc522f90b9abae68d6a17bd3efe\"" Feb 13 15:52:32.285394 containerd[1485]: time="2025-02-13T15:52:32.285358873Z" level=info msg="TearDown network for sandbox \"a907415d608139846599318ab511c71cb2815fc522f90b9abae68d6a17bd3efe\" successfully" Feb 13 15:52:32.285394 containerd[1485]: time="2025-02-13T15:52:32.285381726Z" level=info msg="StopPodSandbox for \"a907415d608139846599318ab511c71cb2815fc522f90b9abae68d6a17bd3efe\" returns successfully" Feb 13 15:52:32.286033 containerd[1485]: time="2025-02-13T15:52:32.286001249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d59db744-jwpsr,Uid:6af1b9f5-51e6-4450-99d8-629fc2031232,Namespace:calico-system,Attempt:6,}" Feb 13 15:52:32.286628 containerd[1485]: time="2025-02-13T15:52:32.286606975Z" level=info msg="StopPodSandbox for \"ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb\"" Feb 13 15:52:32.286863 containerd[1485]: time="2025-02-13T15:52:32.286845222Z" level=info msg="TearDown network for sandbox \"ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb\" successfully" Feb 13 15:52:32.286951 containerd[1485]: time="2025-02-13T15:52:32.286934139Z" level=info msg="StopPodSandbox for \"ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb\" returns successfully" Feb 13 15:52:32.287396 containerd[1485]: time="2025-02-13T15:52:32.287375968Z" level=info msg="StopPodSandbox for \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\"" Feb 13 15:52:32.287581 containerd[1485]: time="2025-02-13T15:52:32.287564943Z" level=info msg="TearDown network for sandbox \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\" successfully" Feb 13 15:52:32.287708 containerd[1485]: time="2025-02-13T15:52:32.287672204Z" level=info msg="StopPodSandbox for \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\" returns successfully" Feb 13 15:52:32.288675 containerd[1485]: time="2025-02-13T15:52:32.288641844Z" level=info msg="StopPodSandbox for 
\"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\"" Feb 13 15:52:32.288898 containerd[1485]: time="2025-02-13T15:52:32.288859101Z" level=info msg="TearDown network for sandbox \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\" successfully" Feb 13 15:52:32.289056 containerd[1485]: time="2025-02-13T15:52:32.288976211Z" level=info msg="StopPodSandbox for \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\" returns successfully" Feb 13 15:52:32.289492 kubelet[2674]: E0213 15:52:32.289471 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:32.290095 containerd[1485]: time="2025-02-13T15:52:32.289816848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mlzzh,Uid:08a0764f-6eaa-4b6b-8f68-f508a36d326a,Namespace:kube-system,Attempt:7,}" Feb 13 15:52:32.644404 systemd-networkd[1423]: cali929261f6647: Link UP Feb 13 15:52:32.645986 systemd-networkd[1423]: cali929261f6647: Gained carrier Feb 13 15:52:32.699288 systemd[1]: Started sshd@13-10.0.0.80:22-10.0.0.1:36880.service - OpenSSH per-connection server daemon (10.0.0.1:36880). Feb 13 15:52:32.877223 kernel: bpftool[5336]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 15:52:32.940947 systemd[1]: run-netns-cni\x2d373d25f8\x2dd57e\x2db882\x2d7890\x2dcdab9d8222c7.mount: Deactivated successfully. Feb 13 15:52:32.963722 sshd[5321]: Accepted publickey for core from 10.0.0.1 port 36880 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:52:32.965517 sshd-session[5321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:52:32.969383 systemd-logind[1471]: New session 14 of user core. Feb 13 15:52:32.979189 systemd[1]: Started session-14.scope - Session 14 of User core. 
Feb 13 15:52:33.226281 containerd[1485]: 2025-02-13 15:52:32.292 [INFO][5174] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:52:33.226281 containerd[1485]: 2025-02-13 15:52:32.307 [INFO][5174] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--g6vd2-eth0 csi-node-driver- calico-system 10d7d66d-1867-4427-ba49-4c93c2b786fc 619 0 2025-02-13 15:52:07 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-g6vd2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali929261f6647 [] []}} ContainerID="093a7de41aebed2185b80b57c0d1787459ad5d691c6afb3c58645441a2c66148" Namespace="calico-system" Pod="csi-node-driver-g6vd2" WorkloadEndpoint="localhost-k8s-csi--node--driver--g6vd2-" Feb 13 15:52:33.226281 containerd[1485]: 2025-02-13 15:52:32.307 [INFO][5174] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="093a7de41aebed2185b80b57c0d1787459ad5d691c6afb3c58645441a2c66148" Namespace="calico-system" Pod="csi-node-driver-g6vd2" WorkloadEndpoint="localhost-k8s-csi--node--driver--g6vd2-eth0" Feb 13 15:52:33.226281 containerd[1485]: 2025-02-13 15:52:32.467 [INFO][5208] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="093a7de41aebed2185b80b57c0d1787459ad5d691c6afb3c58645441a2c66148" HandleID="k8s-pod-network.093a7de41aebed2185b80b57c0d1787459ad5d691c6afb3c58645441a2c66148" Workload="localhost-k8s-csi--node--driver--g6vd2-eth0" Feb 13 15:52:33.226281 containerd[1485]: 2025-02-13 15:52:32.481 [INFO][5208] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="093a7de41aebed2185b80b57c0d1787459ad5d691c6afb3c58645441a2c66148" HandleID="k8s-pod-network.093a7de41aebed2185b80b57c0d1787459ad5d691c6afb3c58645441a2c66148" Workload="localhost-k8s-csi--node--driver--g6vd2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dd270), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-g6vd2", "timestamp":"2025-02-13 15:52:32.467194714 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:52:33.226281 containerd[1485]: 2025-02-13 15:52:32.481 [INFO][5208] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:52:33.226281 containerd[1485]: 2025-02-13 15:52:32.481 [INFO][5208] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:52:33.226281 containerd[1485]: 2025-02-13 15:52:32.481 [INFO][5208] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:52:33.226281 containerd[1485]: 2025-02-13 15:52:32.482 [INFO][5208] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.093a7de41aebed2185b80b57c0d1787459ad5d691c6afb3c58645441a2c66148" host="localhost" Feb 13 15:52:33.226281 containerd[1485]: 2025-02-13 15:52:32.486 [INFO][5208] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:52:33.226281 containerd[1485]: 2025-02-13 15:52:32.489 [INFO][5208] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:52:33.226281 containerd[1485]: 2025-02-13 15:52:32.491 [INFO][5208] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:52:33.226281 containerd[1485]: 2025-02-13 15:52:32.492 [INFO][5208] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:52:33.226281 containerd[1485]: 2025-02-13 15:52:32.492 [INFO][5208] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.093a7de41aebed2185b80b57c0d1787459ad5d691c6afb3c58645441a2c66148" host="localhost" Feb 13 15:52:33.226281 containerd[1485]: 2025-02-13 15:52:32.494 [INFO][5208] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.093a7de41aebed2185b80b57c0d1787459ad5d691c6afb3c58645441a2c66148 Feb 13 15:52:33.226281 containerd[1485]: 2025-02-13 15:52:32.542 [INFO][5208] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.093a7de41aebed2185b80b57c0d1787459ad5d691c6afb3c58645441a2c66148" host="localhost" Feb 13 15:52:33.226281 containerd[1485]: 2025-02-13 15:52:32.573 [INFO][5208] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.093a7de41aebed2185b80b57c0d1787459ad5d691c6afb3c58645441a2c66148" host="localhost" Feb 13 15:52:33.226281 containerd[1485]: 2025-02-13 15:52:32.573 [INFO][5208] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.093a7de41aebed2185b80b57c0d1787459ad5d691c6afb3c58645441a2c66148" host="localhost" Feb 13 15:52:33.226281 containerd[1485]: 2025-02-13 15:52:32.573 [INFO][5208] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:52:33.226281 containerd[1485]: 2025-02-13 15:52:32.573 [INFO][5208] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="093a7de41aebed2185b80b57c0d1787459ad5d691c6afb3c58645441a2c66148" HandleID="k8s-pod-network.093a7de41aebed2185b80b57c0d1787459ad5d691c6afb3c58645441a2c66148" Workload="localhost-k8s-csi--node--driver--g6vd2-eth0" Feb 13 15:52:33.227257 containerd[1485]: 2025-02-13 15:52:32.577 [INFO][5174] cni-plugin/k8s.go 386: Populated endpoint ContainerID="093a7de41aebed2185b80b57c0d1787459ad5d691c6afb3c58645441a2c66148" Namespace="calico-system" Pod="csi-node-driver-g6vd2" WorkloadEndpoint="localhost-k8s-csi--node--driver--g6vd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--g6vd2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"10d7d66d-1867-4427-ba49-4c93c2b786fc", ResourceVersion:"619", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 52, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-g6vd2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali929261f6647", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:52:33.227257 containerd[1485]: 2025-02-13 15:52:32.577 [INFO][5174] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="093a7de41aebed2185b80b57c0d1787459ad5d691c6afb3c58645441a2c66148" Namespace="calico-system" Pod="csi-node-driver-g6vd2" WorkloadEndpoint="localhost-k8s-csi--node--driver--g6vd2-eth0" Feb 13 15:52:33.227257 containerd[1485]: 2025-02-13 15:52:32.577 [INFO][5174] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali929261f6647 ContainerID="093a7de41aebed2185b80b57c0d1787459ad5d691c6afb3c58645441a2c66148" Namespace="calico-system" Pod="csi-node-driver-g6vd2" WorkloadEndpoint="localhost-k8s-csi--node--driver--g6vd2-eth0" Feb 13 15:52:33.227257 containerd[1485]: 2025-02-13 15:52:32.920 [INFO][5174] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="093a7de41aebed2185b80b57c0d1787459ad5d691c6afb3c58645441a2c66148" Namespace="calico-system" Pod="csi-node-driver-g6vd2" WorkloadEndpoint="localhost-k8s-csi--node--driver--g6vd2-eth0" Feb 13 15:52:33.227257 containerd[1485]: 2025-02-13 15:52:32.920 [INFO][5174] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="093a7de41aebed2185b80b57c0d1787459ad5d691c6afb3c58645441a2c66148" Namespace="calico-system" Pod="csi-node-driver-g6vd2" WorkloadEndpoint="localhost-k8s-csi--node--driver--g6vd2-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--g6vd2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"10d7d66d-1867-4427-ba49-4c93c2b786fc", ResourceVersion:"619", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 52, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"093a7de41aebed2185b80b57c0d1787459ad5d691c6afb3c58645441a2c66148", Pod:"csi-node-driver-g6vd2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali929261f6647", MAC:"42:10:0f:2f:e1:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:52:33.227257 containerd[1485]: 2025-02-13 15:52:33.224 [INFO][5174] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="093a7de41aebed2185b80b57c0d1787459ad5d691c6afb3c58645441a2c66148" Namespace="calico-system" Pod="csi-node-driver-g6vd2" WorkloadEndpoint="localhost-k8s-csi--node--driver--g6vd2-eth0" Feb 13 15:52:33.277306 kubelet[2674]: E0213 15:52:33.277275 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:33.374664 systemd-networkd[1423]: vxlan.calico: Link UP Feb 13 15:52:33.374675 systemd-networkd[1423]: vxlan.calico: Gained carrier Feb 13 15:52:33.558602 systemd-networkd[1423]: cali162d1fe861b: Link UP Feb 13 15:52:33.559561 systemd-networkd[1423]: cali162d1fe861b: Gained carrier Feb 13 15:52:33.584179 sshd[5356]: Connection closed by 10.0.0.1 port 36880 Feb 13 15:52:33.583460 sshd-session[5321]: pam_unix(sshd:session): session closed for user core Feb 13 15:52:33.590220 containerd[1485]: 2025-02-13 15:52:32.298 [INFO][5186] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:52:33.590220 containerd[1485]: 2025-02-13 15:52:32.306 [INFO][5186] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7db6857c7b--q5kq7-eth0 calico-apiserver-7db6857c7b- calico-apiserver 38b0921d-4d85-4317-86e9-1adbb9d6859a 799 0 2025-02-13 15:52:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7db6857c7b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7db6857c7b-q5kq7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali162d1fe861b [] []}} 
ContainerID="55cb16260eb8363d96437cab8e154bd7f1d3e1d8f5cb8f0f259f8eb253ced6f7" Namespace="calico-apiserver" Pod="calico-apiserver-7db6857c7b-q5kq7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db6857c7b--q5kq7-" Feb 13 15:52:33.590220 containerd[1485]: 2025-02-13 15:52:32.307 [INFO][5186] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="55cb16260eb8363d96437cab8e154bd7f1d3e1d8f5cb8f0f259f8eb253ced6f7" Namespace="calico-apiserver" Pod="calico-apiserver-7db6857c7b-q5kq7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db6857c7b--q5kq7-eth0" Feb 13 15:52:33.590220 containerd[1485]: 2025-02-13 15:52:32.462 [INFO][5200] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="55cb16260eb8363d96437cab8e154bd7f1d3e1d8f5cb8f0f259f8eb253ced6f7" HandleID="k8s-pod-network.55cb16260eb8363d96437cab8e154bd7f1d3e1d8f5cb8f0f259f8eb253ced6f7" Workload="localhost-k8s-calico--apiserver--7db6857c7b--q5kq7-eth0" Feb 13 15:52:33.590220 containerd[1485]: 2025-02-13 15:52:32.481 [INFO][5200] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="55cb16260eb8363d96437cab8e154bd7f1d3e1d8f5cb8f0f259f8eb253ced6f7" HandleID="k8s-pod-network.55cb16260eb8363d96437cab8e154bd7f1d3e1d8f5cb8f0f259f8eb253ced6f7" Workload="localhost-k8s-calico--apiserver--7db6857c7b--q5kq7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000374a60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7db6857c7b-q5kq7", "timestamp":"2025-02-13 15:52:32.462138185 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:52:33.590220 containerd[1485]: 2025-02-13 15:52:32.481 [INFO][5200] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:52:33.590220 containerd[1485]: 2025-02-13 15:52:32.574 [INFO][5200] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:52:33.590220 containerd[1485]: 2025-02-13 15:52:32.575 [INFO][5200] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:52:33.590220 containerd[1485]: 2025-02-13 15:52:32.736 [INFO][5200] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.55cb16260eb8363d96437cab8e154bd7f1d3e1d8f5cb8f0f259f8eb253ced6f7" host="localhost" Feb 13 15:52:33.590220 containerd[1485]: 2025-02-13 15:52:32.833 [INFO][5200] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:52:33.590220 containerd[1485]: 2025-02-13 15:52:33.224 [INFO][5200] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:52:33.590220 containerd[1485]: 2025-02-13 15:52:33.256 [INFO][5200] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:52:33.590220 containerd[1485]: 2025-02-13 15:52:33.258 [INFO][5200] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:52:33.590220 containerd[1485]: 2025-02-13 15:52:33.258 [INFO][5200] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.55cb16260eb8363d96437cab8e154bd7f1d3e1d8f5cb8f0f259f8eb253ced6f7" host="localhost" Feb 13 15:52:33.590220 containerd[1485]: 2025-02-13 15:52:33.259 [INFO][5200] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.55cb16260eb8363d96437cab8e154bd7f1d3e1d8f5cb8f0f259f8eb253ced6f7 Feb 13 15:52:33.590220 containerd[1485]: 2025-02-13 15:52:33.353 [INFO][5200] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.55cb16260eb8363d96437cab8e154bd7f1d3e1d8f5cb8f0f259f8eb253ced6f7" host="localhost" Feb 13 15:52:33.590220 containerd[1485]: 2025-02-13 15:52:33.548 [INFO][5200] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.55cb16260eb8363d96437cab8e154bd7f1d3e1d8f5cb8f0f259f8eb253ced6f7" host="localhost" Feb 13 15:52:33.590220 containerd[1485]: 2025-02-13 15:52:33.548 [INFO][5200] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.55cb16260eb8363d96437cab8e154bd7f1d3e1d8f5cb8f0f259f8eb253ced6f7" host="localhost" Feb 13 15:52:33.590220 containerd[1485]: 2025-02-13 15:52:33.549 [INFO][5200] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:52:33.590220 containerd[1485]: 2025-02-13 15:52:33.549 [INFO][5200] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="55cb16260eb8363d96437cab8e154bd7f1d3e1d8f5cb8f0f259f8eb253ced6f7" HandleID="k8s-pod-network.55cb16260eb8363d96437cab8e154bd7f1d3e1d8f5cb8f0f259f8eb253ced6f7" Workload="localhost-k8s-calico--apiserver--7db6857c7b--q5kq7-eth0" Feb 13 15:52:33.590846 containerd[1485]: 2025-02-13 15:52:33.552 [INFO][5186] cni-plugin/k8s.go 386: Populated endpoint ContainerID="55cb16260eb8363d96437cab8e154bd7f1d3e1d8f5cb8f0f259f8eb253ced6f7" Namespace="calico-apiserver" Pod="calico-apiserver-7db6857c7b-q5kq7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db6857c7b--q5kq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7db6857c7b--q5kq7-eth0", GenerateName:"calico-apiserver-7db6857c7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"38b0921d-4d85-4317-86e9-1adbb9d6859a", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 52, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7db6857c7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7db6857c7b-q5kq7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali162d1fe861b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:52:33.590846 containerd[1485]: 2025-02-13 15:52:33.552 [INFO][5186] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="55cb16260eb8363d96437cab8e154bd7f1d3e1d8f5cb8f0f259f8eb253ced6f7" Namespace="calico-apiserver" Pod="calico-apiserver-7db6857c7b-q5kq7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db6857c7b--q5kq7-eth0" Feb 13 15:52:33.590846 containerd[1485]: 2025-02-13 15:52:33.552 [INFO][5186] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali162d1fe861b ContainerID="55cb16260eb8363d96437cab8e154bd7f1d3e1d8f5cb8f0f259f8eb253ced6f7" Namespace="calico-apiserver" Pod="calico-apiserver-7db6857c7b-q5kq7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db6857c7b--q5kq7-eth0" Feb 13 15:52:33.590846 containerd[1485]: 2025-02-13 15:52:33.558 [INFO][5186] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="55cb16260eb8363d96437cab8e154bd7f1d3e1d8f5cb8f0f259f8eb253ced6f7" Namespace="calico-apiserver" Pod="calico-apiserver-7db6857c7b-q5kq7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db6857c7b--q5kq7-eth0" Feb 13 15:52:33.590846 containerd[1485]: 2025-02-13 15:52:33.559 [INFO][5186] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="55cb16260eb8363d96437cab8e154bd7f1d3e1d8f5cb8f0f259f8eb253ced6f7" Namespace="calico-apiserver" Pod="calico-apiserver-7db6857c7b-q5kq7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db6857c7b--q5kq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7db6857c7b--q5kq7-eth0", GenerateName:"calico-apiserver-7db6857c7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"38b0921d-4d85-4317-86e9-1adbb9d6859a", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 52, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7db6857c7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"55cb16260eb8363d96437cab8e154bd7f1d3e1d8f5cb8f0f259f8eb253ced6f7", Pod:"calico-apiserver-7db6857c7b-q5kq7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali162d1fe861b", MAC:"2e:5e:8e:35:f8:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:52:33.590846 containerd[1485]: 2025-02-13 15:52:33.575 [INFO][5186] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="55cb16260eb8363d96437cab8e154bd7f1d3e1d8f5cb8f0f259f8eb253ced6f7" Namespace="calico-apiserver" Pod="calico-apiserver-7db6857c7b-q5kq7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db6857c7b--q5kq7-eth0" Feb 13 15:52:33.594213 systemd[1]: sshd@13-10.0.0.80:22-10.0.0.1:36880.service: Deactivated successfully. Feb 13 15:52:33.597455 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:52:33.604230 systemd-logind[1471]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:52:33.610554 systemd[1]: Started sshd@14-10.0.0.80:22-10.0.0.1:36888.service - OpenSSH per-connection server daemon (10.0.0.1:36888). Feb 13 15:52:33.613091 containerd[1485]: time="2025-02-13T15:52:33.612747471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:52:33.613091 containerd[1485]: time="2025-02-13T15:52:33.612839172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:52:33.613091 containerd[1485]: time="2025-02-13T15:52:33.612854321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:52:33.613091 containerd[1485]: time="2025-02-13T15:52:33.612971260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:52:33.612801 systemd-logind[1471]: Removed session 14. 
Feb 13 15:52:33.655587 systemd[1]: Started cri-containerd-093a7de41aebed2185b80b57c0d1787459ad5d691c6afb3c58645441a2c66148.scope - libcontainer container 093a7de41aebed2185b80b57c0d1787459ad5d691c6afb3c58645441a2c66148. Feb 13 15:52:33.659344 sshd[5493]: Accepted publickey for core from 10.0.0.1 port 36888 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:52:33.663740 sshd-session[5493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:52:33.670271 systemd-logind[1471]: New session 15 of user core. Feb 13 15:52:33.679297 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:52:33.684217 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:52:33.705145 containerd[1485]: time="2025-02-13T15:52:33.701575664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g6vd2,Uid:10d7d66d-1867-4427-ba49-4c93c2b786fc,Namespace:calico-system,Attempt:6,} returns sandbox id \"093a7de41aebed2185b80b57c0d1787459ad5d691c6afb3c58645441a2c66148\"" Feb 13 15:52:33.705145 containerd[1485]: time="2025-02-13T15:52:33.704344209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 15:52:33.738202 systemd-networkd[1423]: cali929261f6647: Gained IPv6LL Feb 13 15:52:33.763097 containerd[1485]: time="2025-02-13T15:52:33.762968872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:52:33.763097 containerd[1485]: time="2025-02-13T15:52:33.763027012Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:52:33.763097 containerd[1485]: time="2025-02-13T15:52:33.763039946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:52:33.763304 containerd[1485]: time="2025-02-13T15:52:33.763142339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:52:33.790494 systemd[1]: Started cri-containerd-55cb16260eb8363d96437cab8e154bd7f1d3e1d8f5cb8f0f259f8eb253ced6f7.scope - libcontainer container 55cb16260eb8363d96437cab8e154bd7f1d3e1d8f5cb8f0f259f8eb253ced6f7. 
Feb 13 15:52:33.817511 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:52:33.889295 systemd-networkd[1423]: cali0c20e3dbfba: Link UP Feb 13 15:52:33.889683 systemd-networkd[1423]: cali0c20e3dbfba: Gained carrier Feb 13 15:52:33.891810 containerd[1485]: time="2025-02-13T15:52:33.891775804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-q5kq7,Uid:38b0921d-4d85-4317-86e9-1adbb9d6859a,Namespace:calico-apiserver,Attempt:7,} returns sandbox id \"55cb16260eb8363d96437cab8e154bd7f1d3e1d8f5cb8f0f259f8eb253ced6f7\"" Feb 13 15:52:33.911435 systemd-networkd[1423]: cali33fa7b45e3c: Link UP Feb 13 15:52:33.911647 systemd-networkd[1423]: cali33fa7b45e3c: Gained carrier Feb 13 15:52:33.917171 containerd[1485]: 2025-02-13 15:52:33.637 [INFO][5441] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7db6857c7b--lpnmw-eth0 calico-apiserver-7db6857c7b- calico-apiserver c19fe500-1919-460e-8572-964852191fc0 800 0 2025-02-13 15:52:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7db6857c7b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7db6857c7b-lpnmw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0c20e3dbfba [] []}} ContainerID="1f201f44993387c0250288e8c9e09bc524a3b11e17a70158929c26b36be07b58" Namespace="calico-apiserver" Pod="calico-apiserver-7db6857c7b-lpnmw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db6857c7b--lpnmw-" Feb 13 15:52:33.917171 containerd[1485]: 2025-02-13 15:52:33.638 [INFO][5441] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1f201f44993387c0250288e8c9e09bc524a3b11e17a70158929c26b36be07b58" Namespace="calico-apiserver" Pod="calico-apiserver-7db6857c7b-lpnmw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db6857c7b--lpnmw-eth0" Feb 13 15:52:33.917171 containerd[1485]: 2025-02-13 15:52:33.694 [INFO][5516] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f201f44993387c0250288e8c9e09bc524a3b11e17a70158929c26b36be07b58" HandleID="k8s-pod-network.1f201f44993387c0250288e8c9e09bc524a3b11e17a70158929c26b36be07b58" Workload="localhost-k8s-calico--apiserver--7db6857c7b--lpnmw-eth0" Feb 13 15:52:33.917171 containerd[1485]: 2025-02-13 15:52:33.779 [INFO][5516] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1f201f44993387c0250288e8c9e09bc524a3b11e17a70158929c26b36be07b58" HandleID="k8s-pod-network.1f201f44993387c0250288e8c9e09bc524a3b11e17a70158929c26b36be07b58" Workload="localhost-k8s-calico--apiserver--7db6857c7b--lpnmw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dec30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7db6857c7b-lpnmw", "timestamp":"2025-02-13 15:52:33.694679022 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:52:33.917171 containerd[1485]: 2025-02-13 15:52:33.779 [INFO][5516] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 15:52:33.917171 containerd[1485]: 2025-02-13 15:52:33.779 [INFO][5516] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:52:33.917171 containerd[1485]: 2025-02-13 15:52:33.779 [INFO][5516] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:52:33.917171 containerd[1485]: 2025-02-13 15:52:33.787 [INFO][5516] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1f201f44993387c0250288e8c9e09bc524a3b11e17a70158929c26b36be07b58" host="localhost" Feb 13 15:52:33.917171 containerd[1485]: 2025-02-13 15:52:33.792 [INFO][5516] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:52:33.917171 containerd[1485]: 2025-02-13 15:52:33.797 [INFO][5516] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:52:33.917171 containerd[1485]: 2025-02-13 15:52:33.806 [INFO][5516] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:52:33.917171 containerd[1485]: 2025-02-13 15:52:33.828 [INFO][5516] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:52:33.917171 containerd[1485]: 2025-02-13 15:52:33.829 [INFO][5516] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1f201f44993387c0250288e8c9e09bc524a3b11e17a70158929c26b36be07b58" host="localhost" Feb 13 15:52:33.917171 containerd[1485]: 2025-02-13 15:52:33.832 [INFO][5516] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1f201f44993387c0250288e8c9e09bc524a3b11e17a70158929c26b36be07b58 Feb 13 15:52:33.917171 containerd[1485]: 2025-02-13 15:52:33.848 [INFO][5516] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1f201f44993387c0250288e8c9e09bc524a3b11e17a70158929c26b36be07b58" host="localhost" Feb 13 15:52:33.917171 containerd[1485]: 2025-02-13 15:52:33.856 [INFO][5516] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.1f201f44993387c0250288e8c9e09bc524a3b11e17a70158929c26b36be07b58" host="localhost" Feb 13 15:52:33.917171 containerd[1485]: 2025-02-13 15:52:33.857 [INFO][5516] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1f201f44993387c0250288e8c9e09bc524a3b11e17a70158929c26b36be07b58" host="localhost" Feb 13 15:52:33.917171 containerd[1485]: 2025-02-13 15:52:33.857 [INFO][5516] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:52:33.917171 containerd[1485]: 2025-02-13 15:52:33.857 [INFO][5516] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1f201f44993387c0250288e8c9e09bc524a3b11e17a70158929c26b36be07b58" HandleID="k8s-pod-network.1f201f44993387c0250288e8c9e09bc524a3b11e17a70158929c26b36be07b58" Workload="localhost-k8s-calico--apiserver--7db6857c7b--lpnmw-eth0" Feb 13 15:52:33.919490 containerd[1485]: 2025-02-13 15:52:33.867 [INFO][5441] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1f201f44993387c0250288e8c9e09bc524a3b11e17a70158929c26b36be07b58" Namespace="calico-apiserver" Pod="calico-apiserver-7db6857c7b-lpnmw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db6857c7b--lpnmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7db6857c7b--lpnmw-eth0", GenerateName:"calico-apiserver-7db6857c7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"c19fe500-1919-460e-8572-964852191fc0", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 52, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7db6857c7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7db6857c7b-lpnmw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0c20e3dbfba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:52:33.919490 containerd[1485]: 2025-02-13 15:52:33.869 [INFO][5441] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1f201f44993387c0250288e8c9e09bc524a3b11e17a70158929c26b36be07b58" Namespace="calico-apiserver" Pod="calico-apiserver-7db6857c7b-lpnmw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db6857c7b--lpnmw-eth0" Feb 13 15:52:33.919490 containerd[1485]: 2025-02-13 15:52:33.873 [INFO][5441] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0c20e3dbfba ContainerID="1f201f44993387c0250288e8c9e09bc524a3b11e17a70158929c26b36be07b58" Namespace="calico-apiserver" Pod="calico-apiserver-7db6857c7b-lpnmw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db6857c7b--lpnmw-eth0" Feb 13 15:52:33.919490 containerd[1485]: 2025-02-13 15:52:33.894 [INFO][5441] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f201f44993387c0250288e8c9e09bc524a3b11e17a70158929c26b36be07b58" Namespace="calico-apiserver" Pod="calico-apiserver-7db6857c7b-lpnmw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db6857c7b--lpnmw-eth0" Feb 13 15:52:33.919490 containerd[1485]: 2025-02-13 15:52:33.900 [INFO][5441] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1f201f44993387c0250288e8c9e09bc524a3b11e17a70158929c26b36be07b58" Namespace="calico-apiserver" Pod="calico-apiserver-7db6857c7b-lpnmw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db6857c7b--lpnmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7db6857c7b--lpnmw-eth0", GenerateName:"calico-apiserver-7db6857c7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"c19fe500-1919-460e-8572-964852191fc0", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 52, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7db6857c7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f201f44993387c0250288e8c9e09bc524a3b11e17a70158929c26b36be07b58", Pod:"calico-apiserver-7db6857c7b-lpnmw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0c20e3dbfba", MAC:"5e:e3:73:84:44:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:52:33.919490 containerd[1485]: 2025-02-13 15:52:33.911 [INFO][5441] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1f201f44993387c0250288e8c9e09bc524a3b11e17a70158929c26b36be07b58" Namespace="calico-apiserver" Pod="calico-apiserver-7db6857c7b-lpnmw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7db6857c7b--lpnmw-eth0" Feb 13 15:52:33.929864 sshd[5538]: Connection closed by 10.0.0.1 port 36888 Feb 13 15:52:33.931087 sshd-session[5493]: pam_unix(sshd:session): session closed for user core Feb 13 15:52:33.952666 containerd[1485]: 2025-02-13 15:52:33.647 [INFO][5478] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--45d4j-eth0 coredns-76f75df574- kube-system 19512d1a-36c6-49de-8177-c4d469d03fc5 929 0 2025-02-13 15:51:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-45d4j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali33fa7b45e3c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="95c84198c37cca7ad28a787a16891e874da9ed3092cc9fa1559d0bd06a31a584" Namespace="kube-system" Pod="coredns-76f75df574-45d4j" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--45d4j-" Feb 13 15:52:33.952666 containerd[1485]: 2025-02-13 15:52:33.648 [INFO][5478] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="95c84198c37cca7ad28a787a16891e874da9ed3092cc9fa1559d0bd06a31a584" Namespace="kube-system" Pod="coredns-76f75df574-45d4j" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--45d4j-eth0" Feb 13 15:52:33.952666 
containerd[1485]: 2025-02-13 15:52:33.828 [INFO][5604] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95c84198c37cca7ad28a787a16891e874da9ed3092cc9fa1559d0bd06a31a584" HandleID="k8s-pod-network.95c84198c37cca7ad28a787a16891e874da9ed3092cc9fa1559d0bd06a31a584" Workload="localhost-k8s-coredns--76f75df574--45d4j-eth0" Feb 13 15:52:33.952666 containerd[1485]: 2025-02-13 15:52:33.851 [INFO][5604] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="95c84198c37cca7ad28a787a16891e874da9ed3092cc9fa1559d0bd06a31a584" HandleID="k8s-pod-network.95c84198c37cca7ad28a787a16891e874da9ed3092cc9fa1559d0bd06a31a584" Workload="localhost-k8s-coredns--76f75df574--45d4j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030bc60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-45d4j", "timestamp":"2025-02-13 15:52:33.828851268 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:52:33.952666 containerd[1485]: 2025-02-13 15:52:33.851 [INFO][5604] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:52:33.952666 containerd[1485]: 2025-02-13 15:52:33.857 [INFO][5604] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:52:33.952666 containerd[1485]: 2025-02-13 15:52:33.857 [INFO][5604] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:52:33.952666 containerd[1485]: 2025-02-13 15:52:33.860 [INFO][5604] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.95c84198c37cca7ad28a787a16891e874da9ed3092cc9fa1559d0bd06a31a584" host="localhost" Feb 13 15:52:33.952666 containerd[1485]: 2025-02-13 15:52:33.871 [INFO][5604] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:52:33.952666 containerd[1485]: 2025-02-13 15:52:33.876 [INFO][5604] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:52:33.952666 containerd[1485]: 2025-02-13 15:52:33.879 [INFO][5604] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:52:33.952666 containerd[1485]: 2025-02-13 15:52:33.883 [INFO][5604] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:52:33.952666 containerd[1485]: 2025-02-13 15:52:33.883 [INFO][5604] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.95c84198c37cca7ad28a787a16891e874da9ed3092cc9fa1559d0bd06a31a584" host="localhost" Feb 13 15:52:33.952666 containerd[1485]: 2025-02-13 15:52:33.884 [INFO][5604] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.95c84198c37cca7ad28a787a16891e874da9ed3092cc9fa1559d0bd06a31a584 Feb 13 15:52:33.952666 containerd[1485]: 2025-02-13 15:52:33.890 [INFO][5604] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.95c84198c37cca7ad28a787a16891e874da9ed3092cc9fa1559d0bd06a31a584" host="localhost" Feb 13 15:52:33.952666 containerd[1485]: 2025-02-13 15:52:33.898 [INFO][5604] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.95c84198c37cca7ad28a787a16891e874da9ed3092cc9fa1559d0bd06a31a584" host="localhost" Feb 13 15:52:33.952666 containerd[1485]: 2025-02-13 15:52:33.898 
[INFO][5604] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.95c84198c37cca7ad28a787a16891e874da9ed3092cc9fa1559d0bd06a31a584" host="localhost" Feb 13 15:52:33.952666 containerd[1485]: 2025-02-13 15:52:33.898 [INFO][5604] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:52:33.952666 containerd[1485]: 2025-02-13 15:52:33.898 [INFO][5604] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="95c84198c37cca7ad28a787a16891e874da9ed3092cc9fa1559d0bd06a31a584" HandleID="k8s-pod-network.95c84198c37cca7ad28a787a16891e874da9ed3092cc9fa1559d0bd06a31a584" Workload="localhost-k8s-coredns--76f75df574--45d4j-eth0" Feb 13 15:52:33.954254 containerd[1485]: 2025-02-13 15:52:33.902 [INFO][5478] cni-plugin/k8s.go 386: Populated endpoint ContainerID="95c84198c37cca7ad28a787a16891e874da9ed3092cc9fa1559d0bd06a31a584" Namespace="kube-system" Pod="coredns-76f75df574-45d4j" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--45d4j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--45d4j-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"19512d1a-36c6-49de-8177-c4d469d03fc5", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 51, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-45d4j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali33fa7b45e3c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:52:33.954254 containerd[1485]: 2025-02-13 15:52:33.903 [INFO][5478] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="95c84198c37cca7ad28a787a16891e874da9ed3092cc9fa1559d0bd06a31a584" Namespace="kube-system" Pod="coredns-76f75df574-45d4j" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--45d4j-eth0" Feb 13 15:52:33.954254 containerd[1485]: 2025-02-13 15:52:33.903 [INFO][5478] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali33fa7b45e3c ContainerID="95c84198c37cca7ad28a787a16891e874da9ed3092cc9fa1559d0bd06a31a584" Namespace="kube-system" Pod="coredns-76f75df574-45d4j" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--45d4j-eth0" Feb 13 15:52:33.954254 containerd[1485]: 2025-02-13 15:52:33.911 [INFO][5478] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95c84198c37cca7ad28a787a16891e874da9ed3092cc9fa1559d0bd06a31a584" Namespace="kube-system" Pod="coredns-76f75df574-45d4j" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--45d4j-eth0" Feb 13 15:52:33.954254 containerd[1485]: 2025-02-13 15:52:33.913 [INFO][5478] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="95c84198c37cca7ad28a787a16891e874da9ed3092cc9fa1559d0bd06a31a584" Namespace="kube-system" Pod="coredns-76f75df574-45d4j" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--45d4j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--45d4j-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"19512d1a-36c6-49de-8177-c4d469d03fc5", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 51, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"95c84198c37cca7ad28a787a16891e874da9ed3092cc9fa1559d0bd06a31a584", Pod:"coredns-76f75df574-45d4j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali33fa7b45e3c", MAC:"ca:32:bb:69:5d:21", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:52:33.954254 containerd[1485]: 2025-02-13 15:52:33.927 [INFO][5478] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="95c84198c37cca7ad28a787a16891e874da9ed3092cc9fa1559d0bd06a31a584" Namespace="kube-system" Pod="coredns-76f75df574-45d4j" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--45d4j-eth0" Feb 13 15:52:33.953652 systemd[1]: sshd@14-10.0.0.80:22-10.0.0.1:36888.service: Deactivated successfully. Feb 13 15:52:33.960556 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:52:33.963710 systemd-logind[1471]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:52:33.972095 systemd[1]: Started sshd@15-10.0.0.80:22-10.0.0.1:36902.service - OpenSSH per-connection server daemon (10.0.0.1:36902). Feb 13 15:52:33.973670 systemd-logind[1471]: Removed session 15. Feb 13 15:52:34.096265 containerd[1485]: time="2025-02-13T15:52:34.095762024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:52:34.096265 containerd[1485]: time="2025-02-13T15:52:34.095842910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:52:34.096265 containerd[1485]: time="2025-02-13T15:52:34.095857539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:52:34.096265 containerd[1485]: time="2025-02-13T15:52:34.096019742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:52:34.106273 sshd[5698]: Accepted publickey for core from 10.0.0.1 port 36902 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:52:34.107337 sshd-session[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:52:34.107879 containerd[1485]: time="2025-02-13T15:52:34.107325088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:52:34.107879 containerd[1485]: time="2025-02-13T15:52:34.107381828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:52:34.107879 containerd[1485]: time="2025-02-13T15:52:34.107397939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:52:34.107879 containerd[1485]: time="2025-02-13T15:52:34.107529824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:52:34.139120 systemd-logind[1471]: New session 16 of user core. Feb 13 15:52:34.146259 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:52:34.151592 systemd[1]: Started cri-containerd-1f201f44993387c0250288e8c9e09bc524a3b11e17a70158929c26b36be07b58.scope - libcontainer container 1f201f44993387c0250288e8c9e09bc524a3b11e17a70158929c26b36be07b58. Feb 13 15:52:34.154375 systemd[1]: Started cri-containerd-95c84198c37cca7ad28a787a16891e874da9ed3092cc9fa1559d0bd06a31a584.scope - libcontainer container 95c84198c37cca7ad28a787a16891e874da9ed3092cc9fa1559d0bd06a31a584. 
Feb 13 15:52:34.175535 systemd-networkd[1423]: calidd453e6e113: Link UP Feb 13 15:52:34.178604 systemd-networkd[1423]: calidd453e6e113: Gained carrier Feb 13 15:52:34.190580 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:52:34.198387 containerd[1485]: 2025-02-13 15:52:33.864 [INFO][5595] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--68d59db744--jwpsr-eth0 calico-kube-controllers-68d59db744- calico-system 6af1b9f5-51e6-4450-99d8-629fc2031232 930 0 2025-02-13 15:52:07 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:68d59db744 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-68d59db744-jwpsr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calidd453e6e113 [] []}} ContainerID="5147c48009cae53149b18ea4970a52face382505e3f841febf32b38c4644c513" Namespace="calico-system" Pod="calico-kube-controllers-68d59db744-jwpsr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68d59db744--jwpsr-" Feb 13 15:52:34.198387 containerd[1485]: 2025-02-13 15:52:33.864 [INFO][5595] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5147c48009cae53149b18ea4970a52face382505e3f841febf32b38c4644c513" Namespace="calico-system" Pod="calico-kube-controllers-68d59db744-jwpsr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68d59db744--jwpsr-eth0" Feb 13 15:52:34.198387 containerd[1485]: 2025-02-13 15:52:33.948 [INFO][5657] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5147c48009cae53149b18ea4970a52face382505e3f841febf32b38c4644c513" HandleID="k8s-pod-network.5147c48009cae53149b18ea4970a52face382505e3f841febf32b38c4644c513" Workload="localhost-k8s-calico--kube--controllers--68d59db744--jwpsr-eth0" Feb 13 15:52:34.198387 containerd[1485]: 2025-02-13 15:52:34.076 [INFO][5657] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5147c48009cae53149b18ea4970a52face382505e3f841febf32b38c4644c513" HandleID="k8s-pod-network.5147c48009cae53149b18ea4970a52face382505e3f841febf32b38c4644c513" Workload="localhost-k8s-calico--kube--controllers--68d59db744--jwpsr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b2640), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-68d59db744-jwpsr", "timestamp":"2025-02-13 15:52:33.947960338 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:52:34.198387 containerd[1485]: 2025-02-13 15:52:34.076 [INFO][5657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:52:34.198387 containerd[1485]: 2025-02-13 15:52:34.076 [INFO][5657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:52:34.198387 containerd[1485]: 2025-02-13 15:52:34.076 [INFO][5657] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:52:34.198387 containerd[1485]: 2025-02-13 15:52:34.078 [INFO][5657] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5147c48009cae53149b18ea4970a52face382505e3f841febf32b38c4644c513" host="localhost" Feb 13 15:52:34.198387 containerd[1485]: 2025-02-13 15:52:34.084 [INFO][5657] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:52:34.198387 containerd[1485]: 2025-02-13 15:52:34.089 [INFO][5657] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:52:34.198387 containerd[1485]: 2025-02-13 15:52:34.091 [INFO][5657] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:52:34.198387 containerd[1485]: 2025-02-13 15:52:34.093 [INFO][5657] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:52:34.198387 containerd[1485]: 2025-02-13 15:52:34.094 [INFO][5657] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5147c48009cae53149b18ea4970a52face382505e3f841febf32b38c4644c513" host="localhost" Feb 13 15:52:34.198387 containerd[1485]: 2025-02-13 15:52:34.096 [INFO][5657] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5147c48009cae53149b18ea4970a52face382505e3f841febf32b38c4644c513 Feb 13 15:52:34.198387 containerd[1485]: 2025-02-13 15:52:34.107 [INFO][5657] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5147c48009cae53149b18ea4970a52face382505e3f841febf32b38c4644c513" host="localhost" Feb 13 15:52:34.198387 containerd[1485]: 2025-02-13 15:52:34.132 [INFO][5657] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.5147c48009cae53149b18ea4970a52face382505e3f841febf32b38c4644c513" host="localhost" Feb 13 15:52:34.198387 containerd[1485]: 2025-02-13 15:52:34.132 [INFO][5657] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.5147c48009cae53149b18ea4970a52face382505e3f841febf32b38c4644c513" host="localhost" Feb 13 15:52:34.198387 containerd[1485]: 2025-02-13 15:52:34.132 [INFO][5657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
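The IPAM trace above shows Calico serializing assignments behind a host-wide lock and then handing out 192.168.88.133 from the /26 block affine to this host. The sketch below is a deliberately simplified model of that idea (a mutex plus a first-free scan over the block); Calico's real allocator additionally persists handles and block state in its datastore, so treat this as an illustration of the log, not the implementation.

// ipam_sketch.go: simplified model of per-host block allocation as traced above:
// take a host-wide lock, scan the affine /26 block, and hand out the first
// address that is not yet claimed. Not Calico's actual allocator.
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

type blockAllocator struct {
	mu      sync.Mutex            // stands in for the "host-wide IPAM lock" in the log
	block   netip.Prefix          // e.g. 192.168.88.128/26
	claimed map[netip.Addr]string // address -> handle ID
}

func newBlockAllocator(cidr string) *blockAllocator {
	return &blockAllocator{
		block:   netip.MustParsePrefix(cidr),
		claimed: make(map[netip.Addr]string),
	}
}

// assign returns the first free address in the block and records the handle.
func (a *blockAllocator) assign(handle string) (netip.Addr, bool) {
	a.mu.Lock()
	defer a.mu.Unlock()
	for addr := a.block.Addr(); a.block.Contains(addr); addr = addr.Next() {
		if _, taken := a.claimed[addr]; !taken {
			a.claimed[addr] = handle
			return addr, true
		}
	}
	return netip.Addr{}, false // block exhausted
}

func main() {
	alloc := newBlockAllocator("192.168.88.128/26")
	// Pretend .128 through .132 were claimed by earlier endpoints on this host.
	for i := 0; i < 5; i++ {
		alloc.assign(fmt.Sprintf("earlier-%d", i))
	}
	ip, ok := alloc.assign("k8s-pod-network.example-handle")
	fmt.Println(ip, ok) // 192.168.88.133 true
}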
Feb 13 15:52:34.198387 containerd[1485]: 2025-02-13 15:52:34.132 [INFO][5657] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5147c48009cae53149b18ea4970a52face382505e3f841febf32b38c4644c513" HandleID="k8s-pod-network.5147c48009cae53149b18ea4970a52face382505e3f841febf32b38c4644c513" Workload="localhost-k8s-calico--kube--controllers--68d59db744--jwpsr-eth0" Feb 13 15:52:34.199162 containerd[1485]: 2025-02-13 15:52:34.151 [INFO][5595] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5147c48009cae53149b18ea4970a52face382505e3f841febf32b38c4644c513" Namespace="calico-system" Pod="calico-kube-controllers-68d59db744-jwpsr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68d59db744--jwpsr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--68d59db744--jwpsr-eth0", GenerateName:"calico-kube-controllers-68d59db744-", Namespace:"calico-system", SelfLink:"", UID:"6af1b9f5-51e6-4450-99d8-629fc2031232", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 52, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68d59db744", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-68d59db744-jwpsr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidd453e6e113", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:52:34.199162 containerd[1485]: 2025-02-13 15:52:34.152 [INFO][5595] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5147c48009cae53149b18ea4970a52face382505e3f841febf32b38c4644c513" Namespace="calico-system" Pod="calico-kube-controllers-68d59db744-jwpsr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68d59db744--jwpsr-eth0" Feb 13 15:52:34.199162 containerd[1485]: 2025-02-13 15:52:34.152 [INFO][5595] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd453e6e113 ContainerID="5147c48009cae53149b18ea4970a52face382505e3f841febf32b38c4644c513" Namespace="calico-system" Pod="calico-kube-controllers-68d59db744-jwpsr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68d59db744--jwpsr-eth0" Feb 13 15:52:34.199162 containerd[1485]: 2025-02-13 15:52:34.182 [INFO][5595] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5147c48009cae53149b18ea4970a52face382505e3f841febf32b38c4644c513" Namespace="calico-system" Pod="calico-kube-controllers-68d59db744-jwpsr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68d59db744--jwpsr-eth0" Feb 13 15:52:34.199162 containerd[1485]: 2025-02-13 15:52:34.182 [INFO][5595] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="5147c48009cae53149b18ea4970a52face382505e3f841febf32b38c4644c513" Namespace="calico-system" Pod="calico-kube-controllers-68d59db744-jwpsr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68d59db744--jwpsr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--68d59db744--jwpsr-eth0", GenerateName:"calico-kube-controllers-68d59db744-", Namespace:"calico-system", SelfLink:"", UID:"6af1b9f5-51e6-4450-99d8-629fc2031232", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 52, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68d59db744", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5147c48009cae53149b18ea4970a52face382505e3f841febf32b38c4644c513", Pod:"calico-kube-controllers-68d59db744-jwpsr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidd453e6e113", MAC:"3a:9c:33:ed:d1:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:52:34.199162 containerd[1485]: 2025-02-13 15:52:34.194 [INFO][5595] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5147c48009cae53149b18ea4970a52face382505e3f841febf32b38c4644c513" Namespace="calico-system" Pod="calico-kube-controllers-68d59db744-jwpsr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68d59db744--jwpsr-eth0" Feb 13 15:52:34.223004 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:52:34.236174 systemd-networkd[1423]: caliaaad2512dc6: Link UP Feb 13 15:52:34.237327 systemd-networkd[1423]: caliaaad2512dc6: Gained carrier Feb 13 15:52:34.243242 containerd[1485]: time="2025-02-13T15:52:34.242420919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:52:34.243242 containerd[1485]: time="2025-02-13T15:52:34.242474452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:52:34.243242 containerd[1485]: time="2025-02-13T15:52:34.242487969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:52:34.243242 containerd[1485]: time="2025-02-13T15:52:34.242558014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
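After the veth is set up, the endpoint above is written back with MAC 3a:9c:33:ed:d1:1c; the first octet has the locally-administered bit (0x02) set and the multicast bit (0x01) clear, the usual shape of a software-generated interface address. The sketch below generates such a MAC at random; it is a generic illustration of that bit pattern, not the derivation Calico or the kernel actually uses here.

// mac_sketch.go: generate a random locally-administered, unicast MAC address,
// matching the general shape of the endpoint MAC recorded in the log.
// Generic illustration only; not necessarily how Calico derives its MACs.
package main

import (
	"crypto/rand"
	"fmt"
	"log"
	"net"
)

func randomLocalMAC() (net.HardwareAddr, error) {
	mac := make(net.HardwareAddr, 6)
	if _, err := rand.Read(mac); err != nil {
		return nil, err
	}
	mac[0] = (mac[0] | 0x02) &^ 0x01 // set locally-administered bit, clear multicast bit
	return mac, nil
}

func main() {
	mac, err := randomLocalMAC()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(mac) // e.g. 3a:9c:33:ed:d1:1c has the same two low bits in its first octet
}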
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:52:34.257235 containerd[1485]: time="2025-02-13T15:52:34.256870879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-45d4j,Uid:19512d1a-36c6-49de-8177-c4d469d03fc5,Namespace:kube-system,Attempt:6,} returns sandbox id \"95c84198c37cca7ad28a787a16891e874da9ed3092cc9fa1559d0bd06a31a584\"" Feb 13 15:52:34.264742 containerd[1485]: 2025-02-13 15:52:33.867 [INFO][5620] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--mlzzh-eth0 coredns-76f75df574- kube-system 08a0764f-6eaa-4b6b-8f68-f508a36d326a 801 0 2025-02-13 15:51:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-mlzzh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaaad2512dc6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a1dcf2815fb82c3149d37cd1b8abec11e6c2fca27823c898222e7c2e593633aa" Namespace="kube-system" Pod="coredns-76f75df574-mlzzh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mlzzh-" Feb 13 15:52:34.264742 containerd[1485]: 2025-02-13 15:52:33.867 [INFO][5620] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a1dcf2815fb82c3149d37cd1b8abec11e6c2fca27823c898222e7c2e593633aa" Namespace="kube-system" Pod="coredns-76f75df574-mlzzh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mlzzh-eth0" Feb 13 15:52:34.264742 containerd[1485]: 2025-02-13 15:52:33.965 [INFO][5663] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1dcf2815fb82c3149d37cd1b8abec11e6c2fca27823c898222e7c2e593633aa" HandleID="k8s-pod-network.a1dcf2815fb82c3149d37cd1b8abec11e6c2fca27823c898222e7c2e593633aa" Workload="localhost-k8s-coredns--76f75df574--mlzzh-eth0" Feb 13 15:52:34.264742 containerd[1485]: 2025-02-13 15:52:34.082 [INFO][5663] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a1dcf2815fb82c3149d37cd1b8abec11e6c2fca27823c898222e7c2e593633aa" HandleID="k8s-pod-network.a1dcf2815fb82c3149d37cd1b8abec11e6c2fca27823c898222e7c2e593633aa" Workload="localhost-k8s-coredns--76f75df574--mlzzh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c51d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-mlzzh", "timestamp":"2025-02-13 15:52:33.965866656 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:52:34.264742 containerd[1485]: 2025-02-13 15:52:34.083 [INFO][5663] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:52:34.264742 containerd[1485]: 2025-02-13 15:52:34.135 [INFO][5663] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
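The coredns endpoint dump above prints its WorkloadEndpointPort values in Go's hex notation (Port:0x35, Port:0x23c1); those are simply 53 (dns, dns-tcp) and 9153 (metrics) in decimal, matching the container ports listed for the pod. A two-line check:

// ports_check.go: decode the hex port values from the endpoint dump above.
package main

import "fmt"

func main() {
	fmt.Println(0x35, 0x23c1) // 53 9153
}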
Feb 13 15:52:34.264742 containerd[1485]: 2025-02-13 15:52:34.141 [INFO][5663] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:52:34.264742 containerd[1485]: 2025-02-13 15:52:34.145 [INFO][5663] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a1dcf2815fb82c3149d37cd1b8abec11e6c2fca27823c898222e7c2e593633aa" host="localhost" Feb 13 15:52:34.264742 containerd[1485]: 2025-02-13 15:52:34.164 [INFO][5663] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:52:34.264742 containerd[1485]: 2025-02-13 15:52:34.186 [INFO][5663] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:52:34.264742 containerd[1485]: 2025-02-13 15:52:34.189 [INFO][5663] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:52:34.264742 containerd[1485]: 2025-02-13 15:52:34.192 [INFO][5663] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:52:34.264742 containerd[1485]: 2025-02-13 15:52:34.193 [INFO][5663] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a1dcf2815fb82c3149d37cd1b8abec11e6c2fca27823c898222e7c2e593633aa" host="localhost" Feb 13 15:52:34.264742 containerd[1485]: 2025-02-13 15:52:34.196 [INFO][5663] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a1dcf2815fb82c3149d37cd1b8abec11e6c2fca27823c898222e7c2e593633aa Feb 13 15:52:34.264742 containerd[1485]: 2025-02-13 15:52:34.203 [INFO][5663] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a1dcf2815fb82c3149d37cd1b8abec11e6c2fca27823c898222e7c2e593633aa" host="localhost" Feb 13 15:52:34.264742 containerd[1485]: 2025-02-13 15:52:34.216 [INFO][5663] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.a1dcf2815fb82c3149d37cd1b8abec11e6c2fca27823c898222e7c2e593633aa" host="localhost" Feb 13 15:52:34.264742 containerd[1485]: 2025-02-13 15:52:34.216 [INFO][5663] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.a1dcf2815fb82c3149d37cd1b8abec11e6c2fca27823c898222e7c2e593633aa" host="localhost" Feb 13 15:52:34.264742 containerd[1485]: 2025-02-13 15:52:34.216 [INFO][5663] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:52:34.264742 containerd[1485]: 2025-02-13 15:52:34.216 [INFO][5663] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="a1dcf2815fb82c3149d37cd1b8abec11e6c2fca27823c898222e7c2e593633aa" HandleID="k8s-pod-network.a1dcf2815fb82c3149d37cd1b8abec11e6c2fca27823c898222e7c2e593633aa" Workload="localhost-k8s-coredns--76f75df574--mlzzh-eth0" Feb 13 15:52:34.265328 containerd[1485]: 2025-02-13 15:52:34.231 [INFO][5620] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a1dcf2815fb82c3149d37cd1b8abec11e6c2fca27823c898222e7c2e593633aa" Namespace="kube-system" Pod="coredns-76f75df574-mlzzh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mlzzh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--mlzzh-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"08a0764f-6eaa-4b6b-8f68-f508a36d326a", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 51, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-mlzzh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaaad2512dc6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:52:34.265328 containerd[1485]: 2025-02-13 15:52:34.231 [INFO][5620] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="a1dcf2815fb82c3149d37cd1b8abec11e6c2fca27823c898222e7c2e593633aa" Namespace="kube-system" Pod="coredns-76f75df574-mlzzh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mlzzh-eth0" Feb 13 15:52:34.265328 containerd[1485]: 2025-02-13 15:52:34.232 [INFO][5620] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaaad2512dc6 ContainerID="a1dcf2815fb82c3149d37cd1b8abec11e6c2fca27823c898222e7c2e593633aa" Namespace="kube-system" Pod="coredns-76f75df574-mlzzh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mlzzh-eth0" Feb 13 15:52:34.265328 containerd[1485]: 2025-02-13 15:52:34.236 [INFO][5620] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a1dcf2815fb82c3149d37cd1b8abec11e6c2fca27823c898222e7c2e593633aa" Namespace="kube-system" Pod="coredns-76f75df574-mlzzh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mlzzh-eth0" Feb 13 15:52:34.265328 containerd[1485]: 2025-02-13 15:52:34.237 
[INFO][5620] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a1dcf2815fb82c3149d37cd1b8abec11e6c2fca27823c898222e7c2e593633aa" Namespace="kube-system" Pod="coredns-76f75df574-mlzzh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mlzzh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--mlzzh-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"08a0764f-6eaa-4b6b-8f68-f508a36d326a", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 51, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a1dcf2815fb82c3149d37cd1b8abec11e6c2fca27823c898222e7c2e593633aa", Pod:"coredns-76f75df574-mlzzh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaaad2512dc6", MAC:"d2:bc:fe:1b:1b:3a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:52:34.265328 containerd[1485]: 2025-02-13 15:52:34.256 [INFO][5620] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a1dcf2815fb82c3149d37cd1b8abec11e6c2fca27823c898222e7c2e593633aa" Namespace="kube-system" Pod="coredns-76f75df574-mlzzh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mlzzh-eth0" Feb 13 15:52:34.265224 systemd[1]: Started cri-containerd-5147c48009cae53149b18ea4970a52face382505e3f841febf32b38c4644c513.scope - libcontainer container 5147c48009cae53149b18ea4970a52face382505e3f841febf32b38c4644c513. 
Feb 13 15:52:34.272294 kubelet[2674]: E0213 15:52:34.271178 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:34.287343 containerd[1485]: time="2025-02-13T15:52:34.286841492Z" level=info msg="CreateContainer within sandbox \"95c84198c37cca7ad28a787a16891e874da9ed3092cc9fa1559d0bd06a31a584\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:52:34.292039 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:52:34.308851 containerd[1485]: time="2025-02-13T15:52:34.308799110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db6857c7b-lpnmw,Uid:c19fe500-1919-460e-8572-964852191fc0,Namespace:calico-apiserver,Attempt:7,} returns sandbox id \"1f201f44993387c0250288e8c9e09bc524a3b11e17a70158929c26b36be07b58\"" Feb 13 15:52:34.320463 sshd[5757]: Connection closed by 10.0.0.1 port 36902 Feb 13 15:52:34.322987 sshd-session[5698]: pam_unix(sshd:session): session closed for user core Feb 13 15:52:34.328076 systemd[1]: sshd@15-10.0.0.80:22-10.0.0.1:36902.service: Deactivated successfully. Feb 13 15:52:34.328296 systemd-logind[1471]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:52:34.331621 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:52:34.333361 systemd-logind[1471]: Removed session 16. Feb 13 15:52:34.338658 containerd[1485]: time="2025-02-13T15:52:34.331250995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:52:34.338658 containerd[1485]: time="2025-02-13T15:52:34.331295672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:52:34.338658 containerd[1485]: time="2025-02-13T15:52:34.331306021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:52:34.338658 containerd[1485]: time="2025-02-13T15:52:34.331368152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:52:34.340130 containerd[1485]: time="2025-02-13T15:52:34.339862699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d59db744-jwpsr,Uid:6af1b9f5-51e6-4450-99d8-629fc2031232,Namespace:calico-system,Attempt:6,} returns sandbox id \"5147c48009cae53149b18ea4970a52face382505e3f841febf32b38c4644c513\"" Feb 13 15:52:34.351426 kubelet[2674]: E0213 15:52:34.351333 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:34.354376 systemd[1]: Started cri-containerd-a1dcf2815fb82c3149d37cd1b8abec11e6c2fca27823c898222e7c2e593633aa.scope - libcontainer container a1dcf2815fb82c3149d37cd1b8abec11e6c2fca27823c898222e7c2e593633aa. 
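The repeated kubelet warnings above come from its resolv.conf handling: the resolver only honours a small number of nameserver entries (traditionally three), so kubelet reports that some were omitted and applies the first three, 1.1.1.1 1.0.0.1 8.8.8.8. The sketch below shows that truncation in isolation; it is a simplified stand-in rather than kubelet's actual dns.go, and the fourth nameserver in main() is invented for illustration (the log only shows the three that were applied).

// dns_limit_sketch.go: simplified model of the "Nameserver limits exceeded"
// warning above: keep only the first maxNameservers entries from a
// resolv.conf-style configuration. Not kubelet's actual implementation.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // the classic resolver limit

func applyNameserverLimit(resolvConf string) (applied, omitted []string) {
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			if len(applied) < maxNameservers {
				applied = append(applied, fields[1])
			} else {
				omitted = append(omitted, fields[1])
			}
		}
	}
	return applied, omitted
}

func main() {
	// The fourth entry is hypothetical, added only to trigger the truncation.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	applied, omitted := applyNameserverLimit(conf)
	fmt.Println("applied:", strings.Join(applied, " ")) // 1.1.1.1 1.0.0.1 8.8.8.8
	fmt.Println("omitted:", omitted)
}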
Feb 13 15:52:34.367236 containerd[1485]: time="2025-02-13T15:52:34.367159007Z" level=info msg="CreateContainer within sandbox \"95c84198c37cca7ad28a787a16891e874da9ed3092cc9fa1559d0bd06a31a584\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a2e1fb3e5e8a243d213b900c58a626436b0b6bd39c0775f76a4a0ff6477773ca\"" Feb 13 15:52:34.368265 containerd[1485]: time="2025-02-13T15:52:34.368233717Z" level=info msg="StartContainer for \"a2e1fb3e5e8a243d213b900c58a626436b0b6bd39c0775f76a4a0ff6477773ca\"" Feb 13 15:52:34.372904 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:52:34.401853 systemd[1]: Started cri-containerd-a2e1fb3e5e8a243d213b900c58a626436b0b6bd39c0775f76a4a0ff6477773ca.scope - libcontainer container a2e1fb3e5e8a243d213b900c58a626436b0b6bd39c0775f76a4a0ff6477773ca. Feb 13 15:52:34.407823 containerd[1485]: time="2025-02-13T15:52:34.407770260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mlzzh,Uid:08a0764f-6eaa-4b6b-8f68-f508a36d326a,Namespace:kube-system,Attempt:7,} returns sandbox id \"a1dcf2815fb82c3149d37cd1b8abec11e6c2fca27823c898222e7c2e593633aa\"" Feb 13 15:52:34.408595 kubelet[2674]: E0213 15:52:34.408569 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:34.411313 containerd[1485]: time="2025-02-13T15:52:34.411253681Z" level=info msg="CreateContainer within sandbox \"a1dcf2815fb82c3149d37cd1b8abec11e6c2fca27823c898222e7c2e593633aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:52:34.448494 containerd[1485]: time="2025-02-13T15:52:34.448444124Z" level=info msg="CreateContainer within sandbox \"a1dcf2815fb82c3149d37cd1b8abec11e6c2fca27823c898222e7c2e593633aa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"27b9fd680ecf6b8fb8509615013ef69346b9a55fcc35e7e92f6886a74bf7005b\"" Feb 13 15:52:34.450347 containerd[1485]: time="2025-02-13T15:52:34.449598948Z" level=info msg="StartContainer for \"27b9fd680ecf6b8fb8509615013ef69346b9a55fcc35e7e92f6886a74bf7005b\"" Feb 13 15:52:34.481263 systemd[1]: Started cri-containerd-27b9fd680ecf6b8fb8509615013ef69346b9a55fcc35e7e92f6886a74bf7005b.scope - libcontainer container 27b9fd680ecf6b8fb8509615013ef69346b9a55fcc35e7e92f6886a74bf7005b. 
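The interleaved messages above follow the usual CRI ordering for each pod: the runtime returns a sandbox ID from RunPodSandbox, CreateContainer is then issued within that sandbox and returns a container ID, and StartContainer runs it. The sketch below models only that ordering against a toy interface; the method names are borrowed from the log, but this is not the real CRI gRPC API, whose request and response types are far richer.

// cri_order_sketch.go: toy model of the ordering visible in the log:
// RunPodSandbox -> CreateContainer (within that sandbox) -> StartContainer.
// A simplified stand-in, not the real CRI service definition.
package main

import (
	"crypto/sha256"
	"fmt"
)

// runtimeService is a hypothetical, drastically reduced runtime interface.
type runtimeService interface {
	RunPodSandbox(podName string) (sandboxID string, err error)
	CreateContainer(sandboxID, containerName string) (containerID string, err error)
	StartContainer(containerID string) error
}

type fakeRuntime struct{}

// id derives a deterministic hex ID from its parts, standing in for the
// 64-character container and sandbox IDs seen in the log.
func id(parts ...string) string {
	h := sha256.New()
	for _, p := range parts {
		h.Write([]byte(p))
	}
	return fmt.Sprintf("%x", h.Sum(nil))
}

func (fakeRuntime) RunPodSandbox(pod string) (string, error)        { return id("sandbox", pod), nil }
func (fakeRuntime) CreateContainer(sb, name string) (string, error) { return id(sb, name), nil }
func (fakeRuntime) StartContainer(cid string) error {
	fmt.Println("started", cid[:12])
	return nil
}

func main() {
	var rt runtimeService = fakeRuntime{}
	sb, _ := rt.RunPodSandbox("coredns-76f75df574-45d4j")
	cid, _ := rt.CreateContainer(sb, "coredns")
	_ = rt.StartContainer(cid)
}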
Feb 13 15:52:34.488376 containerd[1485]: time="2025-02-13T15:52:34.488338149Z" level=info msg="StartContainer for \"a2e1fb3e5e8a243d213b900c58a626436b0b6bd39c0775f76a4a0ff6477773ca\" returns successfully" Feb 13 15:52:34.534576 containerd[1485]: time="2025-02-13T15:52:34.534506114Z" level=info msg="StartContainer for \"27b9fd680ecf6b8fb8509615013ef69346b9a55fcc35e7e92f6886a74bf7005b\" returns successfully" Feb 13 15:52:35.082201 systemd-networkd[1423]: cali162d1fe861b: Gained IPv6LL Feb 13 15:52:35.274268 systemd-networkd[1423]: vxlan.calico: Gained IPv6LL Feb 13 15:52:35.338341 systemd-networkd[1423]: cali0c20e3dbfba: Gained IPv6LL Feb 13 15:52:35.355944 kubelet[2674]: E0213 15:52:35.355914 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:35.360395 kubelet[2674]: E0213 15:52:35.360369 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:35.368099 kubelet[2674]: I0213 15:52:35.368017 2674 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-mlzzh" podStartSLOduration=40.367937286 podStartE2EDuration="40.367937286s" podCreationTimestamp="2025-02-13 15:51:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:52:35.367172999 +0000 UTC m=+55.022548718" watchObservedRunningTime="2025-02-13 15:52:35.367937286 +0000 UTC m=+55.023313015" Feb 13 15:52:35.376952 kubelet[2674]: I0213 15:52:35.376902 2674 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-45d4j" podStartSLOduration=40.376855328 podStartE2EDuration="40.376855328s" podCreationTimestamp="2025-02-13 15:51:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:52:35.376688986 +0000 UTC m=+55.032064706" watchObservedRunningTime="2025-02-13 15:52:35.376855328 +0000 UTC m=+55.032231037" Feb 13 15:52:35.594338 systemd-networkd[1423]: calidd453e6e113: Gained IPv6LL Feb 13 15:52:35.786195 systemd-networkd[1423]: cali33fa7b45e3c: Gained IPv6LL Feb 13 15:52:35.914196 systemd-networkd[1423]: caliaaad2512dc6: Gained IPv6LL Feb 13 15:52:36.345110 containerd[1485]: time="2025-02-13T15:52:36.345000510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:36.346086 containerd[1485]: time="2025-02-13T15:52:36.346054556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 15:52:36.348431 containerd[1485]: time="2025-02-13T15:52:36.348381481Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:36.350863 containerd[1485]: time="2025-02-13T15:52:36.350819731Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:36.351698 containerd[1485]: time="2025-02-13T15:52:36.351668992Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.647298924s" Feb 13 15:52:36.351774 containerd[1485]: time="2025-02-13T15:52:36.351704330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 15:52:36.352662 containerd[1485]: time="2025-02-13T15:52:36.352483635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 15:52:36.353299 containerd[1485]: time="2025-02-13T15:52:36.353277018Z" level=info msg="CreateContainer within sandbox \"093a7de41aebed2185b80b57c0d1787459ad5d691c6afb3c58645441a2c66148\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 15:52:36.367172 kubelet[2674]: E0213 15:52:36.366567 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:36.367172 kubelet[2674]: E0213 15:52:36.366821 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:36.389036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1276380779.mount: Deactivated successfully. Feb 13 15:52:36.398525 containerd[1485]: time="2025-02-13T15:52:36.398474591Z" level=info msg="CreateContainer within sandbox \"093a7de41aebed2185b80b57c0d1787459ad5d691c6afb3c58645441a2c66148\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0ef0914052923986781fdeefe6dd8d3dafb57d85f4f1b9d288cddc928234166d\"" Feb 13 15:52:36.399589 containerd[1485]: time="2025-02-13T15:52:36.399533617Z" level=info msg="StartContainer for \"0ef0914052923986781fdeefe6dd8d3dafb57d85f4f1b9d288cddc928234166d\"" Feb 13 15:52:36.436201 systemd[1]: Started cri-containerd-0ef0914052923986781fdeefe6dd8d3dafb57d85f4f1b9d288cddc928234166d.scope - libcontainer container 0ef0914052923986781fdeefe6dd8d3dafb57d85f4f1b9d288cddc928234166d. Feb 13 15:52:36.469249 containerd[1485]: time="2025-02-13T15:52:36.469201491Z" level=info msg="StartContainer for \"0ef0914052923986781fdeefe6dd8d3dafb57d85f4f1b9d288cddc928234166d\" returns successfully" Feb 13 15:52:37.371812 kubelet[2674]: E0213 15:52:37.371767 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:37.371812 kubelet[2674]: E0213 15:52:37.371817 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:38.374967 kubelet[2674]: E0213 15:52:38.374909 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:52:39.332181 systemd[1]: Started sshd@16-10.0.0.80:22-10.0.0.1:49570.service - OpenSSH per-connection server daemon (10.0.0.1:49570). 
Feb 13 15:52:39.437836 sshd[6078]: Accepted publickey for core from 10.0.0.1 port 49570 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:52:39.439428 sshd-session[6078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:52:39.443410 systemd-logind[1471]: New session 17 of user core. Feb 13 15:52:39.458175 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:52:39.624264 sshd[6080]: Connection closed by 10.0.0.1 port 49570 Feb 13 15:52:39.624566 sshd-session[6078]: pam_unix(sshd:session): session closed for user core Feb 13 15:52:39.628118 systemd[1]: sshd@16-10.0.0.80:22-10.0.0.1:49570.service: Deactivated successfully. Feb 13 15:52:39.629938 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:52:39.630522 systemd-logind[1471]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:52:39.631502 systemd-logind[1471]: Removed session 17. Feb 13 15:52:39.648197 containerd[1485]: time="2025-02-13T15:52:39.648148943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:39.649703 containerd[1485]: time="2025-02-13T15:52:39.649651178Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 13 15:52:39.652087 containerd[1485]: time="2025-02-13T15:52:39.652056124Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:39.654405 containerd[1485]: time="2025-02-13T15:52:39.654375634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:39.655123 containerd[1485]: time="2025-02-13T15:52:39.655089981Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.302578441s" Feb 13 15:52:39.655123 containerd[1485]: time="2025-02-13T15:52:39.655118826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 15:52:39.655727 containerd[1485]: time="2025-02-13T15:52:39.655690799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 15:52:39.656775 containerd[1485]: time="2025-02-13T15:52:39.656751162Z" level=info msg="CreateContainer within sandbox \"55cb16260eb8363d96437cab8e154bd7f1d3e1d8f5cb8f0f259f8eb253ced6f7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 15:52:39.674347 containerd[1485]: time="2025-02-13T15:52:39.674311628Z" level=info msg="CreateContainer within sandbox \"55cb16260eb8363d96437cab8e154bd7f1d3e1d8f5cb8f0f259f8eb253ced6f7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0f21ebfba8efae58be3f8c9c3d5751c654b5a0d980d7eb0cef7e7ce8dc5bedbf\"" Feb 13 15:52:39.674997 containerd[1485]: time="2025-02-13T15:52:39.674943686Z" level=info msg="StartContainer for 
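The apiserver image pull above reports roughly 42 MB read over about 3.3 s; dividing the two gives the effective pull rate, a quick sanity check when comparing pulls. Note that the "bytes read" figure containerd reports for the transfer need not equal the unpacked size it also prints; the arithmetic below just uses the logged values as-is.

// pull_rate_sketch.go: back-of-the-envelope pull rate for the apiserver image
// pull logged above (bytes read=42001404 in 3.302578441s). Pure arithmetic.
package main

import (
	"fmt"
	"time"
)

func main() {
	bytesRead := 42001404.0
	dur, _ := time.ParseDuration("3.302578441s")
	mib := bytesRead / (1 << 20)
	fmt.Printf("%.1f MiB in %s = %.1f MiB/s\n", mib, dur, mib/dur.Seconds())
}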
\"0f21ebfba8efae58be3f8c9c3d5751c654b5a0d980d7eb0cef7e7ce8dc5bedbf\"" Feb 13 15:52:39.709250 systemd[1]: Started cri-containerd-0f21ebfba8efae58be3f8c9c3d5751c654b5a0d980d7eb0cef7e7ce8dc5bedbf.scope - libcontainer container 0f21ebfba8efae58be3f8c9c3d5751c654b5a0d980d7eb0cef7e7ce8dc5bedbf. Feb 13 15:52:39.756366 containerd[1485]: time="2025-02-13T15:52:39.756306059Z" level=info msg="StartContainer for \"0f21ebfba8efae58be3f8c9c3d5751c654b5a0d980d7eb0cef7e7ce8dc5bedbf\" returns successfully" Feb 13 15:52:40.150633 containerd[1485]: time="2025-02-13T15:52:40.150584092Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:40.152307 containerd[1485]: time="2025-02-13T15:52:40.151588146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 15:52:40.153806 containerd[1485]: time="2025-02-13T15:52:40.153775960Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 498.060874ms" Feb 13 15:52:40.153897 containerd[1485]: time="2025-02-13T15:52:40.153809884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 15:52:40.154622 containerd[1485]: time="2025-02-13T15:52:40.154346398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 15:52:40.156421 containerd[1485]: time="2025-02-13T15:52:40.156380927Z" level=info msg="CreateContainer within sandbox \"1f201f44993387c0250288e8c9e09bc524a3b11e17a70158929c26b36be07b58\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 15:52:40.172154 containerd[1485]: time="2025-02-13T15:52:40.172108487Z" level=info msg="CreateContainer within sandbox \"1f201f44993387c0250288e8c9e09bc524a3b11e17a70158929c26b36be07b58\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ef77accdf5edbb33c231c33a770338dcabaf41d3e5696ca1c01f0a61c87678d1\"" Feb 13 15:52:40.172585 containerd[1485]: time="2025-02-13T15:52:40.172553784Z" level=info msg="StartContainer for \"ef77accdf5edbb33c231c33a770338dcabaf41d3e5696ca1c01f0a61c87678d1\"" Feb 13 15:52:40.198249 systemd[1]: Started cri-containerd-ef77accdf5edbb33c231c33a770338dcabaf41d3e5696ca1c01f0a61c87678d1.scope - libcontainer container ef77accdf5edbb33c231c33a770338dcabaf41d3e5696ca1c01f0a61c87678d1. 
Feb 13 15:52:40.243124 containerd[1485]: time="2025-02-13T15:52:40.243012024Z" level=info msg="StartContainer for \"ef77accdf5edbb33c231c33a770338dcabaf41d3e5696ca1c01f0a61c87678d1\" returns successfully" Feb 13 15:52:40.391853 kubelet[2674]: I0213 15:52:40.391801 2674 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7db6857c7b-lpnmw" podStartSLOduration=27.575737702 podStartE2EDuration="33.391375957s" podCreationTimestamp="2025-02-13 15:52:07 +0000 UTC" firstStartedPulling="2025-02-13 15:52:34.33849934 +0000 UTC m=+53.993875050" lastFinishedPulling="2025-02-13 15:52:40.154137596 +0000 UTC m=+59.809513305" observedRunningTime="2025-02-13 15:52:40.390959324 +0000 UTC m=+60.046335043" watchObservedRunningTime="2025-02-13 15:52:40.391375957 +0000 UTC m=+60.046751666" Feb 13 15:52:40.417852 containerd[1485]: time="2025-02-13T15:52:40.417732589Z" level=info msg="StopPodSandbox for \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\"" Feb 13 15:52:40.417852 containerd[1485]: time="2025-02-13T15:52:40.417842720Z" level=info msg="TearDown network for sandbox \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\" successfully" Feb 13 15:52:40.417852 containerd[1485]: time="2025-02-13T15:52:40.417852930Z" level=info msg="StopPodSandbox for \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\" returns successfully" Feb 13 15:52:40.426694 containerd[1485]: time="2025-02-13T15:52:40.426650865Z" level=info msg="RemovePodSandbox for \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\"" Feb 13 15:52:40.442816 containerd[1485]: time="2025-02-13T15:52:40.441027333Z" level=info msg="Forcibly stopping sandbox \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\"" Feb 13 15:52:40.442816 containerd[1485]: time="2025-02-13T15:52:40.441221096Z" level=info msg="TearDown network for sandbox \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\" successfully" Feb 13 15:52:40.470130 containerd[1485]: time="2025-02-13T15:52:40.470079205Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
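The startup-latency line above carries two figures for the apiserver pod: podStartE2EDuration (observed running time minus creation, about 33.39 s) and podStartSLOduration (about 27.58 s), which for this pod equals the end-to-end figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling, about 5.82 s). The sketch below redoes that arithmetic from the timestamps in the log; take it as a consistency check on the numbers, not as a definition of kubelet's metric.

// startup_latency_check.go: re-derive the durations in the kubelet
// pod_startup_latency_tracker line above from its own timestamps.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Layout matches the "2025-02-13 15:52:07 +0000 UTC" style used in the log.
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-02-13 15:52:07 +0000 UTC")
	firstPull := mustParse("2025-02-13 15:52:34.33849934 +0000 UTC")
	lastPull := mustParse("2025-02-13 15:52:40.154137596 +0000 UTC")
	running := mustParse("2025-02-13 15:52:40.391375957 +0000 UTC")

	e2e := running.Sub(created)   // ~33.39s, cf. podStartE2EDuration
	pull := lastPull.Sub(firstPull) // ~5.82s image-pull window
	fmt.Println("E2E:", e2e)
	fmt.Println("pull window:", pull)
	fmt.Println("E2E - pull:", e2e-pull) // ~27.58s, cf. podStartSLOduration
}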
Feb 13 15:52:40.470390 containerd[1485]: time="2025-02-13T15:52:40.470151274Z" level=info msg="RemovePodSandbox \"1d0d955bb258255de661f6089b7920a63020b0ca069ec3db338e409cd71af282\" returns successfully" Feb 13 15:52:40.471089 containerd[1485]: time="2025-02-13T15:52:40.471015760Z" level=info msg="StopPodSandbox for \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\"" Feb 13 15:52:40.471226 containerd[1485]: time="2025-02-13T15:52:40.471134558Z" level=info msg="TearDown network for sandbox \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\" successfully" Feb 13 15:52:40.471226 containerd[1485]: time="2025-02-13T15:52:40.471144086Z" level=info msg="StopPodSandbox for \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\" returns successfully" Feb 13 15:52:40.473614 containerd[1485]: time="2025-02-13T15:52:40.471424537Z" level=info msg="RemovePodSandbox for \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\"" Feb 13 15:52:40.473614 containerd[1485]: time="2025-02-13T15:52:40.471442342Z" level=info msg="Forcibly stopping sandbox \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\"" Feb 13 15:52:40.473614 containerd[1485]: time="2025-02-13T15:52:40.471510031Z" level=info msg="TearDown network for sandbox \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\" successfully" Feb 13 15:52:40.477451 containerd[1485]: time="2025-02-13T15:52:40.477421537Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:52:40.477588 containerd[1485]: time="2025-02-13T15:52:40.477574150Z" level=info msg="RemovePodSandbox \"ff23c52d0bf668477d0db72e5cd1b64985b3a96d563d6ccc20963d0e266cf0b5\" returns successfully" Feb 13 15:52:40.478071 containerd[1485]: time="2025-02-13T15:52:40.478028145Z" level=info msg="StopPodSandbox for \"edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e\"" Feb 13 15:52:40.481228 containerd[1485]: time="2025-02-13T15:52:40.481199232Z" level=info msg="TearDown network for sandbox \"edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e\" successfully" Feb 13 15:52:40.481322 containerd[1485]: time="2025-02-13T15:52:40.481309535Z" level=info msg="StopPodSandbox for \"edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e\" returns successfully" Feb 13 15:52:40.485185 containerd[1485]: time="2025-02-13T15:52:40.485151604Z" level=info msg="RemovePodSandbox for \"edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e\"" Feb 13 15:52:40.485185 containerd[1485]: time="2025-02-13T15:52:40.485187904Z" level=info msg="Forcibly stopping sandbox \"edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e\"" Feb 13 15:52:40.485331 containerd[1485]: time="2025-02-13T15:52:40.485276726Z" level=info msg="TearDown network for sandbox \"edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e\" successfully" Feb 13 15:52:40.493801 containerd[1485]: time="2025-02-13T15:52:40.493701531Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:52:40.493801 containerd[1485]: time="2025-02-13T15:52:40.493802476Z" level=info msg="RemovePodSandbox \"edfdc59f343ea3bcd7cb5fb41db74294dbc60f5cd2b4647c2c8a5fdade59c79e\" returns successfully" Feb 13 15:52:40.494198 containerd[1485]: time="2025-02-13T15:52:40.494165886Z" level=info msg="StopPodSandbox for \"115a7b90fd74f188605948db6c4a642fa6a2bc7b610ff5197e80bd7d90895b22\"" Feb 13 15:52:40.494324 containerd[1485]: time="2025-02-13T15:52:40.494270327Z" level=info msg="TearDown network for sandbox \"115a7b90fd74f188605948db6c4a642fa6a2bc7b610ff5197e80bd7d90895b22\" successfully" Feb 13 15:52:40.494324 containerd[1485]: time="2025-02-13T15:52:40.494310995Z" level=info msg="StopPodSandbox for \"115a7b90fd74f188605948db6c4a642fa6a2bc7b610ff5197e80bd7d90895b22\" returns successfully" Feb 13 15:52:40.494688 containerd[1485]: time="2025-02-13T15:52:40.494643115Z" level=info msg="RemovePodSandbox for \"115a7b90fd74f188605948db6c4a642fa6a2bc7b610ff5197e80bd7d90895b22\"" Feb 13 15:52:40.494688 containerd[1485]: time="2025-02-13T15:52:40.494680898Z" level=info msg="Forcibly stopping sandbox \"115a7b90fd74f188605948db6c4a642fa6a2bc7b610ff5197e80bd7d90895b22\"" Feb 13 15:52:40.494772 containerd[1485]: time="2025-02-13T15:52:40.494742126Z" level=info msg="TearDown network for sandbox \"115a7b90fd74f188605948db6c4a642fa6a2bc7b610ff5197e80bd7d90895b22\" successfully" Feb 13 15:52:40.498199 containerd[1485]: time="2025-02-13T15:52:40.498169646Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"115a7b90fd74f188605948db6c4a642fa6a2bc7b610ff5197e80bd7d90895b22\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:52:40.498249 containerd[1485]: time="2025-02-13T15:52:40.498204163Z" level=info msg="RemovePodSandbox \"115a7b90fd74f188605948db6c4a642fa6a2bc7b610ff5197e80bd7d90895b22\" returns successfully" Feb 13 15:52:40.498645 containerd[1485]: time="2025-02-13T15:52:40.498617459Z" level=info msg="StopPodSandbox for \"7b202e0a8250435728e02f8bde94c825ab1ad61219ed93302f1468cb1f73698f\"" Feb 13 15:52:40.499011 containerd[1485]: time="2025-02-13T15:52:40.498862341Z" level=info msg="TearDown network for sandbox \"7b202e0a8250435728e02f8bde94c825ab1ad61219ed93302f1468cb1f73698f\" successfully" Feb 13 15:52:40.499011 containerd[1485]: time="2025-02-13T15:52:40.498909111Z" level=info msg="StopPodSandbox for \"7b202e0a8250435728e02f8bde94c825ab1ad61219ed93302f1468cb1f73698f\" returns successfully" Feb 13 15:52:40.499179 containerd[1485]: time="2025-02-13T15:52:40.499154162Z" level=info msg="RemovePodSandbox for \"7b202e0a8250435728e02f8bde94c825ab1ad61219ed93302f1468cb1f73698f\"" Feb 13 15:52:40.499216 containerd[1485]: time="2025-02-13T15:52:40.499181005Z" level=info msg="Forcibly stopping sandbox \"7b202e0a8250435728e02f8bde94c825ab1ad61219ed93302f1468cb1f73698f\"" Feb 13 15:52:40.499277 containerd[1485]: time="2025-02-13T15:52:40.499247893Z" level=info msg="TearDown network for sandbox \"7b202e0a8250435728e02f8bde94c825ab1ad61219ed93302f1468cb1f73698f\" successfully" Feb 13 15:52:40.503123 containerd[1485]: time="2025-02-13T15:52:40.503099802Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7b202e0a8250435728e02f8bde94c825ab1ad61219ed93302f1468cb1f73698f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:52:40.503181 containerd[1485]: time="2025-02-13T15:52:40.503131022Z" level=info msg="RemovePodSandbox \"7b202e0a8250435728e02f8bde94c825ab1ad61219ed93302f1468cb1f73698f\" returns successfully" Feb 13 15:52:40.503363 containerd[1485]: time="2025-02-13T15:52:40.503344193Z" level=info msg="StopPodSandbox for \"529adf14b2e5a6d88d507a8c413eef03858081a1b5e72550a2d1564216663e83\"" Feb 13 15:52:40.503464 containerd[1485]: time="2025-02-13T15:52:40.503440098Z" level=info msg="TearDown network for sandbox \"529adf14b2e5a6d88d507a8c413eef03858081a1b5e72550a2d1564216663e83\" successfully" Feb 13 15:52:40.503464 containerd[1485]: time="2025-02-13T15:52:40.503452431Z" level=info msg="StopPodSandbox for \"529adf14b2e5a6d88d507a8c413eef03858081a1b5e72550a2d1564216663e83\" returns successfully" Feb 13 15:52:40.503686 containerd[1485]: time="2025-02-13T15:52:40.503662957Z" level=info msg="RemovePodSandbox for \"529adf14b2e5a6d88d507a8c413eef03858081a1b5e72550a2d1564216663e83\"" Feb 13 15:52:40.503722 containerd[1485]: time="2025-02-13T15:52:40.503687634Z" level=info msg="Forcibly stopping sandbox \"529adf14b2e5a6d88d507a8c413eef03858081a1b5e72550a2d1564216663e83\"" Feb 13 15:52:40.503803 containerd[1485]: time="2025-02-13T15:52:40.503762318Z" level=info msg="TearDown network for sandbox \"529adf14b2e5a6d88d507a8c413eef03858081a1b5e72550a2d1564216663e83\" successfully" Feb 13 15:52:40.507399 containerd[1485]: time="2025-02-13T15:52:40.507369756Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"529adf14b2e5a6d88d507a8c413eef03858081a1b5e72550a2d1564216663e83\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:52:40.507479 containerd[1485]: time="2025-02-13T15:52:40.507400985Z" level=info msg="RemovePodSandbox \"529adf14b2e5a6d88d507a8c413eef03858081a1b5e72550a2d1564216663e83\" returns successfully" Feb 13 15:52:40.507733 containerd[1485]: time="2025-02-13T15:52:40.507713427Z" level=info msg="StopPodSandbox for \"800e5ed77407e7053aafda7c09deaaace97d671b0c1a8ed1dde7fd6cf8b96963\"" Feb 13 15:52:40.507879 containerd[1485]: time="2025-02-13T15:52:40.507856413Z" level=info msg="TearDown network for sandbox \"800e5ed77407e7053aafda7c09deaaace97d671b0c1a8ed1dde7fd6cf8b96963\" successfully" Feb 13 15:52:40.507879 containerd[1485]: time="2025-02-13T15:52:40.507873475Z" level=info msg="StopPodSandbox for \"800e5ed77407e7053aafda7c09deaaace97d671b0c1a8ed1dde7fd6cf8b96963\" returns successfully" Feb 13 15:52:40.508207 containerd[1485]: time="2025-02-13T15:52:40.508175077Z" level=info msg="RemovePodSandbox for \"800e5ed77407e7053aafda7c09deaaace97d671b0c1a8ed1dde7fd6cf8b96963\"" Feb 13 15:52:40.508207 containerd[1485]: time="2025-02-13T15:52:40.508200646Z" level=info msg="Forcibly stopping sandbox \"800e5ed77407e7053aafda7c09deaaace97d671b0c1a8ed1dde7fd6cf8b96963\"" Feb 13 15:52:40.508294 containerd[1485]: time="2025-02-13T15:52:40.508269618Z" level=info msg="TearDown network for sandbox \"800e5ed77407e7053aafda7c09deaaace97d671b0c1a8ed1dde7fd6cf8b96963\" successfully" Feb 13 15:52:40.511518 containerd[1485]: time="2025-02-13T15:52:40.511486725Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"800e5ed77407e7053aafda7c09deaaace97d671b0c1a8ed1dde7fd6cf8b96963\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:52:40.511566 containerd[1485]: time="2025-02-13T15:52:40.511518526Z" level=info msg="RemovePodSandbox \"800e5ed77407e7053aafda7c09deaaace97d671b0c1a8ed1dde7fd6cf8b96963\" returns successfully" Feb 13 15:52:40.511775 containerd[1485]: time="2025-02-13T15:52:40.511748539Z" level=info msg="StopPodSandbox for \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\"" Feb 13 15:52:40.511881 containerd[1485]: time="2025-02-13T15:52:40.511835376Z" level=info msg="TearDown network for sandbox \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\" successfully" Feb 13 15:52:40.511881 containerd[1485]: time="2025-02-13T15:52:40.511873730Z" level=info msg="StopPodSandbox for \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\" returns successfully" Feb 13 15:52:40.514067 containerd[1485]: time="2025-02-13T15:52:40.512218324Z" level=info msg="RemovePodSandbox for \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\"" Feb 13 15:52:40.514067 containerd[1485]: time="2025-02-13T15:52:40.512241158Z" level=info msg="Forcibly stopping sandbox \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\"" Feb 13 15:52:40.514067 containerd[1485]: time="2025-02-13T15:52:40.512312906Z" level=info msg="TearDown network for sandbox \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\" successfully" Feb 13 15:52:40.515813 containerd[1485]: time="2025-02-13T15:52:40.515776116Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:52:40.515813 containerd[1485]: time="2025-02-13T15:52:40.515822816Z" level=info msg="RemovePodSandbox \"b0dc14c8972521f8f7dc7aceb4f435e187bf1e763bec4049aae4f80c4df8715e\" returns successfully" Feb 13 15:52:40.516070 containerd[1485]: time="2025-02-13T15:52:40.516032139Z" level=info msg="StopPodSandbox for \"88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c\"" Feb 13 15:52:40.516146 containerd[1485]: time="2025-02-13T15:52:40.516120659Z" level=info msg="TearDown network for sandbox \"88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c\" successfully" Feb 13 15:52:40.516146 containerd[1485]: time="2025-02-13T15:52:40.516136570Z" level=info msg="StopPodSandbox for \"88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c\" returns successfully" Feb 13 15:52:40.516362 containerd[1485]: time="2025-02-13T15:52:40.516324452Z" level=info msg="RemovePodSandbox for \"88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c\"" Feb 13 15:52:40.516362 containerd[1485]: time="2025-02-13T15:52:40.516344621Z" level=info msg="Forcibly stopping sandbox \"88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c\"" Feb 13 15:52:40.516446 containerd[1485]: time="2025-02-13T15:52:40.516416990Z" level=info msg="TearDown network for sandbox \"88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c\" successfully" Feb 13 15:52:40.519910 containerd[1485]: time="2025-02-13T15:52:40.519878076Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:52:40.519993 containerd[1485]: time="2025-02-13T15:52:40.519915368Z" level=info msg="RemovePodSandbox \"88c81a908b996cb50fdebfcb8c64cb23a582bf97937c56173f945983acbab14c\" returns successfully" Feb 13 15:52:40.520181 containerd[1485]: time="2025-02-13T15:52:40.520141313Z" level=info msg="StopPodSandbox for \"df6f7abed86db3382c32d12030c43f2555823cf6e979dd79f3a63308cf66d6f4\"" Feb 13 15:52:40.520239 containerd[1485]: time="2025-02-13T15:52:40.520225365Z" level=info msg="TearDown network for sandbox \"df6f7abed86db3382c32d12030c43f2555823cf6e979dd79f3a63308cf66d6f4\" successfully" Feb 13 15:52:40.520266 containerd[1485]: time="2025-02-13T15:52:40.520238921Z" level=info msg="StopPodSandbox for \"df6f7abed86db3382c32d12030c43f2555823cf6e979dd79f3a63308cf66d6f4\" returns successfully" Feb 13 15:52:40.520495 containerd[1485]: time="2025-02-13T15:52:40.520468113Z" level=info msg="RemovePodSandbox for \"df6f7abed86db3382c32d12030c43f2555823cf6e979dd79f3a63308cf66d6f4\"" Feb 13 15:52:40.526444 containerd[1485]: time="2025-02-13T15:52:40.520491948Z" level=info msg="Forcibly stopping sandbox \"df6f7abed86db3382c32d12030c43f2555823cf6e979dd79f3a63308cf66d6f4\"" Feb 13 15:52:40.526531 containerd[1485]: time="2025-02-13T15:52:40.526493567Z" level=info msg="TearDown network for sandbox \"df6f7abed86db3382c32d12030c43f2555823cf6e979dd79f3a63308cf66d6f4\" successfully" Feb 13 15:52:40.529763 containerd[1485]: time="2025-02-13T15:52:40.529713730Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df6f7abed86db3382c32d12030c43f2555823cf6e979dd79f3a63308cf66d6f4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:52:40.529763 containerd[1485]: time="2025-02-13T15:52:40.529746262Z" level=info msg="RemovePodSandbox \"df6f7abed86db3382c32d12030c43f2555823cf6e979dd79f3a63308cf66d6f4\" returns successfully" Feb 13 15:52:40.529999 containerd[1485]: time="2025-02-13T15:52:40.529962458Z" level=info msg="StopPodSandbox for \"a11d2ce61fc739661fe63087cbdf4f19cb71af635a90addc6a841f53edbee800\"" Feb 13 15:52:40.530298 containerd[1485]: time="2025-02-13T15:52:40.530071809Z" level=info msg="TearDown network for sandbox \"a11d2ce61fc739661fe63087cbdf4f19cb71af635a90addc6a841f53edbee800\" successfully" Feb 13 15:52:40.530298 containerd[1485]: time="2025-02-13T15:52:40.530084182Z" level=info msg="StopPodSandbox for \"a11d2ce61fc739661fe63087cbdf4f19cb71af635a90addc6a841f53edbee800\" returns successfully" Feb 13 15:52:40.530591 containerd[1485]: time="2025-02-13T15:52:40.530561382Z" level=info msg="RemovePodSandbox for \"a11d2ce61fc739661fe63087cbdf4f19cb71af635a90addc6a841f53edbee800\"" Feb 13 15:52:40.530591 containerd[1485]: time="2025-02-13T15:52:40.530590358Z" level=info msg="Forcibly stopping sandbox \"a11d2ce61fc739661fe63087cbdf4f19cb71af635a90addc6a841f53edbee800\"" Feb 13 15:52:40.530687 containerd[1485]: time="2025-02-13T15:52:40.530660293Z" level=info msg="TearDown network for sandbox \"a11d2ce61fc739661fe63087cbdf4f19cb71af635a90addc6a841f53edbee800\" successfully" Feb 13 15:52:40.533813 containerd[1485]: time="2025-02-13T15:52:40.533782726Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a11d2ce61fc739661fe63087cbdf4f19cb71af635a90addc6a841f53edbee800\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:52:40.533865 containerd[1485]: time="2025-02-13T15:52:40.533814156Z" level=info msg="RemovePodSandbox \"a11d2ce61fc739661fe63087cbdf4f19cb71af635a90addc6a841f53edbee800\" returns successfully" Feb 13 15:52:40.534064 containerd[1485]: time="2025-02-13T15:52:40.534033790Z" level=info msg="StopPodSandbox for \"2e94151bee038cd4d514b4e37eb19d4460fc816ec9b6d4791deaaa2a8030f875\"" Feb 13 15:52:40.534146 containerd[1485]: time="2025-02-13T15:52:40.534132760Z" level=info msg="TearDown network for sandbox \"2e94151bee038cd4d514b4e37eb19d4460fc816ec9b6d4791deaaa2a8030f875\" successfully" Feb 13 15:52:40.534146 containerd[1485]: time="2025-02-13T15:52:40.534144362Z" level=info msg="StopPodSandbox for \"2e94151bee038cd4d514b4e37eb19d4460fc816ec9b6d4791deaaa2a8030f875\" returns successfully" Feb 13 15:52:40.534418 containerd[1485]: time="2025-02-13T15:52:40.534393012Z" level=info msg="RemovePodSandbox for \"2e94151bee038cd4d514b4e37eb19d4460fc816ec9b6d4791deaaa2a8030f875\"" Feb 13 15:52:40.534418 containerd[1485]: time="2025-02-13T15:52:40.534414593Z" level=info msg="Forcibly stopping sandbox \"2e94151bee038cd4d514b4e37eb19d4460fc816ec9b6d4791deaaa2a8030f875\"" Feb 13 15:52:40.534569 containerd[1485]: time="2025-02-13T15:52:40.534520156Z" level=info msg="TearDown network for sandbox \"2e94151bee038cd4d514b4e37eb19d4460fc816ec9b6d4791deaaa2a8030f875\" successfully" Feb 13 15:52:40.537702 containerd[1485]: time="2025-02-13T15:52:40.537673379Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2e94151bee038cd4d514b4e37eb19d4460fc816ec9b6d4791deaaa2a8030f875\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:52:40.537760 containerd[1485]: time="2025-02-13T15:52:40.537707094Z" level=info msg="RemovePodSandbox \"2e94151bee038cd4d514b4e37eb19d4460fc816ec9b6d4791deaaa2a8030f875\" returns successfully" Feb 13 15:52:40.537946 containerd[1485]: time="2025-02-13T15:52:40.537908011Z" level=info msg="StopPodSandbox for \"e4200ee9bb5433be7e929357a9329bbc2b16f2ac0de23e7a4c70312fdd04becb\"" Feb 13 15:52:40.538052 containerd[1485]: time="2025-02-13T15:52:40.537992383Z" level=info msg="TearDown network for sandbox \"e4200ee9bb5433be7e929357a9329bbc2b16f2ac0de23e7a4c70312fdd04becb\" successfully" Feb 13 15:52:40.538052 containerd[1485]: time="2025-02-13T15:52:40.538008314Z" level=info msg="StopPodSandbox for \"e4200ee9bb5433be7e929357a9329bbc2b16f2ac0de23e7a4c70312fdd04becb\" returns successfully" Feb 13 15:52:40.538293 containerd[1485]: time="2025-02-13T15:52:40.538271030Z" level=info msg="RemovePodSandbox for \"e4200ee9bb5433be7e929357a9329bbc2b16f2ac0de23e7a4c70312fdd04becb\"" Feb 13 15:52:40.538333 containerd[1485]: time="2025-02-13T15:52:40.538296991Z" level=info msg="Forcibly stopping sandbox \"e4200ee9bb5433be7e929357a9329bbc2b16f2ac0de23e7a4c70312fdd04becb\"" Feb 13 15:52:40.538393 containerd[1485]: time="2025-02-13T15:52:40.538362627Z" level=info msg="TearDown network for sandbox \"e4200ee9bb5433be7e929357a9329bbc2b16f2ac0de23e7a4c70312fdd04becb\" successfully" Feb 13 15:52:40.544540 containerd[1485]: time="2025-02-13T15:52:40.544519064Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4200ee9bb5433be7e929357a9329bbc2b16f2ac0de23e7a4c70312fdd04becb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:52:40.544686 containerd[1485]: time="2025-02-13T15:52:40.544634676Z" level=info msg="RemovePodSandbox \"e4200ee9bb5433be7e929357a9329bbc2b16f2ac0de23e7a4c70312fdd04becb\" returns successfully" Feb 13 15:52:40.544937 containerd[1485]: time="2025-02-13T15:52:40.544903384Z" level=info msg="StopPodSandbox for \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\"" Feb 13 15:52:40.545010 containerd[1485]: time="2025-02-13T15:52:40.544991413Z" level=info msg="TearDown network for sandbox \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\" successfully" Feb 13 15:52:40.545010 containerd[1485]: time="2025-02-13T15:52:40.545006643Z" level=info msg="StopPodSandbox for \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\" returns successfully" Feb 13 15:52:40.545259 containerd[1485]: time="2025-02-13T15:52:40.545229311Z" level=info msg="RemovePodSandbox for \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\"" Feb 13 15:52:40.545259 containerd[1485]: time="2025-02-13T15:52:40.545246795Z" level=info msg="Forcibly stopping sandbox \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\"" Feb 13 15:52:40.545377 containerd[1485]: time="2025-02-13T15:52:40.545307101Z" level=info msg="TearDown network for sandbox \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\" successfully" Feb 13 15:52:40.549448 containerd[1485]: time="2025-02-13T15:52:40.549421917Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:52:40.549499 containerd[1485]: time="2025-02-13T15:52:40.549452425Z" level=info msg="RemovePodSandbox \"99ac7617a177e6101b95e459ec08b478f3980e1a2460b275638a3538410490df\" returns successfully" Feb 13 15:52:40.550335 containerd[1485]: time="2025-02-13T15:52:40.549693469Z" level=info msg="StopPodSandbox for \"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\"" Feb 13 15:52:40.550335 containerd[1485]: time="2025-02-13T15:52:40.549778083Z" level=info msg="TearDown network for sandbox \"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\" successfully" Feb 13 15:52:40.550335 containerd[1485]: time="2025-02-13T15:52:40.549788302Z" level=info msg="StopPodSandbox for \"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\" returns successfully" Feb 13 15:52:40.550335 containerd[1485]: time="2025-02-13T15:52:40.550135381Z" level=info msg="RemovePodSandbox for \"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\"" Feb 13 15:52:40.550335 containerd[1485]: time="2025-02-13T15:52:40.550289548Z" level=info msg="Forcibly stopping sandbox \"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\"" Feb 13 15:52:40.550984 containerd[1485]: time="2025-02-13T15:52:40.550943026Z" level=info msg="TearDown network for sandbox \"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\" successfully" Feb 13 15:52:40.554439 containerd[1485]: time="2025-02-13T15:52:40.554399202Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:52:40.554439 containerd[1485]: time="2025-02-13T15:52:40.554433018Z" level=info msg="RemovePodSandbox \"84ccc9b1b37da1b008c0dfa6a7140fb3261abba41b0b80fc0e9fb5ae536f8dd3\" returns successfully" Feb 13 15:52:40.554731 containerd[1485]: time="2025-02-13T15:52:40.554709190Z" level=info msg="StopPodSandbox for \"ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a\"" Feb 13 15:52:40.554817 containerd[1485]: time="2025-02-13T15:52:40.554780767Z" level=info msg="TearDown network for sandbox \"ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a\" successfully" Feb 13 15:52:40.554817 containerd[1485]: time="2025-02-13T15:52:40.554795626Z" level=info msg="StopPodSandbox for \"ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a\" returns successfully" Feb 13 15:52:40.557214 containerd[1485]: time="2025-02-13T15:52:40.555119088Z" level=info msg="RemovePodSandbox for \"ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a\"" Feb 13 15:52:40.557214 containerd[1485]: time="2025-02-13T15:52:40.555144799Z" level=info msg="Forcibly stopping sandbox \"ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a\"" Feb 13 15:52:40.557214 containerd[1485]: time="2025-02-13T15:52:40.555214703Z" level=info msg="TearDown network for sandbox \"ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a\" successfully" Feb 13 15:52:40.559156 containerd[1485]: time="2025-02-13T15:52:40.559138410Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:52:40.559238 containerd[1485]: time="2025-02-13T15:52:40.559225768Z" level=info msg="RemovePodSandbox \"ca4fd8af4c6b526c023a22b9c1fb3a2583ddb88070849e67f68281398016870a\" returns successfully" Feb 13 15:52:40.559528 containerd[1485]: time="2025-02-13T15:52:40.559505347Z" level=info msg="StopPodSandbox for \"89d75f92637d84161a3a05d642197773d8fe53597397d47ba98f36f363042782\"" Feb 13 15:52:40.559646 containerd[1485]: time="2025-02-13T15:52:40.559594157Z" level=info msg="TearDown network for sandbox \"89d75f92637d84161a3a05d642197773d8fe53597397d47ba98f36f363042782\" successfully" Feb 13 15:52:40.559646 containerd[1485]: time="2025-02-13T15:52:40.559636298Z" level=info msg="StopPodSandbox for \"89d75f92637d84161a3a05d642197773d8fe53597397d47ba98f36f363042782\" returns successfully" Feb 13 15:52:40.559908 containerd[1485]: time="2025-02-13T15:52:40.559888815Z" level=info msg="RemovePodSandbox for \"89d75f92637d84161a3a05d642197773d8fe53597397d47ba98f36f363042782\"" Feb 13 15:52:40.559946 containerd[1485]: time="2025-02-13T15:52:40.559910787Z" level=info msg="Forcibly stopping sandbox \"89d75f92637d84161a3a05d642197773d8fe53597397d47ba98f36f363042782\"" Feb 13 15:52:40.560016 containerd[1485]: time="2025-02-13T15:52:40.559986453Z" level=info msg="TearDown network for sandbox \"89d75f92637d84161a3a05d642197773d8fe53597397d47ba98f36f363042782\" successfully" Feb 13 15:52:40.564379 containerd[1485]: time="2025-02-13T15:52:40.564292547Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"89d75f92637d84161a3a05d642197773d8fe53597397d47ba98f36f363042782\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:52:40.564379 containerd[1485]: time="2025-02-13T15:52:40.564325360Z" level=info msg="RemovePodSandbox \"89d75f92637d84161a3a05d642197773d8fe53597397d47ba98f36f363042782\" returns successfully" Feb 13 15:52:40.564862 containerd[1485]: time="2025-02-13T15:52:40.564706985Z" level=info msg="StopPodSandbox for \"60429a89152784060d7f572c9878da40086e694a3190d5f2e95e78d7c271dd87\"" Feb 13 15:52:40.564862 containerd[1485]: time="2025-02-13T15:52:40.564805285Z" level=info msg="TearDown network for sandbox \"60429a89152784060d7f572c9878da40086e694a3190d5f2e95e78d7c271dd87\" successfully" Feb 13 15:52:40.564862 containerd[1485]: time="2025-02-13T15:52:40.564815844Z" level=info msg="StopPodSandbox for \"60429a89152784060d7f572c9878da40086e694a3190d5f2e95e78d7c271dd87\" returns successfully" Feb 13 15:52:40.565080 containerd[1485]: time="2025-02-13T15:52:40.565035708Z" level=info msg="RemovePodSandbox for \"60429a89152784060d7f572c9878da40086e694a3190d5f2e95e78d7c271dd87\"" Feb 13 15:52:40.565120 containerd[1485]: time="2025-02-13T15:52:40.565084542Z" level=info msg="Forcibly stopping sandbox \"60429a89152784060d7f572c9878da40086e694a3190d5f2e95e78d7c271dd87\"" Feb 13 15:52:40.565191 containerd[1485]: time="2025-02-13T15:52:40.565157071Z" level=info msg="TearDown network for sandbox \"60429a89152784060d7f572c9878da40086e694a3190d5f2e95e78d7c271dd87\" successfully" Feb 13 15:52:40.568839 containerd[1485]: time="2025-02-13T15:52:40.568813354Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"60429a89152784060d7f572c9878da40086e694a3190d5f2e95e78d7c271dd87\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:52:40.568996 containerd[1485]: time="2025-02-13T15:52:40.568919868Z" level=info msg="RemovePodSandbox \"60429a89152784060d7f572c9878da40086e694a3190d5f2e95e78d7c271dd87\" returns successfully" Feb 13 15:52:40.569519 containerd[1485]: time="2025-02-13T15:52:40.569221770Z" level=info msg="StopPodSandbox for \"e43fb613ca334456eb8a5b3e016eae6a232a97a1ce0ec5322588fdb82bf4383b\"" Feb 13 15:52:40.569519 containerd[1485]: time="2025-02-13T15:52:40.569346160Z" level=info msg="TearDown network for sandbox \"e43fb613ca334456eb8a5b3e016eae6a232a97a1ce0ec5322588fdb82bf4383b\" successfully" Feb 13 15:52:40.569519 containerd[1485]: time="2025-02-13T15:52:40.569358904Z" level=info msg="StopPodSandbox for \"e43fb613ca334456eb8a5b3e016eae6a232a97a1ce0ec5322588fdb82bf4383b\" returns successfully" Feb 13 15:52:40.569630 containerd[1485]: time="2025-02-13T15:52:40.569549621Z" level=info msg="RemovePodSandbox for \"e43fb613ca334456eb8a5b3e016eae6a232a97a1ce0ec5322588fdb82bf4383b\"" Feb 13 15:52:40.569630 containerd[1485]: time="2025-02-13T15:52:40.569566935Z" level=info msg="Forcibly stopping sandbox \"e43fb613ca334456eb8a5b3e016eae6a232a97a1ce0ec5322588fdb82bf4383b\"" Feb 13 15:52:40.569684 containerd[1485]: time="2025-02-13T15:52:40.569646769Z" level=info msg="TearDown network for sandbox \"e43fb613ca334456eb8a5b3e016eae6a232a97a1ce0ec5322588fdb82bf4383b\" successfully" Feb 13 15:52:40.572872 containerd[1485]: time="2025-02-13T15:52:40.572846761Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e43fb613ca334456eb8a5b3e016eae6a232a97a1ce0ec5322588fdb82bf4383b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:52:40.572923 containerd[1485]: time="2025-02-13T15:52:40.572893422Z" level=info msg="RemovePodSandbox \"e43fb613ca334456eb8a5b3e016eae6a232a97a1ce0ec5322588fdb82bf4383b\" returns successfully" Feb 13 15:52:40.573202 containerd[1485]: time="2025-02-13T15:52:40.573163982Z" level=info msg="StopPodSandbox for \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\"" Feb 13 15:52:40.573254 containerd[1485]: time="2025-02-13T15:52:40.573241932Z" level=info msg="TearDown network for sandbox \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\" successfully" Feb 13 15:52:40.573309 containerd[1485]: time="2025-02-13T15:52:40.573252543Z" level=info msg="StopPodSandbox for \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\" returns successfully" Feb 13 15:52:40.573478 containerd[1485]: time="2025-02-13T15:52:40.573450705Z" level=info msg="RemovePodSandbox for \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\"" Feb 13 15:52:40.573528 containerd[1485]: time="2025-02-13T15:52:40.573477857Z" level=info msg="Forcibly stopping sandbox \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\"" Feb 13 15:52:40.573569 containerd[1485]: time="2025-02-13T15:52:40.573539335Z" level=info msg="TearDown network for sandbox \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\" successfully" Feb 13 15:52:40.577027 containerd[1485]: time="2025-02-13T15:52:40.576883066Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:52:40.577027 containerd[1485]: time="2025-02-13T15:52:40.576942209Z" level=info msg="RemovePodSandbox \"3f57301a43e67e43ec03c8d5f8395e08339b522f906640d9a3cfb28e3deb8bfb\" returns successfully" Feb 13 15:52:40.578266 containerd[1485]: time="2025-02-13T15:52:40.578137932Z" level=info msg="StopPodSandbox for \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\"" Feb 13 15:52:40.578266 containerd[1485]: time="2025-02-13T15:52:40.578218096Z" level=info msg="TearDown network for sandbox \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\" successfully" Feb 13 15:52:40.578266 containerd[1485]: time="2025-02-13T15:52:40.578227845Z" level=info msg="StopPodSandbox for \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\" returns successfully" Feb 13 15:52:40.578420 containerd[1485]: time="2025-02-13T15:52:40.578389286Z" level=info msg="RemovePodSandbox for \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\"" Feb 13 15:52:40.578420 containerd[1485]: time="2025-02-13T15:52:40.578417731Z" level=info msg="Forcibly stopping sandbox \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\"" Feb 13 15:52:40.578579 containerd[1485]: time="2025-02-13T15:52:40.578539055Z" level=info msg="TearDown network for sandbox \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\" successfully" Feb 13 15:52:40.581935 containerd[1485]: time="2025-02-13T15:52:40.581915758Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:52:40.582079 containerd[1485]: time="2025-02-13T15:52:40.582017364Z" level=info msg="RemovePodSandbox \"679fc0c4f2b11300f0f4121a07a046a469509c18ac21ac4d256b5ee87ef5409f\" returns successfully" Feb 13 15:52:40.582401 containerd[1485]: time="2025-02-13T15:52:40.582372167Z" level=info msg="StopPodSandbox for \"ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb\"" Feb 13 15:52:40.582486 containerd[1485]: time="2025-02-13T15:52:40.582468182Z" level=info msg="TearDown network for sandbox \"ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb\" successfully" Feb 13 15:52:40.582486 containerd[1485]: time="2025-02-13T15:52:40.582483061Z" level=info msg="StopPodSandbox for \"ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb\" returns successfully" Feb 13 15:52:40.583240 containerd[1485]: time="2025-02-13T15:52:40.582664490Z" level=info msg="RemovePodSandbox for \"ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb\"" Feb 13 15:52:40.583240 containerd[1485]: time="2025-02-13T15:52:40.582683737Z" level=info msg="Forcibly stopping sandbox \"ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb\"" Feb 13 15:52:40.583240 containerd[1485]: time="2025-02-13T15:52:40.582752359Z" level=info msg="TearDown network for sandbox \"ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb\" successfully" Feb 13 15:52:40.586338 containerd[1485]: time="2025-02-13T15:52:40.586310262Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:52:40.586390 containerd[1485]: time="2025-02-13T15:52:40.586341411Z" level=info msg="RemovePodSandbox \"ace4ae3c66fefa8d11c868ac0dd29f45cdd07a540987ea34c95f1c8413e5a4fb\" returns successfully" Feb 13 15:52:40.586693 containerd[1485]: time="2025-02-13T15:52:40.586525497Z" level=info msg="StopPodSandbox for \"a907415d608139846599318ab511c71cb2815fc522f90b9abae68d6a17bd3efe\"" Feb 13 15:52:40.586693 containerd[1485]: time="2025-02-13T15:52:40.586608927Z" level=info msg="TearDown network for sandbox \"a907415d608139846599318ab511c71cb2815fc522f90b9abae68d6a17bd3efe\" successfully" Feb 13 15:52:40.586693 containerd[1485]: time="2025-02-13T15:52:40.586649044Z" level=info msg="StopPodSandbox for \"a907415d608139846599318ab511c71cb2815fc522f90b9abae68d6a17bd3efe\" returns successfully" Feb 13 15:52:40.586896 containerd[1485]: time="2025-02-13T15:52:40.586871092Z" level=info msg="RemovePodSandbox for \"a907415d608139846599318ab511c71cb2815fc522f90b9abae68d6a17bd3efe\"" Feb 13 15:52:40.586937 containerd[1485]: time="2025-02-13T15:52:40.586895839Z" level=info msg="Forcibly stopping sandbox \"a907415d608139846599318ab511c71cb2815fc522f90b9abae68d6a17bd3efe\"" Feb 13 15:52:40.587006 containerd[1485]: time="2025-02-13T15:52:40.586967217Z" level=info msg="TearDown network for sandbox \"a907415d608139846599318ab511c71cb2815fc522f90b9abae68d6a17bd3efe\" successfully" Feb 13 15:52:40.590429 containerd[1485]: time="2025-02-13T15:52:40.590406821Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a907415d608139846599318ab511c71cb2815fc522f90b9abae68d6a17bd3efe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:52:40.590478 containerd[1485]: time="2025-02-13T15:52:40.590437120Z" level=info msg="RemovePodSandbox \"a907415d608139846599318ab511c71cb2815fc522f90b9abae68d6a17bd3efe\" returns successfully" Feb 13 15:52:40.590699 containerd[1485]: time="2025-02-13T15:52:40.590667564Z" level=info msg="StopPodSandbox for \"81219097a27dcb504a3f76789f134167e37f8265b25d724bdd46b64457cd0fd0\"" Feb 13 15:52:40.590774 containerd[1485]: time="2025-02-13T15:52:40.590753479Z" level=info msg="TearDown network for sandbox \"81219097a27dcb504a3f76789f134167e37f8265b25d724bdd46b64457cd0fd0\" successfully" Feb 13 15:52:40.590774 containerd[1485]: time="2025-02-13T15:52:40.590763057Z" level=info msg="StopPodSandbox for \"81219097a27dcb504a3f76789f134167e37f8265b25d724bdd46b64457cd0fd0\" returns successfully" Feb 13 15:52:40.591005 containerd[1485]: time="2025-02-13T15:52:40.590958844Z" level=info msg="RemovePodSandbox for \"81219097a27dcb504a3f76789f134167e37f8265b25d724bdd46b64457cd0fd0\"" Feb 13 15:52:40.591005 containerd[1485]: time="2025-02-13T15:52:40.590987530Z" level=info msg="Forcibly stopping sandbox \"81219097a27dcb504a3f76789f134167e37f8265b25d724bdd46b64457cd0fd0\"" Feb 13 15:52:40.591092 containerd[1485]: time="2025-02-13T15:52:40.591063016Z" level=info msg="TearDown network for sandbox \"81219097a27dcb504a3f76789f134167e37f8265b25d724bdd46b64457cd0fd0\" successfully" Feb 13 15:52:40.594701 containerd[1485]: time="2025-02-13T15:52:40.594667667Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"81219097a27dcb504a3f76789f134167e37f8265b25d724bdd46b64457cd0fd0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:52:40.594764 containerd[1485]: time="2025-02-13T15:52:40.594709218Z" level=info msg="RemovePodSandbox \"81219097a27dcb504a3f76789f134167e37f8265b25d724bdd46b64457cd0fd0\" returns successfully" Feb 13 15:52:40.594959 containerd[1485]: time="2025-02-13T15:52:40.594922979Z" level=info msg="StopPodSandbox for \"d5aaeba253086d20016ba38f2df123d706efab77777bd4e827a20c39f90c20d7\"" Feb 13 15:52:40.595036 containerd[1485]: time="2025-02-13T15:52:40.595013524Z" level=info msg="TearDown network for sandbox \"d5aaeba253086d20016ba38f2df123d706efab77777bd4e827a20c39f90c20d7\" successfully" Feb 13 15:52:40.595036 containerd[1485]: time="2025-02-13T15:52:40.595031458Z" level=info msg="StopPodSandbox for \"d5aaeba253086d20016ba38f2df123d706efab77777bd4e827a20c39f90c20d7\" returns successfully" Feb 13 15:52:40.595323 containerd[1485]: time="2025-02-13T15:52:40.595277211Z" level=info msg="RemovePodSandbox for \"d5aaeba253086d20016ba38f2df123d706efab77777bd4e827a20c39f90c20d7\"" Feb 13 15:52:40.595323 containerd[1485]: time="2025-02-13T15:52:40.595299644Z" level=info msg="Forcibly stopping sandbox \"d5aaeba253086d20016ba38f2df123d706efab77777bd4e827a20c39f90c20d7\"" Feb 13 15:52:40.595403 containerd[1485]: time="2025-02-13T15:52:40.595363198Z" level=info msg="TearDown network for sandbox \"d5aaeba253086d20016ba38f2df123d706efab77777bd4e827a20c39f90c20d7\" successfully" Feb 13 15:52:40.605088 containerd[1485]: time="2025-02-13T15:52:40.605057788Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d5aaeba253086d20016ba38f2df123d706efab77777bd4e827a20c39f90c20d7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:52:40.605183 containerd[1485]: time="2025-02-13T15:52:40.605096112Z" level=info msg="RemovePodSandbox \"d5aaeba253086d20016ba38f2df123d706efab77777bd4e827a20c39f90c20d7\" returns successfully" Feb 13 15:52:40.605352 containerd[1485]: time="2025-02-13T15:52:40.605288804Z" level=info msg="StopPodSandbox for \"2b380686f722dafcaa1a62759a902a86ae3db1a699917ec82e1e0c0d87e2c228\"" Feb 13 15:52:40.605440 containerd[1485]: time="2025-02-13T15:52:40.605378185Z" level=info msg="TearDown network for sandbox \"2b380686f722dafcaa1a62759a902a86ae3db1a699917ec82e1e0c0d87e2c228\" successfully" Feb 13 15:52:40.605440 containerd[1485]: time="2025-02-13T15:52:40.605388786Z" level=info msg="StopPodSandbox for \"2b380686f722dafcaa1a62759a902a86ae3db1a699917ec82e1e0c0d87e2c228\" returns successfully" Feb 13 15:52:40.605660 containerd[1485]: time="2025-02-13T15:52:40.605606626Z" level=info msg="RemovePodSandbox for \"2b380686f722dafcaa1a62759a902a86ae3db1a699917ec82e1e0c0d87e2c228\"" Feb 13 15:52:40.605660 containerd[1485]: time="2025-02-13T15:52:40.605629900Z" level=info msg="Forcibly stopping sandbox \"2b380686f722dafcaa1a62759a902a86ae3db1a699917ec82e1e0c0d87e2c228\"" Feb 13 15:52:40.605744 containerd[1485]: time="2025-02-13T15:52:40.605695186Z" level=info msg="TearDown network for sandbox \"2b380686f722dafcaa1a62759a902a86ae3db1a699917ec82e1e0c0d87e2c228\" successfully" Feb 13 15:52:40.610798 containerd[1485]: time="2025-02-13T15:52:40.610769349Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2b380686f722dafcaa1a62759a902a86ae3db1a699917ec82e1e0c0d87e2c228\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:52:40.610882 containerd[1485]: time="2025-02-13T15:52:40.610804005Z" level=info msg="RemovePodSandbox \"2b380686f722dafcaa1a62759a902a86ae3db1a699917ec82e1e0c0d87e2c228\" returns successfully" Feb 13 15:52:40.611060 containerd[1485]: time="2025-02-13T15:52:40.610971578Z" level=info msg="StopPodSandbox for \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\"" Feb 13 15:52:40.611105 containerd[1485]: time="2025-02-13T15:52:40.611076731Z" level=info msg="TearDown network for sandbox \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\" successfully" Feb 13 15:52:40.611105 containerd[1485]: time="2025-02-13T15:52:40.611086510Z" level=info msg="StopPodSandbox for \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\" returns successfully" Feb 13 15:52:40.611308 containerd[1485]: time="2025-02-13T15:52:40.611265465Z" level=info msg="RemovePodSandbox for \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\"" Feb 13 15:52:40.611308 containerd[1485]: time="2025-02-13T15:52:40.611288569Z" level=info msg="Forcibly stopping sandbox \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\"" Feb 13 15:52:40.611384 containerd[1485]: time="2025-02-13T15:52:40.611348865Z" level=info msg="TearDown network for sandbox \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\" successfully" Feb 13 15:52:40.614687 containerd[1485]: time="2025-02-13T15:52:40.614656656Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:52:40.614746 containerd[1485]: time="2025-02-13T15:52:40.614689639Z" level=info msg="RemovePodSandbox \"84c19453d69c794875743cc2306ac66e7e4149e85a9ab43aadf735ec5bee0895\" returns successfully" Feb 13 15:52:40.615056 containerd[1485]: time="2025-02-13T15:52:40.614877962Z" level=info msg="StopPodSandbox for \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\"" Feb 13 15:52:40.615056 containerd[1485]: time="2025-02-13T15:52:40.614967244Z" level=info msg="TearDown network for sandbox \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\" successfully" Feb 13 15:52:40.615056 containerd[1485]: time="2025-02-13T15:52:40.614987002Z" level=info msg="StopPodSandbox for \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\" returns successfully" Feb 13 15:52:40.615258 containerd[1485]: time="2025-02-13T15:52:40.615188270Z" level=info msg="RemovePodSandbox for \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\"" Feb 13 15:52:40.615258 containerd[1485]: time="2025-02-13T15:52:40.615213628Z" level=info msg="Forcibly stopping sandbox \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\"" Feb 13 15:52:40.615320 containerd[1485]: time="2025-02-13T15:52:40.615276820Z" level=info msg="TearDown network for sandbox \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\" successfully" Feb 13 15:52:40.618879 containerd[1485]: time="2025-02-13T15:52:40.618848038Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:52:40.618929 containerd[1485]: time="2025-02-13T15:52:40.618886672Z" level=info msg="RemovePodSandbox \"a7720d7f10e468b058ef023c30df6b681af646727fcd088368a6ad4ad67381e7\" returns successfully" Feb 13 15:52:40.619123 containerd[1485]: time="2025-02-13T15:52:40.619096367Z" level=info msg="StopPodSandbox for \"b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70\"" Feb 13 15:52:40.619204 containerd[1485]: time="2025-02-13T15:52:40.619188553Z" level=info msg="TearDown network for sandbox \"b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70\" successfully" Feb 13 15:52:40.619204 containerd[1485]: time="2025-02-13T15:52:40.619198092Z" level=info msg="StopPodSandbox for \"b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70\" returns successfully" Feb 13 15:52:40.619480 containerd[1485]: time="2025-02-13T15:52:40.619457782Z" level=info msg="RemovePodSandbox for \"b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70\"" Feb 13 15:52:40.619615 containerd[1485]: time="2025-02-13T15:52:40.619550301Z" level=info msg="Forcibly stopping sandbox \"b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70\"" Feb 13 15:52:40.619731 containerd[1485]: time="2025-02-13T15:52:40.619685241Z" level=info msg="TearDown network for sandbox \"b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70\" successfully" Feb 13 15:52:40.623568 containerd[1485]: time="2025-02-13T15:52:40.623522421Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:52:40.623683 containerd[1485]: time="2025-02-13T15:52:40.623563770Z" level=info msg="RemovePodSandbox \"b7aa3ece4c8dab55163a1999a618c3231b63a623c178577e939f9426c1d41f70\" returns successfully" Feb 13 15:52:40.623955 containerd[1485]: time="2025-02-13T15:52:40.623929515Z" level=info msg="StopPodSandbox for \"ece0ba20c877e7e859caada2183dd1ddc532d1528c0bed4bc30b53e69cf4b158\"" Feb 13 15:52:40.624053 containerd[1485]: time="2025-02-13T15:52:40.624024067Z" level=info msg="TearDown network for sandbox \"ece0ba20c877e7e859caada2183dd1ddc532d1528c0bed4bc30b53e69cf4b158\" successfully" Feb 13 15:52:40.624090 containerd[1485]: time="2025-02-13T15:52:40.624038845Z" level=info msg="StopPodSandbox for \"ece0ba20c877e7e859caada2183dd1ddc532d1528c0bed4bc30b53e69cf4b158\" returns successfully" Feb 13 15:52:40.624329 containerd[1485]: time="2025-02-13T15:52:40.624285651Z" level=info msg="RemovePodSandbox for \"ece0ba20c877e7e859caada2183dd1ddc532d1528c0bed4bc30b53e69cf4b158\"" Feb 13 15:52:40.624329 containerd[1485]: time="2025-02-13T15:52:40.624307262Z" level=info msg="Forcibly stopping sandbox \"ece0ba20c877e7e859caada2183dd1ddc532d1528c0bed4bc30b53e69cf4b158\"" Feb 13 15:52:40.624414 containerd[1485]: time="2025-02-13T15:52:40.624369793Z" level=info msg="TearDown network for sandbox \"ece0ba20c877e7e859caada2183dd1ddc532d1528c0bed4bc30b53e69cf4b158\" successfully" Feb 13 15:52:40.627788 containerd[1485]: time="2025-02-13T15:52:40.627757257Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ece0ba20c877e7e859caada2183dd1ddc532d1528c0bed4bc30b53e69cf4b158\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:52:40.627991 containerd[1485]: time="2025-02-13T15:52:40.627798506Z" level=info msg="RemovePodSandbox \"ece0ba20c877e7e859caada2183dd1ddc532d1528c0bed4bc30b53e69cf4b158\" returns successfully" Feb 13 15:52:40.628089 containerd[1485]: time="2025-02-13T15:52:40.628060180Z" level=info msg="StopPodSandbox for \"caab72e2c1808dc4011bfaf1f2cb16fad58a3babf1202c175475afcef0063ed1\"" Feb 13 15:52:40.628168 containerd[1485]: time="2025-02-13T15:52:40.628145635Z" level=info msg="TearDown network for sandbox \"caab72e2c1808dc4011bfaf1f2cb16fad58a3babf1202c175475afcef0063ed1\" successfully" Feb 13 15:52:40.628168 containerd[1485]: time="2025-02-13T15:52:40.628158940Z" level=info msg="StopPodSandbox for \"caab72e2c1808dc4011bfaf1f2cb16fad58a3babf1202c175475afcef0063ed1\" returns successfully" Feb 13 15:52:40.628987 containerd[1485]: time="2025-02-13T15:52:40.628421507Z" level=info msg="RemovePodSandbox for \"caab72e2c1808dc4011bfaf1f2cb16fad58a3babf1202c175475afcef0063ed1\"" Feb 13 15:52:40.628987 containerd[1485]: time="2025-02-13T15:52:40.628446835Z" level=info msg="Forcibly stopping sandbox \"caab72e2c1808dc4011bfaf1f2cb16fad58a3babf1202c175475afcef0063ed1\"" Feb 13 15:52:40.628987 containerd[1485]: time="2025-02-13T15:52:40.628532460Z" level=info msg="TearDown network for sandbox \"caab72e2c1808dc4011bfaf1f2cb16fad58a3babf1202c175475afcef0063ed1\" successfully" Feb 13 15:52:40.632109 containerd[1485]: time="2025-02-13T15:52:40.632076165Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"caab72e2c1808dc4011bfaf1f2cb16fad58a3babf1202c175475afcef0063ed1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:52:40.632176 containerd[1485]: time="2025-02-13T15:52:40.632135489Z" level=info msg="RemovePodSandbox \"caab72e2c1808dc4011bfaf1f2cb16fad58a3babf1202c175475afcef0063ed1\" returns successfully" Feb 13 15:52:40.632556 containerd[1485]: time="2025-02-13T15:52:40.632414226Z" level=info msg="StopPodSandbox for \"2fac96938ffde6a93e81d8d00d323b4299dc6bbdddb08abfb5ce8cd0d008b3d9\"" Feb 13 15:52:40.632556 containerd[1485]: time="2025-02-13T15:52:40.632492567Z" level=info msg="TearDown network for sandbox \"2fac96938ffde6a93e81d8d00d323b4299dc6bbdddb08abfb5ce8cd0d008b3d9\" successfully" Feb 13 15:52:40.632556 containerd[1485]: time="2025-02-13T15:52:40.632510632Z" level=info msg="StopPodSandbox for \"2fac96938ffde6a93e81d8d00d323b4299dc6bbdddb08abfb5ce8cd0d008b3d9\" returns successfully" Feb 13 15:52:40.632833 containerd[1485]: time="2025-02-13T15:52:40.632817212Z" level=info msg="RemovePodSandbox for \"2fac96938ffde6a93e81d8d00d323b4299dc6bbdddb08abfb5ce8cd0d008b3d9\"" Feb 13 15:52:40.632885 containerd[1485]: time="2025-02-13T15:52:40.632833403Z" level=info msg="Forcibly stopping sandbox \"2fac96938ffde6a93e81d8d00d323b4299dc6bbdddb08abfb5ce8cd0d008b3d9\"" Feb 13 15:52:40.632947 containerd[1485]: time="2025-02-13T15:52:40.632923798Z" level=info msg="TearDown network for sandbox \"2fac96938ffde6a93e81d8d00d323b4299dc6bbdddb08abfb5ce8cd0d008b3d9\" successfully" Feb 13 15:52:40.636437 containerd[1485]: time="2025-02-13T15:52:40.636413449Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2fac96938ffde6a93e81d8d00d323b4299dc6bbdddb08abfb5ce8cd0d008b3d9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:52:40.636437 containerd[1485]: time="2025-02-13T15:52:40.636443746Z" level=info msg="RemovePodSandbox \"2fac96938ffde6a93e81d8d00d323b4299dc6bbdddb08abfb5ce8cd0d008b3d9\" returns successfully" Feb 13 15:52:40.636754 containerd[1485]: time="2025-02-13T15:52:40.636705822Z" level=info msg="StopPodSandbox for \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\"" Feb 13 15:52:40.636806 containerd[1485]: time="2025-02-13T15:52:40.636791547Z" level=info msg="TearDown network for sandbox \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\" successfully" Feb 13 15:52:40.636829 containerd[1485]: time="2025-02-13T15:52:40.636802617Z" level=info msg="StopPodSandbox for \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\" returns successfully" Feb 13 15:52:40.637087 containerd[1485]: time="2025-02-13T15:52:40.637065304Z" level=info msg="RemovePodSandbox for \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\"" Feb 13 15:52:40.637136 containerd[1485]: time="2025-02-13T15:52:40.637089450Z" level=info msg="Forcibly stopping sandbox \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\"" Feb 13 15:52:40.637183 containerd[1485]: time="2025-02-13T15:52:40.637166579Z" level=info msg="TearDown network for sandbox \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\" successfully" Feb 13 15:52:40.640568 containerd[1485]: time="2025-02-13T15:52:40.640539926Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:52:40.640619 containerd[1485]: time="2025-02-13T15:52:40.640574453Z" level=info msg="RemovePodSandbox \"3a4ea214d29a9c3c19eae62f74f615a83f04e59deca07529a3959ac6b331422b\" returns successfully" Feb 13 15:52:40.640919 containerd[1485]: time="2025-02-13T15:52:40.640807230Z" level=info msg="StopPodSandbox for \"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\"" Feb 13 15:52:40.641028 containerd[1485]: time="2025-02-13T15:52:40.640996044Z" level=info msg="TearDown network for sandbox \"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\" successfully" Feb 13 15:52:40.641028 containerd[1485]: time="2025-02-13T15:52:40.641012766Z" level=info msg="StopPodSandbox for \"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\" returns successfully" Feb 13 15:52:40.641296 containerd[1485]: time="2025-02-13T15:52:40.641253179Z" level=info msg="RemovePodSandbox for \"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\"" Feb 13 15:52:40.641296 containerd[1485]: time="2025-02-13T15:52:40.641273308Z" level=info msg="Forcibly stopping sandbox \"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\"" Feb 13 15:52:40.641369 containerd[1485]: time="2025-02-13T15:52:40.641345217Z" level=info msg="TearDown network for sandbox \"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\" successfully" Feb 13 15:52:40.646681 containerd[1485]: time="2025-02-13T15:52:40.646642579Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:52:40.646783 containerd[1485]: time="2025-02-13T15:52:40.646696022Z" level=info msg="RemovePodSandbox \"24f586fa5a6aec2872b7082331f72c8ed7fb753042673e71aede6259db23de0e\" returns successfully" Feb 13 15:52:40.647073 containerd[1485]: time="2025-02-13T15:52:40.647015748Z" level=info msg="StopPodSandbox for \"26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e\"" Feb 13 15:52:40.647173 containerd[1485]: time="2025-02-13T15:52:40.647143003Z" level=info msg="TearDown network for sandbox \"26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e\" successfully" Feb 13 15:52:40.647173 containerd[1485]: time="2025-02-13T15:52:40.647163272Z" level=info msg="StopPodSandbox for \"26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e\" returns successfully" Feb 13 15:52:40.647507 containerd[1485]: time="2025-02-13T15:52:40.647482046Z" level=info msg="RemovePodSandbox for \"26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e\"" Feb 13 15:52:40.647546 containerd[1485]: time="2025-02-13T15:52:40.647513997Z" level=info msg="Forcibly stopping sandbox \"26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e\"" Feb 13 15:52:40.647645 containerd[1485]: time="2025-02-13T15:52:40.647596266Z" level=info msg="TearDown network for sandbox \"26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e\" successfully" Feb 13 15:52:40.651908 containerd[1485]: time="2025-02-13T15:52:40.651866099Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:52:40.652285 containerd[1485]: time="2025-02-13T15:52:40.651966883Z" level=info msg="RemovePodSandbox \"26ffc5fde004fa27ea099edb72494831e3aeb618bfc053bb51bdc25e463b376e\" returns successfully" Feb 13 15:52:40.652943 containerd[1485]: time="2025-02-13T15:52:40.652396491Z" level=info msg="StopPodSandbox for \"974f2e17d153fc4e2c7a7c80c26980ce937174687900b289462434d666fa80f4\"" Feb 13 15:52:40.652943 containerd[1485]: time="2025-02-13T15:52:40.652535699Z" level=info msg="TearDown network for sandbox \"974f2e17d153fc4e2c7a7c80c26980ce937174687900b289462434d666fa80f4\" successfully" Feb 13 15:52:40.652943 containerd[1485]: time="2025-02-13T15:52:40.652548995Z" level=info msg="StopPodSandbox for \"974f2e17d153fc4e2c7a7c80c26980ce937174687900b289462434d666fa80f4\" returns successfully" Feb 13 15:52:40.653248 containerd[1485]: time="2025-02-13T15:52:40.653230387Z" level=info msg="RemovePodSandbox for \"974f2e17d153fc4e2c7a7c80c26980ce937174687900b289462434d666fa80f4\"" Feb 13 15:52:40.653332 containerd[1485]: time="2025-02-13T15:52:40.653313226Z" level=info msg="Forcibly stopping sandbox \"974f2e17d153fc4e2c7a7c80c26980ce937174687900b289462434d666fa80f4\"" Feb 13 15:52:40.653487 containerd[1485]: time="2025-02-13T15:52:40.653449679Z" level=info msg="TearDown network for sandbox \"974f2e17d153fc4e2c7a7c80c26980ce937174687900b289462434d666fa80f4\" successfully" Feb 13 15:52:40.658451 containerd[1485]: time="2025-02-13T15:52:40.658423258Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"974f2e17d153fc4e2c7a7c80c26980ce937174687900b289462434d666fa80f4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:52:40.658515 containerd[1485]: time="2025-02-13T15:52:40.658465861Z" level=info msg="RemovePodSandbox \"974f2e17d153fc4e2c7a7c80c26980ce937174687900b289462434d666fa80f4\" returns successfully" Feb 13 15:52:40.658809 containerd[1485]: time="2025-02-13T15:52:40.658778111Z" level=info msg="StopPodSandbox for \"6866814210b1f4e770088dd2fa2c9a53cd4703acb433bff5f71bc8017784a2d9\"" Feb 13 15:52:40.658922 containerd[1485]: time="2025-02-13T15:52:40.658869658Z" level=info msg="TearDown network for sandbox \"6866814210b1f4e770088dd2fa2c9a53cd4703acb433bff5f71bc8017784a2d9\" successfully" Feb 13 15:52:40.658922 containerd[1485]: time="2025-02-13T15:52:40.658908683Z" level=info msg="StopPodSandbox for \"6866814210b1f4e770088dd2fa2c9a53cd4703acb433bff5f71bc8017784a2d9\" returns successfully" Feb 13 15:52:40.659287 containerd[1485]: time="2025-02-13T15:52:40.659242716Z" level=info msg="RemovePodSandbox for \"6866814210b1f4e770088dd2fa2c9a53cd4703acb433bff5f71bc8017784a2d9\"" Feb 13 15:52:40.659287 containerd[1485]: time="2025-02-13T15:52:40.659270890Z" level=info msg="Forcibly stopping sandbox \"6866814210b1f4e770088dd2fa2c9a53cd4703acb433bff5f71bc8017784a2d9\"" Feb 13 15:52:40.659395 containerd[1485]: time="2025-02-13T15:52:40.659353560Z" level=info msg="TearDown network for sandbox \"6866814210b1f4e770088dd2fa2c9a53cd4703acb433bff5f71bc8017784a2d9\" successfully" Feb 13 15:52:40.663109 containerd[1485]: time="2025-02-13T15:52:40.663075277Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6866814210b1f4e770088dd2fa2c9a53cd4703acb433bff5f71bc8017784a2d9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:52:40.663161 containerd[1485]: time="2025-02-13T15:52:40.663127128Z" level=info msg="RemovePodSandbox \"6866814210b1f4e770088dd2fa2c9a53cd4703acb433bff5f71bc8017784a2d9\" returns successfully" Feb 13 15:52:40.663526 containerd[1485]: time="2025-02-13T15:52:40.663376719Z" level=info msg="StopPodSandbox for \"41075bd5c5a54df827bd934f566aa5fa060a9e1df7e2165d52f5a0722dff2099\"" Feb 13 15:52:40.663526 containerd[1485]: time="2025-02-13T15:52:40.663463696Z" level=info msg="TearDown network for sandbox \"41075bd5c5a54df827bd934f566aa5fa060a9e1df7e2165d52f5a0722dff2099\" successfully" Feb 13 15:52:40.663526 containerd[1485]: time="2025-02-13T15:52:40.663473996Z" level=info msg="StopPodSandbox for \"41075bd5c5a54df827bd934f566aa5fa060a9e1df7e2165d52f5a0722dff2099\" returns successfully" Feb 13 15:52:40.663720 containerd[1485]: time="2025-02-13T15:52:40.663693759Z" level=info msg="RemovePodSandbox for \"41075bd5c5a54df827bd934f566aa5fa060a9e1df7e2165d52f5a0722dff2099\"" Feb 13 15:52:40.663755 containerd[1485]: time="2025-02-13T15:52:40.663718056Z" level=info msg="Forcibly stopping sandbox \"41075bd5c5a54df827bd934f566aa5fa060a9e1df7e2165d52f5a0722dff2099\"" Feb 13 15:52:40.663851 containerd[1485]: time="2025-02-13T15:52:40.663799863Z" level=info msg="TearDown network for sandbox \"41075bd5c5a54df827bd934f566aa5fa060a9e1df7e2165d52f5a0722dff2099\" successfully" Feb 13 15:52:40.667602 containerd[1485]: time="2025-02-13T15:52:40.667462518Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41075bd5c5a54df827bd934f566aa5fa060a9e1df7e2165d52f5a0722dff2099\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:52:40.667602 containerd[1485]: time="2025-02-13T15:52:40.667509618Z" level=info msg="RemovePodSandbox \"41075bd5c5a54df827bd934f566aa5fa060a9e1df7e2165d52f5a0722dff2099\" returns successfully" Feb 13 15:52:40.668593 containerd[1485]: time="2025-02-13T15:52:40.668511077Z" level=info msg="StopPodSandbox for \"6b9c49696b61cbc1a898b97dcb37cdb949d349feb359026807f0484fdfdcb5ed\"" Feb 13 15:52:40.669029 containerd[1485]: time="2025-02-13T15:52:40.668921908Z" level=info msg="TearDown network for sandbox \"6b9c49696b61cbc1a898b97dcb37cdb949d349feb359026807f0484fdfdcb5ed\" successfully" Feb 13 15:52:40.669029 containerd[1485]: time="2025-02-13T15:52:40.668939763Z" level=info msg="StopPodSandbox for \"6b9c49696b61cbc1a898b97dcb37cdb949d349feb359026807f0484fdfdcb5ed\" returns successfully" Feb 13 15:52:40.671345 containerd[1485]: time="2025-02-13T15:52:40.669343480Z" level=info msg="RemovePodSandbox for \"6b9c49696b61cbc1a898b97dcb37cdb949d349feb359026807f0484fdfdcb5ed\"" Feb 13 15:52:40.671345 containerd[1485]: time="2025-02-13T15:52:40.669365623Z" level=info msg="Forcibly stopping sandbox \"6b9c49696b61cbc1a898b97dcb37cdb949d349feb359026807f0484fdfdcb5ed\"" Feb 13 15:52:40.671345 containerd[1485]: time="2025-02-13T15:52:40.669457400Z" level=info msg="TearDown network for sandbox \"6b9c49696b61cbc1a898b97dcb37cdb949d349feb359026807f0484fdfdcb5ed\" successfully" Feb 13 15:52:40.674414 containerd[1485]: time="2025-02-13T15:52:40.674386293Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6b9c49696b61cbc1a898b97dcb37cdb949d349feb359026807f0484fdfdcb5ed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:52:40.674505 containerd[1485]: time="2025-02-13T15:52:40.674418153Z" level=info msg="RemovePodSandbox \"6b9c49696b61cbc1a898b97dcb37cdb949d349feb359026807f0484fdfdcb5ed\" returns successfully" Feb 13 15:52:41.388738 kubelet[2674]: I0213 15:52:41.388709 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:52:41.391120 kubelet[2674]: I0213 15:52:41.391101 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:52:42.206161 containerd[1485]: time="2025-02-13T15:52:42.206108382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:42.207022 containerd[1485]: time="2025-02-13T15:52:42.206934771Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 15:52:42.207986 containerd[1485]: time="2025-02-13T15:52:42.207950004Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:42.210336 containerd[1485]: time="2025-02-13T15:52:42.210299853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:42.211024 containerd[1485]: time="2025-02-13T15:52:42.210989319Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.056612132s" Feb 13 15:52:42.211073 containerd[1485]: time="2025-02-13T15:52:42.211027232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 15:52:42.211628 containerd[1485]: time="2025-02-13T15:52:42.211561259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 15:52:42.221828 containerd[1485]: time="2025-02-13T15:52:42.221642940Z" level=info msg="CreateContainer within sandbox \"5147c48009cae53149b18ea4970a52face382505e3f841febf32b38c4644c513\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 15:52:42.237242 containerd[1485]: time="2025-02-13T15:52:42.237197566Z" level=info msg="CreateContainer within sandbox \"5147c48009cae53149b18ea4970a52face382505e3f841febf32b38c4644c513\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7c095398f665a2d99563e01cb5bfb5056197c6b335aa1a12d8ad68ff5fb198bb\"" Feb 13 15:52:42.237978 containerd[1485]: time="2025-02-13T15:52:42.237886451Z" level=info msg="StartContainer for \"7c095398f665a2d99563e01cb5bfb5056197c6b335aa1a12d8ad68ff5fb198bb\"" Feb 13 15:52:42.265235 systemd[1]: Started cri-containerd-7c095398f665a2d99563e01cb5bfb5056197c6b335aa1a12d8ad68ff5fb198bb.scope - libcontainer container 7c095398f665a2d99563e01cb5bfb5056197c6b335aa1a12d8ad68ff5fb198bb. 
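The long run of StopPodSandbox / "Forcibly stopping sandbox" / RemovePodSandbox pairs that ends above is containerd clearing out pod sandboxes whose records are already gone, typically on behalf of the kubelet's sandbox garbage collection; the "not found" warnings only mean the status lookup for the event failed, and each removal still returns successfully. The same stop-then-remove cycle can be exercised by hand through the CRI. The sketch below is illustrative only and is not part of the logged boot: it assumes crictl is installed and configured for this node's CRI socket, and it sweeps every sandbox the runtime reports rather than only stale ones.

```python
#!/usr/bin/env python3
"""Illustrative sketch (not part of the logged boot): drive the same
StopPodSandbox -> RemovePodSandbox cycle by hand via crictl.
Assumes crictl is installed and configured for this node's CRI socket."""
import subprocess


def crictl(*args: str) -> str:
    """Run a crictl subcommand and return its stdout."""
    return subprocess.run(
        ["crictl", *args], check=True, capture_output=True, text=True
    ).stdout


def sweep_sandboxes() -> None:
    # `crictl pods -q` prints one pod sandbox ID per line.
    for sandbox_id in crictl("pods", "-q").split():
        crictl("stopp", sandbox_id)  # stop the sandbox (tears down its network)
        crictl("rmp", sandbox_id)    # remove the sandbox record


if __name__ == "__main__":
    sweep_sandboxes()
```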
Feb 13 15:52:42.307182 containerd[1485]: time="2025-02-13T15:52:42.307140487Z" level=info msg="StartContainer for \"7c095398f665a2d99563e01cb5bfb5056197c6b335aa1a12d8ad68ff5fb198bb\" returns successfully" Feb 13 15:52:42.487626 kubelet[2674]: I0213 15:52:42.487093 2674 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-68d59db744-jwpsr" podStartSLOduration=27.617407353 podStartE2EDuration="35.487041892s" podCreationTimestamp="2025-02-13 15:52:07 +0000 UTC" firstStartedPulling="2025-02-13 15:52:34.34172918 +0000 UTC m=+53.997104889" lastFinishedPulling="2025-02-13 15:52:42.211363729 +0000 UTC m=+61.866739428" observedRunningTime="2025-02-13 15:52:42.4866194 +0000 UTC m=+62.141995109" watchObservedRunningTime="2025-02-13 15:52:42.487041892 +0000 UTC m=+62.142417602" Feb 13 15:52:42.487626 kubelet[2674]: I0213 15:52:42.487277 2674 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7db6857c7b-q5kq7" podStartSLOduration=29.725595194 podStartE2EDuration="35.487257597s" podCreationTimestamp="2025-02-13 15:52:07 +0000 UTC" firstStartedPulling="2025-02-13 15:52:33.89381263 +0000 UTC m=+53.549188339" lastFinishedPulling="2025-02-13 15:52:39.655475033 +0000 UTC m=+59.310850742" observedRunningTime="2025-02-13 15:52:40.402543425 +0000 UTC m=+60.057919124" watchObservedRunningTime="2025-02-13 15:52:42.487257597 +0000 UTC m=+62.142633306" Feb 13 15:52:43.969827 containerd[1485]: time="2025-02-13T15:52:43.969774756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:43.970610 containerd[1485]: time="2025-02-13T15:52:43.970569523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 15:52:43.971774 containerd[1485]: time="2025-02-13T15:52:43.971716147Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:43.973782 containerd[1485]: time="2025-02-13T15:52:43.973753002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:43.974447 containerd[1485]: time="2025-02-13T15:52:43.974401538Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.762801564s" Feb 13 15:52:43.974447 containerd[1485]: time="2025-02-13T15:52:43.974444180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 15:52:43.976220 containerd[1485]: time="2025-02-13T15:52:43.976185397Z" level=info msg="CreateContainer within sandbox \"093a7de41aebed2185b80b57c0d1787459ad5d691c6afb3c58645441a2c66148\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 15:52:43.989866 containerd[1485]: 
time="2025-02-13T15:52:43.989830574Z" level=info msg="CreateContainer within sandbox \"093a7de41aebed2185b80b57c0d1787459ad5d691c6afb3c58645441a2c66148\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3a577ca5f9569f8ccd33baf25f370d8429cd01c4b692aaf14b917038d3ae784a\"" Feb 13 15:52:43.990268 containerd[1485]: time="2025-02-13T15:52:43.990230002Z" level=info msg="StartContainer for \"3a577ca5f9569f8ccd33baf25f370d8429cd01c4b692aaf14b917038d3ae784a\"" Feb 13 15:52:44.023199 systemd[1]: Started cri-containerd-3a577ca5f9569f8ccd33baf25f370d8429cd01c4b692aaf14b917038d3ae784a.scope - libcontainer container 3a577ca5f9569f8ccd33baf25f370d8429cd01c4b692aaf14b917038d3ae784a. Feb 13 15:52:44.057537 containerd[1485]: time="2025-02-13T15:52:44.057483336Z" level=info msg="StartContainer for \"3a577ca5f9569f8ccd33baf25f370d8429cd01c4b692aaf14b917038d3ae784a\" returns successfully" Feb 13 15:52:44.490577 kubelet[2674]: I0213 15:52:44.490531 2674 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 15:52:44.491655 kubelet[2674]: I0213 15:52:44.491629 2674 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 15:52:44.636420 systemd[1]: Started sshd@17-10.0.0.80:22-10.0.0.1:49586.service - OpenSSH per-connection server daemon (10.0.0.1:49586). Feb 13 15:52:44.690949 sshd[6294]: Accepted publickey for core from 10.0.0.1 port 49586 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:52:44.692823 sshd-session[6294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:52:44.697035 systemd-logind[1471]: New session 18 of user core. Feb 13 15:52:44.712177 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:52:44.838710 sshd[6296]: Connection closed by 10.0.0.1 port 49586 Feb 13 15:52:44.839014 sshd-session[6294]: pam_unix(sshd:session): session closed for user core Feb 13 15:52:44.843564 systemd[1]: sshd@17-10.0.0.80:22-10.0.0.1:49586.service: Deactivated successfully. Feb 13 15:52:44.845795 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:52:44.846507 systemd-logind[1471]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:52:44.847565 systemd-logind[1471]: Removed session 18. Feb 13 15:52:49.850735 systemd[1]: Started sshd@18-10.0.0.80:22-10.0.0.1:45198.service - OpenSSH per-connection server daemon (10.0.0.1:45198). Feb 13 15:52:49.893278 sshd[6308]: Accepted publickey for core from 10.0.0.1 port 45198 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM Feb 13 15:52:49.894740 sshd-session[6308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:52:49.898964 systemd-logind[1471]: New session 19 of user core. Feb 13 15:52:49.905204 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:52:50.012691 sshd[6310]: Connection closed by 10.0.0.1 port 45198 Feb 13 15:52:50.013144 sshd-session[6308]: pam_unix(sshd:session): session closed for user core Feb 13 15:52:50.028110 systemd[1]: sshd@18-10.0.0.80:22-10.0.0.1:45198.service: Deactivated successfully. Feb 13 15:52:50.029978 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:52:50.031492 systemd-logind[1471]: Session 19 logged out. Waiting for processes to exit. 
Feb 13 15:52:44.636420 systemd[1]: Started sshd@17-10.0.0.80:22-10.0.0.1:49586.service - OpenSSH per-connection server daemon (10.0.0.1:49586).
Feb 13 15:52:44.690949 sshd[6294]: Accepted publickey for core from 10.0.0.1 port 49586 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM
Feb 13 15:52:44.692823 sshd-session[6294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:52:44.697035 systemd-logind[1471]: New session 18 of user core.
Feb 13 15:52:44.712177 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 15:52:44.838710 sshd[6296]: Connection closed by 10.0.0.1 port 49586
Feb 13 15:52:44.839014 sshd-session[6294]: pam_unix(sshd:session): session closed for user core
Feb 13 15:52:44.843564 systemd[1]: sshd@17-10.0.0.80:22-10.0.0.1:49586.service: Deactivated successfully.
Feb 13 15:52:44.845795 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 15:52:44.846507 systemd-logind[1471]: Session 18 logged out. Waiting for processes to exit.
Feb 13 15:52:44.847565 systemd-logind[1471]: Removed session 18.
Feb 13 15:52:49.850735 systemd[1]: Started sshd@18-10.0.0.80:22-10.0.0.1:45198.service - OpenSSH per-connection server daemon (10.0.0.1:45198).
Feb 13 15:52:49.893278 sshd[6308]: Accepted publickey for core from 10.0.0.1 port 45198 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM
Feb 13 15:52:49.894740 sshd-session[6308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:52:49.898964 systemd-logind[1471]: New session 19 of user core.
Feb 13 15:52:49.905204 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 15:52:50.012691 sshd[6310]: Connection closed by 10.0.0.1 port 45198
Feb 13 15:52:50.013144 sshd-session[6308]: pam_unix(sshd:session): session closed for user core
Feb 13 15:52:50.028110 systemd[1]: sshd@18-10.0.0.80:22-10.0.0.1:45198.service: Deactivated successfully.
Feb 13 15:52:50.029978 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 15:52:50.031492 systemd-logind[1471]: Session 19 logged out. Waiting for processes to exit.
Feb 13 15:52:50.032892 systemd[1]: Started sshd@19-10.0.0.80:22-10.0.0.1:45200.service - OpenSSH per-connection server daemon (10.0.0.1:45200).
Feb 13 15:52:50.033870 systemd-logind[1471]: Removed session 19.
Feb 13 15:52:50.072825 sshd[6322]: Accepted publickey for core from 10.0.0.1 port 45200 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM
Feb 13 15:52:50.074087 sshd-session[6322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:52:50.077982 systemd-logind[1471]: New session 20 of user core.
Feb 13 15:52:50.087177 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 15:52:50.350324 sshd[6324]: Connection closed by 10.0.0.1 port 45200
Feb 13 15:52:50.350701 sshd-session[6322]: pam_unix(sshd:session): session closed for user core
Feb 13 15:52:50.367067 systemd[1]: sshd@19-10.0.0.80:22-10.0.0.1:45200.service: Deactivated successfully.
Feb 13 15:52:50.368988 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 15:52:50.370605 systemd-logind[1471]: Session 20 logged out. Waiting for processes to exit.
Feb 13 15:52:50.372536 systemd[1]: Started sshd@20-10.0.0.80:22-10.0.0.1:45214.service - OpenSSH per-connection server daemon (10.0.0.1:45214).
Feb 13 15:52:50.373610 systemd-logind[1471]: Removed session 20.
Feb 13 15:52:50.427405 sshd[6334]: Accepted publickey for core from 10.0.0.1 port 45214 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM
Feb 13 15:52:50.428813 sshd-session[6334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:52:50.432737 systemd-logind[1471]: New session 21 of user core.
Feb 13 15:52:50.448178 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 15:52:51.334726 kubelet[2674]: E0213 15:52:51.334699 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:52:51.349137 kubelet[2674]: I0213 15:52:51.349083 2674 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-g6vd2" podStartSLOduration=34.07803874 podStartE2EDuration="44.349017963s" podCreationTimestamp="2025-02-13 15:52:07 +0000 UTC" firstStartedPulling="2025-02-13 15:52:33.703719737 +0000 UTC m=+53.359095446" lastFinishedPulling="2025-02-13 15:52:43.97469896 +0000 UTC m=+63.630074669" observedRunningTime="2025-02-13 15:52:44.413807418 +0000 UTC m=+64.069183127" watchObservedRunningTime="2025-02-13 15:52:51.349017963 +0000 UTC m=+71.004393682"
Feb 13 15:52:52.415620 sshd[6336]: Connection closed by 10.0.0.1 port 45214
Feb 13 15:52:52.416538 sshd-session[6334]: pam_unix(sshd:session): session closed for user core
Feb 13 15:52:52.426729 systemd[1]: sshd@20-10.0.0.80:22-10.0.0.1:45214.service: Deactivated successfully.
Feb 13 15:52:52.429741 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 15:52:52.430534 systemd-logind[1471]: Session 21 logged out. Waiting for processes to exit.
Feb 13 15:52:52.438596 systemd[1]: Started sshd@21-10.0.0.80:22-10.0.0.1:45230.service - OpenSSH per-connection server daemon (10.0.0.1:45230).
Feb 13 15:52:52.439441 systemd-logind[1471]: Removed session 21.
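The dns.go:153 error interleaved with the sessions above records kubelet trimming the node's resolver list: only the first few nameserver entries from resolv.conf are applied and the rest are omitted, which is why the same "applied nameserver line" keeps reappearing. A minimal Go sketch of that trimming, assuming a cap of three (matching the three addresses shown in the applied line) and the conventional /etc/resolv.conf path, neither of which is stated explicitly in the log:

```go
// Sketch of the capping behaviour behind the "Nameserver limits exceeded"
// entries: list which nameservers would be applied and which omitted.
// The limit of 3 and the resolv.conf path are assumptions for illustration.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	const limit = 3
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > limit {
		fmt.Printf("applied: %v, omitted: %v\n", servers[:limit], servers[limit:])
	} else {
		fmt.Printf("applied: %v (within limit)\n", servers)
	}
}
```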
Feb 13 15:52:52.502488 sshd[6377]: Accepted publickey for core from 10.0.0.1 port 45230 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM
Feb 13 15:52:52.504702 sshd-session[6377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:52:52.514326 systemd-logind[1471]: New session 22 of user core.
Feb 13 15:52:52.526280 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 15:52:52.848756 sshd[6379]: Connection closed by 10.0.0.1 port 45230
Feb 13 15:52:52.848230 sshd-session[6377]: pam_unix(sshd:session): session closed for user core
Feb 13 15:52:52.880271 systemd[1]: sshd@21-10.0.0.80:22-10.0.0.1:45230.service: Deactivated successfully.
Feb 13 15:52:52.886642 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 15:52:52.890459 systemd-logind[1471]: Session 22 logged out. Waiting for processes to exit.
Feb 13 15:52:52.907334 systemd[1]: Started sshd@22-10.0.0.80:22-10.0.0.1:45232.service - OpenSSH per-connection server daemon (10.0.0.1:45232).
Feb 13 15:52:52.909258 systemd-logind[1471]: Removed session 22.
Feb 13 15:52:52.983999 sshd[6390]: Accepted publickey for core from 10.0.0.1 port 45232 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM
Feb 13 15:52:52.986688 sshd-session[6390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:52:53.005475 systemd-logind[1471]: New session 23 of user core.
Feb 13 15:52:53.018219 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 15:52:53.228093 sshd[6392]: Connection closed by 10.0.0.1 port 45232
Feb 13 15:52:53.230692 sshd-session[6390]: pam_unix(sshd:session): session closed for user core
Feb 13 15:52:53.233888 systemd-logind[1471]: Session 23 logged out. Waiting for processes to exit.
Feb 13 15:52:53.238800 systemd[1]: sshd@22-10.0.0.80:22-10.0.0.1:45232.service: Deactivated successfully.
Feb 13 15:52:53.243634 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 15:52:53.258270 systemd-logind[1471]: Removed session 23.
Feb 13 15:52:54.581258 kubelet[2674]: I0213 15:52:54.579634 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 15:52:58.245315 systemd[1]: Started sshd@23-10.0.0.80:22-10.0.0.1:38944.service - OpenSSH per-connection server daemon (10.0.0.1:38944).
Feb 13 15:52:58.286058 sshd[6436]: Accepted publickey for core from 10.0.0.1 port 38944 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM
Feb 13 15:52:58.287421 sshd-session[6436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:52:58.291976 systemd-logind[1471]: New session 24 of user core.
Feb 13 15:52:58.299279 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 15:52:58.604034 sshd[6438]: Connection closed by 10.0.0.1 port 38944
Feb 13 15:52:58.604345 sshd-session[6436]: pam_unix(sshd:session): session closed for user core
Feb 13 15:52:58.608651 systemd[1]: sshd@23-10.0.0.80:22-10.0.0.1:38944.service: Deactivated successfully.
Feb 13 15:52:58.610758 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 15:52:58.611344 systemd-logind[1471]: Session 24 logged out. Waiting for processes to exit.
Feb 13 15:52:58.612174 systemd-logind[1471]: Removed session 24.
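Each SSH connection in these entries runs as its own transient unit, and the instance name visibly encodes the endpoints, e.g. sshd@22-10.0.0.80:22-10.0.0.1:45232.service pairs the listener 10.0.0.80:22 with the peer 10.0.0.1:45232. A small Go sketch that splits one such name into those parts; the field layout is inferred from the names in this log rather than taken from systemd documentation:

```go
// Sketch: pull the connection counter, local endpoint, and peer endpoint out
// of a per-connection sshd unit name as seen in the log. IPv4-only split on
// "-", based solely on the names visible in these entries.
package main

import (
	"fmt"
	"strings"
)

func main() {
	unit := "sshd@22-10.0.0.80:22-10.0.0.1:45232.service"
	instance := strings.TrimSuffix(strings.TrimPrefix(unit, "sshd@"), ".service")
	parts := strings.SplitN(instance, "-", 3) // counter, local addr:port, peer addr:port
	if len(parts) == 3 {
		fmt.Printf("connection #%s  local=%s  peer=%s\n", parts[0], parts[1], parts[2])
	}
}
```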
Feb 13 15:52:59.423920 kubelet[2674]: E0213 15:52:59.423885 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:53:03.424194 kubelet[2674]: E0213 15:53:03.424145 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:53:03.619454 systemd[1]: Started sshd@24-10.0.0.80:22-10.0.0.1:38954.service - OpenSSH per-connection server daemon (10.0.0.1:38954).
Feb 13 15:53:03.672930 sshd[6471]: Accepted publickey for core from 10.0.0.1 port 38954 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM
Feb 13 15:53:03.674514 sshd-session[6471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:53:03.678985 systemd-logind[1471]: New session 25 of user core.
Feb 13 15:53:03.691178 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 15:53:03.848030 sshd[6473]: Connection closed by 10.0.0.1 port 38954
Feb 13 15:53:03.848426 sshd-session[6471]: pam_unix(sshd:session): session closed for user core
Feb 13 15:53:03.852722 systemd[1]: sshd@24-10.0.0.80:22-10.0.0.1:38954.service: Deactivated successfully.
Feb 13 15:53:03.855204 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 15:53:03.856227 systemd-logind[1471]: Session 25 logged out. Waiting for processes to exit.
Feb 13 15:53:03.857237 systemd-logind[1471]: Removed session 25.
Feb 13 15:53:05.423480 kubelet[2674]: E0213 15:53:05.423438 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:53:08.866371 systemd[1]: Started sshd@25-10.0.0.80:22-10.0.0.1:32964.service - OpenSSH per-connection server daemon (10.0.0.1:32964).
Feb 13 15:53:08.902432 sshd[6491]: Accepted publickey for core from 10.0.0.1 port 32964 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM
Feb 13 15:53:08.904320 sshd-session[6491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:53:08.908733 systemd-logind[1471]: New session 26 of user core.
Feb 13 15:53:08.915172 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 15:53:09.026209 sshd[6493]: Connection closed by 10.0.0.1 port 32964
Feb 13 15:53:09.026578 sshd-session[6491]: pam_unix(sshd:session): session closed for user core
Feb 13 15:53:09.030714 systemd[1]: sshd@25-10.0.0.80:22-10.0.0.1:32964.service: Deactivated successfully.
Feb 13 15:53:09.032746 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 15:53:09.033500 systemd-logind[1471]: Session 26 logged out. Waiting for processes to exit.
Feb 13 15:53:09.034486 systemd-logind[1471]: Removed session 26.
Feb 13 15:53:14.043407 systemd[1]: Started sshd@26-10.0.0.80:22-10.0.0.1:32966.service - OpenSSH per-connection server daemon (10.0.0.1:32966).
Feb 13 15:53:14.100009 sshd[6511]: Accepted publickey for core from 10.0.0.1 port 32966 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM
Feb 13 15:53:14.101745 sshd-session[6511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:53:14.105643 systemd-logind[1471]: New session 27 of user core.
Feb 13 15:53:14.113177 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 15:53:14.227906 sshd[6513]: Connection closed by 10.0.0.1 port 32966
Feb 13 15:53:14.228293 sshd-session[6511]: pam_unix(sshd:session): session closed for user core
Feb 13 15:53:14.231900 systemd[1]: sshd@26-10.0.0.80:22-10.0.0.1:32966.service: Deactivated successfully.
Feb 13 15:53:14.233783 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 15:53:14.234619 systemd-logind[1471]: Session 27 logged out. Waiting for processes to exit.
Feb 13 15:53:14.235490 systemd-logind[1471]: Removed session 27.
Feb 13 15:53:14.737517 kubelet[2674]: I0213 15:53:14.737464 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 15:53:19.240415 systemd[1]: Started sshd@27-10.0.0.80:22-10.0.0.1:47466.service - OpenSSH per-connection server daemon (10.0.0.1:47466).
Feb 13 15:53:19.283330 sshd[6529]: Accepted publickey for core from 10.0.0.1 port 47466 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM
Feb 13 15:53:19.284978 sshd-session[6529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:53:19.288749 systemd-logind[1471]: New session 28 of user core.
Feb 13 15:53:19.304168 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 15:53:19.421953 sshd[6532]: Connection closed by 10.0.0.1 port 47466
Feb 13 15:53:19.422325 sshd-session[6529]: pam_unix(sshd:session): session closed for user core
Feb 13 15:53:19.423154 kubelet[2674]: E0213 15:53:19.423126 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:53:19.426870 systemd[1]: sshd@27-10.0.0.80:22-10.0.0.1:47466.service: Deactivated successfully.
Feb 13 15:53:19.429057 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 15:53:19.429627 systemd-logind[1471]: Session 28 logged out. Waiting for processes to exit.
Feb 13 15:53:19.430399 systemd-logind[1471]: Removed session 28.